The idea is this. What you take yourself to be at this very moment is actually a kind of informatic illusion.
For me, the picture has come to seem obvious, but I understand that this is the case for everyone with a theory to peddle. So the best I can do is explain why it seems obvious to me.
One of the things I have continually failed to do is present my take, Blind Brain Theory (BBT), in terms that systematically relate it to other well-known philosophical positions. The reason for this, I’m quite certain, is laziness on my part. As a nonacademic, I never have to exposit what I read for the purposes of teaching, and so the literature tends to fall into the impressionistic background of my theorization. I actually think this is liberating, insofar as it has insulated me from many habitual ways of thinking through problems. I’m not quite sure I would have been able to connect the dots the way I have chasing the institutional preoccupations of academe. But it has certainly made the task of communicating my views quite a bit harder than it perhaps should be.
So I’ve decided to bite the bullet and lay out the ways BBT overlaps and (I like to think!) outruns Daniel Dennett’s rather notorious and oft-misunderstood position on consciousness. For many, if not most, this will amount to using obscurity to clarify murk, but then you have to start somewhere.
First, we need to get one fact straight: consciousness possesses informatic boundaries. This is a fact Dennett ultimately accepts, no matter how his metaphors dance around it. Both of his theoretical figures, ‘multiple drafts’ and ‘fame in the brain,’ imply boundaries, a transition of processes from unconsciousness to consciousness. Some among a myriad of anonymous processes find neural celebrity, or as he puts it in “Escape from the Cartesian Theater,” “make the cut into the elite circle of conscious events.” Many subpersonal drafts become one. What Dennett wants to resist is the notion that this transition is localized, that it’s brought together for the benefit of some ‘neural observer’ in the brain–what he calls the ‘Cartesian Theatre.’ One of the reasons so many readers have trouble making sense of his view has to do, I think, with the way he fails to recognize the granularity of this critical metaphor, and so over-interprets its significance. In Consciousness Explained, for instance, he continually asserts there is no ‘finishing line in the brain,’ no point where consciousness comes together–“no turnstile,” as he puts it. Consciousness is not, he explicitly insists in his notorious piece (with Marcel Kinsbourne) “Time and the Observer” in Behavioral and Brain Sciences, a subsystem. And yet, at the same time, you’ll find him deferring to Baars’ Global Workspace theory of consciousness, even though it was inspired by Jerry Fodor’s notion of some ‘horizontal’ integrative mechanism in the brain, an account that Dennett has roundly criticized as ‘Cartesian’ elsewhere.
The evidence that consciousness is localized (even if widely distributed) throughout the brain is piling up, which is a happy fact, since according to BBT consciousness can only be explained in subsystematic terms. Consciousness possesses dynamic informatic boundaries, both globally and internally, all of which are characterized, from the standpoint of consciousness, by various kinds of neglect.
In cognitive psychology and neurology, ‘neglect’ refers to an inability to detect or attend to some kind of deficit. Hemi-neglect, which is regularly mentioned in consciousness discussions, refers to the lateralized losses of awareness commonly suffered by stroke victims, who will sometimes go so far as to deny ownership of their own limbs. Cognitive psychology also uses the term to refer to our blindness to various kinds of information in various problem-solving contexts. So ‘scope neglect,’ for instance, involves our curious inability to ‘value’ problems according to their size. My view is that the neglect revealed in various cognitive biases and neuropathologies actually structures ‘apparent consciousness’ as a whole. I think this particular theoretical cornerstone counts as one of Dennett’s ‘lost insights.’ Although he periodically raises the issue of neglect and anosognosia, his disavowal of ‘finishing lines’ makes it impossible for him to systematically pursue their relation to consciousness. He overgeneralizes his allergy to metaphors of boundary and place.
So, to give a quick example, where BBT views Frank Jackson’s Mary argument as a kind of ‘neglect detector,’ a thought experiment that reveals the scope of applicability of the ‘epistemic heuristic’ (EH), Dennett thinks it constitutes a genuine first-order challenge, a circle that must be squared. BBT is more interested in diagnosing than disputing the intuition that physical knowledge could be complete in the absence of any experience of red. Why does an obvious informatic addition to our environmental relationship (the experience of red) not strike us as an obvious epistemic addition? Well, because our ‘epistemic heuristic,’ even in its philosophically ‘refined’ forms, is still a heuristic, and as such, not universally applicable. Qualia simply lie outside the EH scope of applicability on my view.
I take Dennett’s infamous ‘verificationism’ as an example of a ‘near miss’ on his part. What he wants to show is that the cognitive relationship to qualia is informatically fixed–or ‘brainbound’–in a way that the cognitive relationship to environments is not: With redness, you have no informatic recourse the way you do with an apple–what you see is what you get, period. On my view, this is exactly what we should expect, given the evolutionary premium on environmental cognition: qualia are best understood as ‘phenomemes,’ subexistential combinatorial elements that enable environmental cognition similar to the way phonemes are subsemantic combinatorial elements that enable linguistic meaning (I’ll get to the strange metaphysical implications of this shortly). Granting that qualia are ‘cognition constitutive,’ we should expect severe informatic access constraints when attempting to cognize them. On the BBT account, asking what qualia ‘are’ is simply an informatic confusion on par with asking what the letter ‘p’ means. The primary difference is that we have a much better grasp of the limits of linguistic heuristics (LH) than we do EH. EH, thanks to neglect, strikes us as universal, as possessing an unlimited scope of applicability. Thus the value of Mary-type thought experiments.
Lacking the theoretical resources of BBT, Dennett can only form a granular notion of this problem. In one of his most famous essays, “Quining Qualia,” he takes the ‘informatic access’ problem, and argues that ‘qualia’ are conceptually incoherent because we lack the informatic resources to distinguish changes in them (it could be our memory that has been transformed), and empirically irrelevant because those changes would seem to make no difference one way or another. Where he uses the ‘informatic access problem’ as an argumentative tool to make the concept of qualia ‘look bad,’ I take the informatic access problem to be an investigative clue. What Dennett shows via his ‘intuition pumps,’ I think, are simply the limits of applicability of EH.
But this difference does broach the most substantial area of overlap between my position and Dennett’s. In a sense, what I’m calling EH could be characterized as an ‘epistemological stance,’ akin to the variety of stances proposed by Dennett.
BBT takes two interrelated angles on ‘brain blindness’ or neglect. The one has to do with how the appearance of consciousness–what we think we are enjoying this very moment–is conditioned by informatic constraints or ‘blindnesses.’ The other has to do with the plural, heuristic nature of human cognition, how our various problem-solving capacities are matched to various problems (the way cognition is ‘ecological’), and how they leverage efficiencies via strategic forms of informatic neglect. What I’m calling EH, for instance, seems to be both informatically sufficient and universally applicable, thanks to neglect–the same neglect that rendered it invisible altogether to our ancestors. In fact, however, it elides enormous amounts of relevant information, including the brain functions that make it possible. So, remaining faithful to the intuitions provided by EH, we conceive knowledge in terms of relations between knowers and things known, and philosophy sets to work trying to find ways to fit ever greater accumulations of scientific information into this ‘intuitive picture’–to no avail. How do mere causal relations conspire to create epistemological relations, which is to say, normative aboutness relations? On my view, these relations are signature examples of informatic neglect: ‘aboutness’ is a shortcut, a way to relate devices in the absence of any causal information. ‘Normativity’ is also a shortcut, a way to model mechanism in the absence of any mechanistic information. (Likewise, ‘object’ is a shortcut, and even ‘existence’ is a shortcut–coarse-grained tools that get certain work done). Is it simply a coincidence that syntax can be construed as mechanism bled of everything save the barest information? Even worse, BBT suggests it could be the case that both aboutness and normativity are little more than reflective artifacts, merely deliberative cartoons of what we think we are doing given our meagre second-order informatic access to our brain’s activity.
In one of his most lucid positional essays, “Real Patterns,” Dennett argues for the ‘realism’ of his stance approach vis-à-vis thinkers like Churchland, Davidson, and Rorty. In particular, he wants to explain how his ‘intentional stance’ and the corresponding denial of ‘original intentionality’ does not reduce intentionality to the status of a ‘useful fiction.’ Referencing Churchland’s observations regarding the astronomical amount of compression involved in the linguistic coding of neural states (in “Eliminative Materialism and the Propositional Attitudes”), he makes the point that I’ve made here very many times: the informatic asymmetry between what the brain is doing and what we think we’re doing is nothing short of abyssal. When we attribute desires and beliefs and goals and so on to another brain, our cognitive heuristics are, Dennett wants to insist, trading in very real patterns, only compressed to a drastic degree. It’s the reality of those patterns that renders the ‘intentional stance’ so useful. It’s the degree of compression that renders them incompatible with the patterns belonging to the ‘physical stance’–and thus, scientifically intractable.
The only real problem BBT has with this analysis is its granularity, a lack of resolution that leads Dennett to draw several erroneous conclusions. The problem, in a nutshell, is that far more than ‘compression’ is going on, as Dennett subsequently admits when discussing his differences with Davidson (the fact that two interpretative schemes can capture the same real pattern, and yet be incompatible with each other). Intentional idioms are heuristics in the full sense of the term: their effectiveness turns on informatic neglect as much as the algorithmic compression of informatic redundancies. To this extent, the famous ‘pixelated elephant’ Dennett provides to illustrate his argument is actually quite deceiving. The idea is to show the way two different schemes of dots can capture the same pattern–an elephant. What makes this example so deceptive is the simplistic account of informatic access it presupposes. It lends itself to the impression that ‘informatic depletion’ alone characterizes the relation between intentional idioms and the ‘real patterns’ they supposedly track. It entirely ignores the structural specifics of the informatic access at issue (the variety of bottlenecks posited by BBT), the fact that our Intentional Heuristic (IH), very much like EH, elides whole classes of information, such as the bottom-up causal provenance belonging to the patterns tracked. IH, in other words, suffers from informatic distortion and truncation as much as depletion.
His illustration would have been far more accurate if one of the pixelated figures showed only the elephant’s trunk. When our attentional systems turn to our ‘intentional intuitions’ (when we reflect on intentionality), deliberative cognition only has access to the stored trace of globally broadcast (or integrated) information. Information regarding the neurofunctional context of that information is nowhere to be found. So in a profound sense, IH can only access/track acausal fragments of Dennett’s ‘real patterns.’ Because these fragments are systematically linked to what it is our brains are actually doing, IH will seem to be every bit as effective as our brains at predicting, manipulating, and understanding the behavioural outputs of other brains. Because of neglect (the absence of information flagging the insufficiency of available information), IH will seem complete, unbounded, which is likely why our ancestors used it to theorize the whole of creation. IH constitutively confuses the trunk for the whole elephant.
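The difference between mere depletion and truncation can be made concrete with a toy sketch. Everything below–the grid, the function names, the sampling step–is a purely illustrative assumption of mine, not anything found in Dennett’s essay:

```python
# Toy 'real pattern': a crude elephant-shaped bit grid (illustrative only).
pattern = [
    "0011110000",
    "0111111000",
    "0111111100",
    "0011001100",
    "0011000100",
]

def deplete(rows, step=2):
    """Lossy compression: sample every `step`-th cell.
    The global shape survives, just at lower resolution."""
    return [row[::step] for row in rows[::step]]

def truncate(rows, width=3):
    """Neglect: keep only the leftmost columns (the 'trunk').
    Whole regions of the pattern are simply absent, with nothing
    in the output flagging that anything is missing."""
    return [row[:width] for row in rows]

low_res = deplete(pattern)
trunk = truncate(pattern)

# The depleted grid still spans the full pattern, coarsely;
# the truncated grid covers only a fragment, at full local detail.
print(len(low_res), len(low_res[0]))  # 3 5
print(len(trunk), len(trunk[0]))      # 5 3
```

Both operations discard information, but only the first preserves the outline of the whole; the second yields a fragment that, absent any flag marking the missing remainder, can be mistaken for the whole.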
In other words, Dennett fails to grasp several crucial specifics of his own account. This oversight (and to be clear, there are always oversights, always important details overlooked, even in my own theoretical comic strips) marks a clear parting of the ways between his position and my own. It’s the way developmental and structural constraints consistently distort and truncate the information available to IH that explains the consistent pattern of conceptual incompatibilities between the causal and intentional domains. And as I discuss below, it’s a primary reason why I, unlike Dennett, remain unwilling to take theoretical refuge in pragmatism. No matter what the ‘reality’ of intentionality, BBT shows that the informatic asymmetry between it and the ‘real patterns’ it tracks is severe enough to warrant suspending commitment to any theoretical extrapolation, even one as pseudo-deflationary as pragmatism, based upon it.
This oversight is also a big reason why I so often get that narcissistic ‘near miss’ feeling whenever I read Dennett–why he seems trapped using metaphors that can only capture the surface features of BBT. Consider the ‘skyhook’ and ‘crane’ concepts that he introduces in Darwin’s Dangerous Idea to explain the difference between free-floating, top-down religious and naturally grounded, bottom-up evolutionary approaches to explanation. On my reading, he might as well have used ‘trunk’ and ‘elephant’!
Moreover, because he overlooks the role played by neglect, he has no real way of explaining our conscious experience of cognition, the rather peculiar fact that we are utterly blind to the way our brains swap between heuristic cognitive modes. Instead, Dennett relies on the pragmatics of ‘perspective talk’–the commonsense way in which we say things like ‘in my view,’ ‘from his perspective,’ ‘from the standpoint of,’ and so on–to anchor our intuitions regarding the various ‘stances’ he discusses. Thus all the vague and (perhaps borderline) question-begging talk of ‘stances.’
BBT replaces this idiom with that of heuristics, thus avoiding the pitfalls of intentionality while availing itself of what we are learning about the practical advantages of specialized (which is to say, problem specific) cognitive systems, how ignoring information not only generates metabolic efficiencies, but computational ones as well. The reason for our ‘peculiar blindness’–the reason Dennett has had to go to such great lengths to make ‘Cartesian intuitions’ visible–is actually internal to the very notion of heuristics, which, in a curious sense, use blindness to leverage what they can see. From the BBT standpoint, Dennett consistently fails to recognize the role informatic neglect plays in all these phenomena. He understands the fractured, heuristic nature of cognition. He is acutely aware of the informatic limitations pertaining to thought on a variety of issues. But the pervasive, positive, structural role these limitations play in the appearance of consciousness largely eludes him. As a result, he can only argue that our traditional intuitions of consciousness are faulty. Because he has no principled means of explaining away ‘error consciousness,’ all he can do is plague it with problems and offer his own, alternative account. As a result, he finds himself arguing against intuitions he can only blame and never quite explain. BBT changes all of that. Given its resources, it can pinpoint the epistemic or intentional heuristics, enumerate all the information missing, then simply ask, ‘How should we determine the appropriate scope of applicability?’
The answer, simply enough, is ‘Where EH works!’ Or alternately, ‘Where IH works!’ BBT allows us, in other words, to view our philosophical perplexities as investigative clues, as signs of where we have run afoul of informatic availability and/or cognitive applicability–where our ‘algorithms’ begin balking at the patterns provided. On my view, the myriad forms of neglect that characterize human cognition (and consciousness) can be glimpsed in the shadows they have cast across the whole history of philosophy.
But care must be taken to distinguish the pragmatism suggested by ‘where x works’ above from the philosophical pragmatism Dennett advocates. As I mentioned above, he accepts that intentional idiom is coarse-grained, but given its effectiveness, and given the mandatory nature of the manifest image, he thinks it’s in our ‘interests’ to simply redefine our folk-psychological understanding using science to lard in the missing information. So with regard to the will, he recommends (in Freedom Evolves) that we trade our incoherent traditional understanding in for a revised, scientifically informed understanding of free will as ‘behavioural versatility.’ Since, for Dennett, this is all ‘free will’ has ever been, redefinition along these lines is eminently reasonable. I remember once quipping in a graduate seminar that what Dennett was saying amounted to telling you, at your Grandma Mildred’s funeral, “Don’t worry. Just rename your dog Mildred.” After the laughter faded, one of the other students, I forget who, was quick to reply, “That only sounds bad if your dog wasn’t your Grandma Mildred all along.”
I’ve since come to think this exchange does a good job of illustrating the stakes of this particular turn of the debate.
You can raise the most obvious complaint against Dennett: that the inferential dimension of his redefinition makes usage of the concept ‘freedom’ tendentious. We would be doing nothing more than gaming all the ambiguities we can to interpret scientific ‘crane information’ into our preexisting folk-psychological conceptual scaffold–wilfully apologizing, assuming these scientific ‘cranes’ can be jammed into a ‘skyhook’ inferential infrastructure. Dennett himself admits that, given the information available to experience, ‘behavioural versatility’ is not what free will seems to be. Or put differently, that the feeling of willing is an illusion.
The ‘feeling of willing,’ according to BBT, turns on a structural artifact of informatic neglect. We are skyhooks–from the informatic perspective of ourselves. The manifest image is magical. Intentionality is magical. On my view, the ‘scientific explanations’ are far more likely to resemble ‘explanations away’ than ‘explanations of.’ The question really is one of how other folk-psychological staples will fare as cognitive neuroscience proceeds. Will they be more radically incompatible or less? Imagine experience and the skein of intuitive judgments that seem to bind it as a kind of lateral plane passing through an orthogonal, or ‘medial,’ neurofunctional space. Before science and philosophy, that lateral plane was continuous and flat, or maximally intuitive. It was just the way things were. With the accumulation of information through the raising of philosophical questions (which provide information regarding the insufficiency of the information available to conscious experience) through history, the intuitive topography of the plane became progressively more and more dimpled and knotted. With the institutionalization of science, the first real rips appear. And now, as more information regarding various neurofunctions becomes available, the skewing and shredding are becoming more and more severe. The question is, what will the final ‘plane of experiential intuition’ look like? How will our native intuitions fare?
How deceptive is consciousness?
Dennett’s answer: Enough to warrant considerable skepticism, but not enough to warrant abandoning existing folk-psychological concepts. The glass, in other words, is half full. My answer: Enough to warrant wondering if anyone has ever had a clue. The glass lies in pieces across the floor. The trend, at least, is foreboding. According to BBT, the informatic neglect that renders the ‘feeling of willing’ possible is a structural feature belonging to all intentional concepts. Given this, it predicts that very many folk-psychological concepts will suffer the fate the ‘feeling of willing’ seems to be undergoing as I write. From the standpoint of knowledge, experience is about to be cast into the neurofunctional wind.
Grandma Mildred isn’t your dog. She’s a ghost.
Either way, this is why I think pragmatic or inferentialist accounts are every bit as hopeless as traditional approaches. You can say, ‘There’s nothing but patterns, so let’s run with them!’ and I’ll say, ‘Where? To the playground? Back to Hegel?’ When knowledge and experience break in two, the philosopher, to be a philosopher, must break with it. The world never wants for apologists.
BBT allows us to frame the problem with a clarity that evaded Dennett. If our difficulties turn on the limited applicability of our heuristics, the question really should be one of finding the heuristic that possesses the most applicability. In my view, that heuristic is the one that allows us to comprehend heuristics in the first place: nonsemantic information. The problem with pragmatism as a heuristic lies in the way it actively, as opposed to structurally (which it also does), utilizes informatic neglect. Anything can be taken as anything, if you game the ambiguities right. You could say it makes a virtue out of stupidity.
In place of philosophical pragmatism, my view recommends a kind of philosophical akratism, a recognition of the heuristic structure of human cognition, an understanding of the structural role of informatic neglect, and a realization that conscious experience and cognition are drastically, perhaps catastrophically, distorted as a result.
Deliberative human cognition has only the information globally broadcast (or integrated) at its disposal. Likewise, the information globally broadcast has only human cognition at its disposal. The first means that human cognition has no access whatsoever to vast amounts of constitutive processing–which is to say, no access to neurofunctional contexts. The second means that we likely cognize conscious experience as experience via heuristics matched to our natural and social environments, as something quite other than whatever it is.
Small wonder consciousness has proven to be such a knot!
And this, for me, is where the fireworks lie: critics of Dennett often complain about the difficulty of getting a coherent sense of what his theory of consciousness is, as opposed to what it is not. For better or worse, BBT paints a very distinct–if almost preposterously radical–picture of consciousness.
So what does that picture look like?
It purports, for instance, to explain how the apparent reflexivity of consciousness can arise from the irreflexivity of natural processes. For me, this constitutes the most troubling, and at the same time, most breathtaking, theoretical dividend of BBT: the parsimonious way it explains away conscious reflexivity. Dennett (working with Marcel Kinsbourne) sails across the insight’s wake in “Time and the Observer” where he argues, among other things, for the thoroughgoing dissociation of the experience of time from the time of experience, how the time constraints imposed by the actual physical distribution of consciousness in the brain means that we should expect our conscious experience of time to ‘break down’ in psychophysical experimental contexts at or below certain thresholds of temporal resolution.
The centerpiece of his argument is the deeply puzzling experimental variant of the well-known ‘phi phenomenon,’ how two closely separated spots projected in rapid sequence on a screen will seem to be a single spot moving from location to location. When experimenters use a different colour for each of the spots, not only do subjects report seeing the spot move, they claim to see it change colour, and–here’s the thing–midway. What makes this so strange is the fact that they perceive the colour change before the second spot appears–before ‘seeing’ what the second colour is. Ruling out precognition, Dennett proposes two mechanisms to account for the illusion: either the subjects consciously see the spots as they are only to have the memory almost instantaneously revised for consistency, what he calls the ‘Orwellian’ explanation, or the subjects consciously see the product of some preconscious imposition of consistency, what he calls the ‘Stalinesque’ explanation. Given his quixotic allergy to neural boundaries, he argues that our inability to answer this question means there is no definite where and when of consciousness in the brain, at least at these levels of resolution.
Dennett’s insight here is absolutely pivotal: the brain ‘constructs,’ as opposed to perceives or measures, the passage of time, given the resources it has available. The time of temporal representation is not the time represented. But he misconstrues the insight, seeing in it a means to cement his critique of the Cartesian Theatre. The question of whether this process is Orwellian or Stalinist, whether neural history is rewritten or staged, simply underscores the informatic constraints on our experience of time, our utter blindness to neurofunctional context of the experience–which is to say, our utter blindness to the time of conscious experience. Dennett, in other words, is himself making a boundary argument, only this time from the inside out: the inability to arbitrate between the Orwellian and Stalinist scenarios clearly demarcates the information horizon of temporal experience.
And this is where the theoretical resources of BBT come into play. Wherever it encounters apparent informatic constraints, it asks how they find themselves expressed in experience. Saying that temporal experience possesses informatic boundaries is platitudinal. All modalities of experience are finite: we can only see, hear, taste, think, and time so much in a given moment. Saying that the informatic boundaries of experience are themselves expressed in experience is somewhat more tricky, but you need only attend to your own visual margins to see a dramatic example of such an expression.
You could say vision is an exceptional example, given the volume of information it provides in comparison to other experiential modalities. Nevertheless, one could argue that such boundaries must find some kind of experiential expression, even if, as in the cases of clinical neglect, it evades deliberative cognition. BBT proposes that neglect is complete in many, if not most cases, and information regarding informatic boundaries is only indirectly available, typically via contexts (such as psychological experimentation) that foreground discrepancies between brute environmental availability and actual access. The phi phenomenon provides a vivid demonstration of this–as do, for that matter, psychophysical phenomena such as flicker-fusion. For some mysterious reason (perhaps the mysterious reason), what cannot be discriminated, such as the flashing of lights below a certain temporal threshold, is consciously experienced as unitary. It seems a fact of experience almost too trivial to note, but perhaps immensely important: Why, in the absence of information, is identity the default?
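The ‘identity as default’ point admits a toy rendering. The threshold value and the function name below are my hypothetical illustrations, not empirical claims about actual fusion thresholds:

```python
# Illustrative sketch: flashes closer together than the observer's
# temporal resolution get reported as a single event.
FUSION_THRESHOLD_MS = 20.0  # hypothetical value, for illustration only

def reported_events(flash_times_ms):
    """Collapse flashes closer than the threshold into one reported
    event: absent discriminating information, identity is the default."""
    if not flash_times_ms:
        return []
    events = [flash_times_ms[0]]
    for t in flash_times_ms[1:]:
        if t - events[-1] >= FUSION_THRESHOLD_MS:
            events.append(t)
        # else: fused -- no information distinguishes this flash
        # from the last reported event, so no new event is posited
    return events

print(len(reported_events([0.0, 10.0, 15.0])))   # 1 (all fused)
print(len(reported_events([0.0, 50.0, 100.0])))  # 3 (discriminable)
```

The point of the sketch is the shape of the default: nothing in the fused report marks the missing flashes as missing, which is precisely the character of neglect at issue.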
If you think about it, a good number of the problems of consciousness can be formulated in terms of identity and information. BBT takes precisely this explanatory angle, interpreting things like the unity of consciousness, personal identity, and nowness or subjective time as products of various species of neglect–literally as kinds of ‘fusions.’
The issue of time as it is consciously experienced contains a cognitive impasse at least as old as Aristotle: the problem of the now. The problem, as Aristotle conceived it, lay in what might be called the persistence of identity in difference that seems to characterize the now, how the now somehow remains the same across the succession of now moments. As we have seen, whenever BBT encounters an apparent cognitive impasse, it asks what role informatic constraints play. The constraints, as identified by Dennett and Kinsbourne in their analyses in “Time and the Observer,” turn on the dissociation of the time of representation from the time represented. In a very profound sense, our conscious experience of time is utterly blind to the time of conscious experience, which is to say, information pertaining to the timing of conscious timing.
So what does this, the conscious neglect of the time of conscious timing, mean? The same thing all instances of informatic neglect mean: fusion. The fusing of flickering lights when their frequency exceeds a certain informatic threshold seems innocuous likely because the phenomenon is so isolated within experience. The kind of temporal fusion at issue here, however, is coextensive with experience: as many commentators have noted, the so-called ‘window of presence’ is just experience in a profound sense. The now always seems to be the same now because the information regarding the time of conscious timing, the information required to globally distinguish moment from moment, is simply not available. In a very profound sense, ‘flicker fusion’ is a local, experientially isolated version of what we are.
Thus BBT offers a resolution of the now paradox and an explanation of personal identity in a single conceptual stroke, as it were. It provides, in other words, a way of explaining how natural and irreflexive processes give rise to the apparent reflexivity that so distinguishes consciousness. And by doing so it drastically reduces the explanatory burden of consciousness, leaving only ‘default identity’ or ‘fusion’ as the mystery to be explained. Given this, it provides a principled means of ‘explaining away’ consciousness as we seem to experience it. Using informatic neglect as our conceptual spade, one need only excavate the kinds of information the conscious brain cannot access from our scientific understanding of the brain to unearth something that resembles–to a remarkable degree–the first-person perspective. Consciousness, as we (think we) experience it, is fundamentally structured by various patterns of informatic neglect.
And it does so using an austere set of concepts and relatively uncontroversial assumptions. Conscious episodes are informatically encapsulated. Deliberative cognition is plural and heuristic (though neglect means it appears otherwise). Combining the informatic neglect pertaining to the first–which Dennett has mistakenly eschewed–with the problems of ‘matching’ pertaining to the second, produces what I think could very well be the single most parsimonious and comprehensive theory of ‘consciousness’ in the field.
But I anticipate it will be a hard sell, with the philosophy of mind crowd most of all. Among the many invisible heuristics that enable and plague us are those primed to dismiss outgroup deviations from ingroup norms–and I am, sadly, merely a tourist in these conceptual climes. Then there’s the brute fact of Hebb’s Law: the intuitions underwriting BBT demand more than a little neural plasticity, especially given the degree to which they defect from any number of implicit and canonically explicit assumptions. I’m asking huge populations of old neurons to fire in unprecedented ways–never a good thing, especially when you happen to be an outgroup amateur!
And then there’s the problem of informatic neglect itself, especially with reference to what I earlier called the epistemic heuristic. I often find myself flabbergasted by how far out of step I’ve fallen with consensus opinion since the key insight behind BBT nixed my dissertation over a decade ago. Even the notion of content has come to seem alien to me–a preposterous artifact of philosophers blindly applying EH beyond its scope of application. On the BBT account, the most effective way to understand meaning is as an artifact of structured informatic neglect. In a real sense, it holds there is no such thing as meaning, so the wide-ranging debates on content and representation that form the assumptive baseline for so many debates you find in the philosophy of mind are little more than chimerical from its standpoint. Put simply, ‘truth’ and ‘reference’ (even ‘existence’!) are best understood as kinds of heuristics, cognitive adaptations that maximize effectiveness via forms of informatic neglect, and so possess limited scopes of applicability.
Even the classical metaphysical questions regarding materialism are best considered heuristic chimera on my view. Information, nonsemantically construed, allows the theorist to do an end run around all these dilemmas, as well as all the dichotomies and dualisms that fall out of them.
We are informatic subsystems attempting to extend our explanatory ‘algorithms’ as far into subordinate, parallel, and superordinate systems as we can, either by accumulating more information or by varying our algorithmic (cognitive) relation to the information already possessed. Whatever problem our system takes on, resolution depends upon this relation between information accumulation and algorithmic versatility. So as we saw with ‘qualia,’ our system is stranded: we cannot penetrate and interact with red the way we can with apples, and so the prospects of information accumulation are dim. Likewise, our algorithms are heuristic, possessing a neglect structure appropriate to environmental problem-solving (given various developmental and structural constraints), which is to say, a scope of applicability that simply does not (as one might expect) include qualia.
The ‘problem of consciousness,’ on the BBT account, is simply an artifact of literally being what science takes us to be: an informatic subsystem. What has been bewildering us all along is our blindness to our blindness, our inability to explicitly consider the prevalent and decisive role that informatic neglect plays in our understanding of human cognition. The problem of consciousness, in other words, is nothing less than a decisive demonstration of the heuristic nature of semantic/epistemic cognition–a fact that really, in the end, should come as no surprise. Why, when human and animal cognition is so obviously heuristic in so many ways, would we assume that a patron as stingy as evolution would flatter us with a universal problem-solving device, if not for simple blindness to the limitations of our brains?
The scientific problem of consciousness remains, of course. Default identity remains to be explained. But given BBT, the philosophical conundrums have for the most part been explained away…
As have we.