To Be and Not To Be: Some Radical Reflections on the Origins and Extent of the ‘Hard Problem’

Summary: Beginning with a novel consideration of Jackson’s Knowledge Argument, I propose the following thesis: that the explanatory gap is due to the inability of our brain to reconcile its environmentally mediated ‘interpretation’ of other brains as brains with its prior ‘self-interpretation’ as something other than a brain. Our brains, I suggest, simply cannot, in the first instance, recognize themselves for what they are. I then offer the story of ‘The Invisible Brain,’ a brief evolutionary and neuro-structural account of what I take to be three decisive features of the brain’s subreptive self-interpretation: instrumentality, normativity, and intentionality. Through this account, I propose the ‘Bottleneck Thesis,’ the claim that since all explanation must pass through the bottleneck of our brain’s intentional self-interpretation, there is absolutely no way of overcoming the hard problem. The resulting dilemma, I suggest, possesses implications that extend far beyond the philosophy of mind. Indeed, the Bottleneck Thesis individuates and explains several fundamental problems of philosophy.


The ‘hard problem’ of consciousness is one of explaining how and why neurophysiology generates experience. Methodological naturalism seems to have no difficulty explaining various functional aspects of consciousness, but when it comes to the question of conscious experience, it apparently founders. When something seems to congenitally ‘slip through the fingers’ of a proven explanatory paradigm, there are at least two ways to approach the problem: one can assume the problem stems from how things are with the world, or one can assume the problem stems from how things are with us. In the case of consciousness, David Chalmers is an exemplary proponent of the first ‘dogmatic’ approach: the reason phenomenal experience escapes naturalistic explanation is that consciousness constitutes something ‘special’ in the order of how things are. Colin McGinn, on the other hand, is an exemplary proponent of the second ‘critical’ approach: the reason phenomenal experience escapes naturalistic explanation is that we simply do not possess the requisite hardware.

Although I am also a proponent of the critical approach, the account I will propose differs from McGinn’s in several drastic respects. Working from Chomsky’s thesis that our theoretical cognitive abilities stem from functional adaptations of our neurophysiologically entrenched linguistic abilities, McGinn suggests the reason for our inability to naturalistically explain phenomenal experience is that the ‘psychophysical link’ escapes the combinatorial paradigm of our linguistic abilities.[1] For McGinn, then, the problem does not arise because phenomenal experience is something special, but because the link between phenomenal experience and its associated neurophysiology is something special, not insofar as it constitutes anything ‘supra-natural,’ but insofar as it transcends the basic neurophysiological template of human cognition. It simply cannot be ‘computed’ as something natural.

I think the problem is far more radical than this–far more radical, in fact, than anyone has hitherto suspected. I agree with McGinn that we are hardwired in such a way that we are constitutively unable to understand ourselves as hardware, but the problem has nothing to do with processing power and elusive links. The problem resides, rather, in the two very different ways in which our brain can be related to its own neurophysiology. The hard problem, I will argue, stems from the incommensurability of the way the brain exo-neurophysiologically interprets itself via environmental inputs, which is to say, as a brain, and the way the brain interprets itself endo-neurophysiologically, which is to say, as the subject of experience. Methodological naturalism cannot explain phenomenal consciousness, in other words, because in the first instance our brain cannot understand itself as a brain. This generates what might be called a ‘cognitive bottleneck’: a point at which all our attempts to resolve the hard problem result in aporia.

The Knowledge Argument

Perhaps no paper has done more to blunt methodological naturalism’s claim to monopolize phenomenal experience than Thomas Nagel’s “What is it like to be a bat?”[2] But it is Frank Jackson’s ‘Knowledge Argument’ that gives Nagel’s general point in this paper what many consider its most provocative form.[3] Given the simplicity and sheer intuitive force of Jackson’s formulation of the problem, I will use it to 1) defend Nagel’s basic argument, that facts pertaining to ‘what it’s like’ to occupy a certain standpoint cannot be understood from the standpoint of methodological naturalism; and 2) lay the groundwork for an account of why this is so. Jackson’s thought experiment not only poses the problem, it also suggests an explanation of the problem’s origin.

In what is by now a classic thought experiment, Jackson postulates a cloistered neuroscientist–let’s call her Eve–who, after learning every possible physical fact regarding red in a completely colourless environment, steps outside and for the very first time sees something red–an apple, say. The question is one of whether Eve learns a new fact about red, namely, what it is like. If Eve does learn a new fact, then it would seem some facts are not physical, and that physicalism is false.

The argument might be expressed as follows:

A1) Eve knows every physical fact there is to know about red.

A2) Eve does not know every fact there is to know about red.

/A3) Certain facts are not physical.

/A4) Physicalism is false.

The primary problem with this expression of the argument is that it attempts to tease metaphysical consequences out of a thought experiment that at most possesses methodological repercussions. It could be the case, for instance, that Eve’s subsequent experiential knowledge of what red is like is simply a different species of physical knowledge, one that cannot be expressed in physical statements.[4] Given this, the argument is perhaps better posed as:

B1) Eve knows every fact about red accessible to methodological naturalism.

B2) Eve does not know every fact about red.

/B3) Certain facts are not accessible to methodological naturalism.

Expressed in this way, the argument provides an intuitively forceful presentation of the hard problem, of methodological naturalism’s apparent inability to account for phenomenal experience. Typically, opponents of the hard problem will argue against, and proponents will argue for, the factuality of what Eve learns.

Let’s return to the thought experiment and assume for the moment that all phenomenal states are somehow identical to neurophysiological states. Given this, one can say that between Eve’s two instances of apparent knowledge, she possesses two different neurophysiologies: the neurophysiology of knowing the neurophysiology of red and the neurophysiology of knowing what red is like. Obviously these two neurophysiologies are physically different. What this means is that her brain, in the course of experiencing red, does not process itself the way it can, with a certain amount of training, process other brains as environmental inputs. This is why the experience of red tells us almost nothing about the neurophysiology of red. Apparently the brain possesses a way of immediately ‘knowing’ its own neurophysiological processes that differs from and precedes the possibility of scientifically knowing its own neurophysiological processes.

One signature characteristic of this difference is that the brain is initially blind to the neurophysiological provenance of red. In seeing a red apple, Eve does not see ‘her neurophysiology at work,’ nor even red in the sense of qualia, but rather a red apple. What Eve knows before seeing the red apple quite literally ‘drops out’ of what she knows while seeing a red apple. Knowing what red is like means knowing what a certain property (redness) of a certain thing in the world (an apple) ‘is like.’ By the same token, ‘knowing the neurophysiology of red’ does not entail knowing anything about the neurophysiology of knowing the neurophysiology of red. This is why we can discursively know something of neurophysiology while remaining utterly baffled by the neurophysiology of discursive cognition.

If one is a reductive materialist, the question of whether or not Eve learns a new fact becomes the question of whether the latter instance of Eve’s neurophysiological self-processing constitutes ‘knowledge of facts’ independent of those constituted by its neurophysiological processing of environmentally inputted neurophysiology. Answering the fact question, in other words, demands knowledge of the neurophysiology of knowledge. Conceptual banter about the cognitive status of Eve’s experience of red may enable us to lay odds, but for the reductive materialist the answer will ultimately depend on the findings of a more mature cognitive science.

The problem here, however, is that it seems impossible for any cognitive science to ever answer this question. Say we successfully isolate the two neurophysiological networks at issue, subsystem A, pertaining to the neurophysiology of knowing the neurophysiology of red, and subsystem B, pertaining to the neurophysiology of knowing what red is like. Say we can describe and fully understand the functions of each subsystem within the brain and its immediate environment. So the question becomes: In virtue of what does the function of A constitute factual knowledge while the function of B does not?

The problem is that our answer to this question will always depend on a prior non-natural interpretation of knowledge. The question is not so much one of what neurophysiologically ‘constitutes’ knowledge as one of what counts as ‘neurophysiologically constitutive of knowledge.’ If a phenomenal noncognitivist were to confront a phenomenal cognitivist with the relevant neurophysiological data, the latter need only nod his or her head and say, ‘So that’s what factual phenomenal knowledge looks like at the neurophysiological level!’ No matter what salient differences between A and B the noncognitivist might adduce, he or she will have no way, short of their own prior interpretation of knowledge, of arguing that these are disqualifying differences. We need to know what counts as factual knowledge before we can assign a ‘factual knowledge function’ to any neurophysiological subsystem.[5]

If the isolation and functional descriptions of the respective neurophysiological subsystems involved in either instance of purported factual knowledge can in no way settle the question of whether Eve learns a new fact, then anyone who wants to argue against the factuality of Eve’s experiential knowledge of red must marshal non-natural facts, which is to say, a certain philosophical interpretation of what counts as knowledge, to do so. In other words, there is no way for the phenomenal noncognitivist to argue against the phenomenal cognitivist without committing themselves to one variety of ‘spooky facts’ in the name of expunging another. [this is a normative closure argument]

But why is this? If the functions of subsystems A and B could be mapped as doing this or that vis-à-vis other neural subsystems, why would it be so difficult to map them as doing this or that vis-à-vis cognition?

The important thing to note is that in this latter case we are pitting our brain’s interpretation of other brains against its interpretation of itself. We already possess an implicit or explicit understanding of knowledge, which given our ignorance regarding the neurophysiology of knowledge, must be an understanding constituted by our brain’s self-processing, by the way the brain ‘interprets’ its own cognitive activities. Since the neurophysiology of discursive cognition is absent from this interpretation, it differs quite drastically from our brain’s environmentally mediated interpretation of ‘other brain cognition.’

In the case of other brains, our brain maps neurophysiological functions laterally, which is to say, across its environment. ‘That1 does that2 to that3,’ where ‘that3’ finds itself processed in a similar manner to ‘that1&2,’ that is, as another environmental input. In the case of our own brain, however, it attempts to map neurophysiological functions medially, which is to say, from its environment to itself. ‘That1 does that2 which is this1,’ where ‘this1’ cannot be processed in a similar manner to that1&2 because it is not another environmental input.

The potential difficulties with mapping medial neurophysiological functions are at least threefold. First, there is the problem of ‘interpretative interference.’ Our brain’s interpretation of its own cognitive activity not only precedes its interpretation of other brains, it is almost certainly ‘fixed’ in some respect. We could very well possess some neurophysiologically entrenched ‘implicit self-understanding’ of cognition that runs counter to the possibility of neuro-functional explanation. If this understanding is neurophysiologically entrenched, then it will always remain the way we understand cognition ‘in the first instance.’ If it turns out to be fundamentally incommensurable with our environmentally mediated understanding of neurophysiology, then no amount of neuroscientific knowledge would allow us to ‘bridge the medial gap.’ [thus the emphasis on fixing the various explananda of cognition]

Second, there is the problem of ‘interpretative fuzziness.’ Despite this entrenchment, it is also certain that our brain’s capacity to interpret its own cognitive activity is fundamentally hobbled compared to its ability to interpret environmental inputs. As a result, there are many different ways in which our implicit self-understanding of cognition can be made explicit. This means we possess no stable description of cognition as a medial explanandum of neurophysiology. In the absence of a stable explanandum, there is no definitive way to map medial functions. In other words, without some fact of the matter regarding our brain’s self-interpretation of cognition, some definitive way in which the brain interprets itself in the first instance, there is no fact of the matter regarding the medial functions of the corresponding neurophysiological subsystems.

And third, there is the problem of ‘interpretative priority.’ If our brain’s interpretation of its own cognitive activity is neurophysiologically entrenched in some fundamental respect, then our only way of understanding the cognitive status of its interpretation of other brain cognition, whatever it looks like, will be through this self-understanding. If this interpretation of other brain cognition turns out to be incommensurable with our brain’s self-interpretation–which is to say, if our brain’s self-interpretation of its cognitive activity turns out to be a neurophysiologically entrenched ‘cognitive illusion’–we will find ourselves in the seemingly impossible situation of necessarily presupposing the very thing that must be explained away in order to explain it away.

The difficulty, essentially, could be that the brain does not, in the first instance, interpret itself as a brain. It quite simply cannot recognize itself for what it is in the first instance. The apparent irreducibility of things like cognition, consciousness, and qualia, one might suppose, stems from the fundamental conflict of interpretations that arises when our brain moves from its achieved environmentally mediated interpretations of neurophysiology to its pre-achieved self-interpretation.

In the case of Eve’s experience of red, for instance, the difficulty with medial neurophysiological explanation is primarily due to interpretative interference. The neurophysiology of red nowhere figures in her brain’s neurophysiologically entrenched self-interpretation of red apples. The phenomenal explanandum of Eve’s prior neurophysiological understanding of red, in other words, is characterized by the apparent lack of neurophysiological mediation. She learns what red ‘is like’ according to her brain’s self-interpretation of its own environmental inputs. When we attempt to give a medial functional account of the neurophysiology constitutive of this self-interpretation, we attempt to add the very element whose absence characterizes this pre-achieved self-interpretation. In other words we attempt to synthesize incommensurable interpretations.

The problem of determining what counts as cognition before determining what neurophysiologically constitutes cognition is an artifact of all three problems. On the one hand we possess a ‘fixed’ implicit understanding of cognition that, as in the case of red, is blind to its neurophysiological provenance, while on the other hand we possess variable explicit understandings of cognition. Any attempt to medially map functions from neurophysiology to cognition, therefore, first confronts the problem of interpretative interference, our brain balking at the imposition of an alien self-understanding, and then the problem of interpretative fuzziness, the question of just what is being explained. Lastly, it confronts the problem of interpretative priority, the fact that medial functional mapping can only be justified via the brain’s pre-achieved self-understanding, such that the discrepancies between interpretations cannot be intelligibly explained away.[6]

While all three of these problems play a role in the circumscription of methodological naturalism, the second, the problem of interpretative fuzziness, does the most to undermine the phenomenal noncognitivist. If there is to be a fact of the matter regarding medial neurophysiological function, then there must be a fact of the matter regarding the brain’s self-interpretation of cognition. We must know what is being explained. Likewise, if there is to be a fact of the matter regarding the medial function of the neurophysiology of red, there must be a fact of the matter of the brain’s self-interpretation of red.

To summarize then: By looking at the cognitive differences between our scientific and phenomenal knowledge of red in terms of differences in neurophysiology (the neurophysiology of knowing the neurophysiology of red as opposed to the neurophysiology of knowing what red is like), the question of phenomenal cognitivism was transformed into the question of whether the latter neurophysiology constituted factual knowledge. This led to the problem of prior interpretation, the inability of cognitive science to answer this question by recourse to natural facts alone. No matter how detailed our knowledge of the neurophysiology of experiencing red might be, we could always attribute the medial function of factual knowledge to it. The reason for this lies in the peculiarity of medial as opposed to lateral functions: the fact that the purpose of medial neurophysiological explanation is to explain the brain’s self-interpretation in terms of its environmentally mediated interpretation of other brains. Once the difficulty of medial neurophysiological explanation is understood, it becomes apparent that facts regarding phenomenal experience are a necessary condition of assigning medial functions to our neurophysiology.

This provides an entirely different way of conceiving the hard problem. The hard problem is no longer one of explaining the apparently miraculous generation of conscious experience by neural tissue, but rather one of harmonizing the brain’s immediate self-interpretations with its environmentally mediated self-interpretations. The hard problem of getting here from there, of getting this from that (which is to say, the problem of medial functions), stems from the brain’s neurophysiologically entrenched inability to interpret itself as a brain, from the absence of neurophysiology in our implicit self-understanding, not from the absence of consciousness in our neurophysiology.

Of course this ‘step sideways’ turns on some big assumptions, not the least of which involves the tacit conflation of phenomenal consciousness with our brain’s ‘self-interpretation.’ Aside from noting the apparent absence of any other way to understand the cognitive import of the different neurophysiologies at issue in Eve’s knowledge of the facts, I will leave this question to those far more qualified. Those problematic assumptions that remain, I hope, will be discharged by what follows.

‘The Invisible Brain’

I would like to tell a story about the possible neurobiological origins of instrumentality, normativity, and intentionality; a story that, I think, offers a plausible account of why we might be hardwired in such a way that we constitutively cannot understand ourselves as hardware.

But first a few comments to help the story get off the ground. Our brains respond to environmental inputs by generating specific behavioural outputs. As it stands, we possess several general ways to understand the specificity of behavioural output, but for the moment I would like to focus on two: what might be called the etiological mode and the instrumental mode. Conceived in etiological terms, the specification of behavioural output might be called ‘bottom-up.’ Outputs are specified by their physical causes. Eve eats the apple because certain neurophysiological processes led to this behavioural output. Understood in instrumental terms, on the other hand, the specification of output might be called ‘top-down.’ Here outputs are specified by their effects, as means to pre-specified ends. Eve eats the apple ‘because’ she is hungry, and apple eating satisfies hunger. The effect of the behaviour is construed as the specifying ‘cause’ of the behaviour.[7]

I call attention to these two modes for the purpose of contrast. By and large it seems to be the case that when we explain otherwise instrumentally understood behaviour in the etiological mode, instrumentality ‘drains away.’ This is not to say these two modes indicate any sort of profound incommensurability–it might be the case that instrumentality and etiology merely indicate disparate ‘levels of description’ that will dovetail quite nicely once the neurophysiology of the former is understood–but merely that they seem to be incompatible as modes of behaviour specification. The nature and extent of this incompatibility, I think, will be made clear in what follows.

Now for the story. If one looks at morphology as the bottom-up output of certain genotypes, one can say evolution provides a bottom-up way to mimic top-down output specification. Since the reproduction of a given genotype depends upon the environmental effectiveness of its morphological expression, bottom-up morphological outputs become an effect of their effects. Over time genes, mutation, morphology, and natural selection produce the semblance of top-down output specification: morphological features that seem caused by their own effects, when in fact they are caused by the past effects of past bottom-up outputs.

Given the success of this ‘evolutionary effect feedback mechanism’ (EEF) in generating effective morphological output, it is perhaps not surprising that our brains might evolve a ‘behavioural effect feedback mechanism’ (BEF) in the generation of effective behavioural output. Brains ‘test’ bottom-up behavioural output against their effects, reproducing those that are effective and culling those that are not. Our brains render, in other words, their own bottom-up behavioural output an effect of their past effects.
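The shared logic of EEF and BEF–bottom-up outputs becoming an effect of their past effects–can be caricatured as a toy selection loop. Everything in the sketch below (the fitness function, the target value, the mutation scheme) is an illustrative assumption of mine, not part of the account itself:

```python
import random

# Toy caricature of effect feedback: outputs are generated 'bottom-up'
# (by random variation), yet reproduction depends on their past effects,
# producing the semblance of top-down specification. All values assumed.

TARGET = 0.8  # the environmentally 'effective' output (assumed)

def effectiveness(output: float) -> float:
    # how well a bottom-up output fares in its environment
    return -abs(output - TARGET)

def effect_feedback(generations: int = 200, pop_size: int = 50) -> float:
    random.seed(0)
    outputs = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # reproduce the effective outputs, cull the rest
        outputs.sort(key=effectiveness, reverse=True)
        survivors = outputs[: pop_size // 2]
        # each survivor is reproduced with slight bottom-up variation
        outputs = [o + random.gauss(0, 0.02) for o in survivors for _ in (0, 1)]
    return sum(outputs) / len(outputs)

mean_output = effect_feedback()
# mean_output ends up near TARGET, as if outputs were specified by their
# effects, though every step was strictly bottom-up.
```

The point of the toy is purely structural: nothing in `effect_feedback` ‘aims at’ TARGET; selection on past effects does all the work, which is just the semblance of top-down specification described above.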

Given the reproductive advantages of BEF, it is also not surprising that EEF would lead to the social coordination of our brains. Social coordination minimizes the production of ineffective behavioural outputs and maximizes the reproduction of effective behavioural outputs by providing more effect feedback and by preventing what might be called ‘effect feedback redundancy.’ Socially coordinated brains increase the scope of environmental inputs, provide more neurophysiological resources for ‘testing,’ and allow for the ‘division of neurophysiological labour.’ This latter is particularly important: the ‘sharing’ of effective bottom-up behavioural outputs makes neurophysiological resources available which would otherwise be engaged in redundant processing. Social coordination, especially via language, means that BEF is no longer restricted to the neurophysiological resources and environmental inputs of individual brains. Our brains become cogs in a broader etiological process of effect feedback, part of a ‘social behavioural effect feedback mechanism’ (SBEF), an EEF result selected for because of its efficient exploitation of neurophysiological resources.

A couple of things of note: First, the social coordination of brains demands that individual brains reproduce bottom-up behavioural output in the absence of any ‘natural’ effect feedback environmental inputs. This is just to say that brains generate behaviours that can only be ‘tested’ through the environmental inputs provided by other brains and not through other natural environmental inputs. SBEF requires ‘altruism,’ which is to say, bottom-up behavioural outputs whose actual effectiveness transcends the environmental access of individual brains. The result is ‘suspended’ bottom-up behavioural outputs, reproduced behaviours disconnected from any effect feedback save those environmental inputs provided by other brains.

Second, this story possesses interesting implications for our brain’s primary means of reproducing outputs in other brains: language. Language is a kind of behaviour, which is to say a kind of bottom-up output (‘exo-neurophysiological,’ we might say, when manifested in overt behaviour, and ‘endo-neurophysiological’ when manifested in ‘thought’), whose primary EEF function is to rationalize neurophysiological resources through the reproduction of effective bottom-up behavioural outputs. Minimally, the trans-brain reproduction of bottom-up output specifications requires the ability to ‘mimic’ witnessed behaviour, which is to say, the ability to translate certain environmental inputs, the behaviour generated by other brains, into bottom-up outputs, the same behaviour generated by one’s own brain. The trans-brain reproduction of outputs via language, however, involves a far different species of ‘neurophysiological translation.’ Where mimicry involves a direct translation of environmental input into reproduced behavioural output, language involves the translation of behavioural output into a behavioural output utterly unlike the behaviour at issue, but which nonetheless effects the reproduction of that behavioural output in another brain. Language is a kind of bottom-up behavioural output distinct from the bottom-up behavioural output it reproduces in other brains. This means that language requires the brain be neurophysiologically reflexive in such a way that ‘output schemas’ belonging to one brain can be encoded into auditory output that, as an environmental input, can be decoded by a different brain and reproduced.
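The contrast between mimicry (direct reproduction of a witnessed output) and language (reproduction routed through an output unlike the behaviour itself) can be sketched schematically. The lexicon and schema names below are purely hypothetical placeholders:

```python
# Hypothetical sketch: mimicry reproduces a witnessed behaviour directly,
# while language routes it through a signal unlike the behaviour itself.

LEXICON = {"reach-and-twist": "pick", "bite-and-chew": "eat"}  # assumed pairings
SIGNALS = {v: k for k, v in LEXICON.items()}

def mimic(witnessed_schema: str) -> str:
    # environmental input -> the same behavioural output, directly
    return witnessed_schema

def speak(schema: str) -> str:
    # a brain encodes its output schema into an auditory output
    return LEXICON[schema]

def hear(signal: str) -> str:
    # another brain decodes the signal back into a reproducible schema
    return SIGNALS[signal]

# Both routes reproduce the output schema across brains, but only the
# linguistic route passes through an output distinct from the behaviour.
assert mimic("reach-and-twist") == hear(speak("reach-and-twist"))
```

The asymmetry the text emphasizes is that `speak` must have reflexive access to the schema it encodes, whereas `mimic` needs only the environmental input.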

Thus far the story is incomplete. The varieties of effect feedback discussed, although they mimic top-down output specification, are in no way instrumental. The effect that causes, which is the hallmark of top-down output specification, is always a past effect in these effect feedback models. At every turn output arises ‘from the bottom up.’ If I have resorted to instrumental idioms in the telling of the story, it is only as a kind of ‘explanatory shorthand.’ The outputs of bottom-up effect feedback processes happen to lend themselves to top-down understanding, understanding in terms of ‘for’ rather than in terms of ‘from.’ The real question is one of why we possess this kind of understanding in the first place. The answer to this question, I believe, involves the question of the brain’s neurophysiological reflexivity.

How reflexive is our brain? Although the definitive answer to this question awaits a more mature neuroscience, there are at least three good reasons to hazard ‘not very’ as an answer. The first reason has to do with the limitations of ‘introspective access.’ Our brains, we now know, engage in many activities of which we have little or no awareness, and insofar as awareness is the product of neurophysiological reflexivity, the limitations of the former suggest the limitations of the latter.

The second reason is structural: neurophysiological processors are fundamentally unable to process their own processing simultaneous to that processing. They are essentially ‘blind’ to their immediate processes. They may be linked to other processors that can process their processing as it is processed, but these ‘meta-processors’ are likewise blind to their own processing, and taken as a whole, simply add to the amount of unprocessed processing. What might be called ‘process asymmetry,’ then, constitutes an intrinsic structural constraint on neurophysiological reflexivity.
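The regress described here–meta-processors that can monitor another’s processing while remaining blind to their own–can be made concrete with a small sketch. The `Processor` class is a hypothetical stand-in, not a neural model:

```python
# Hypothetical sketch of 'process asymmetry': each processor can monitor
# another's processing but never its own; stacking meta-processors only
# relocates the blind spot.

class Processor:
    def __init__(self, name, monitors=None):
        self.name = name
        self.monitors = monitors  # the processor this one observes, if any

def unmonitored(processors):
    # processors whose own processing nothing in the system observes
    observed = {p.monitors.name for p in processors if p.monitors}
    return [p.name for p in processors if p.name not in observed]

base = Processor("base")
meta1 = Processor("meta1", monitors=base)
meta2 = Processor("meta2", monitors=meta1)

# However long the chain, the topmost processor remains unprocessed, so the
# system as a whole never achieves full reflexivity.
blind_spots = unmonitored([base, meta1, meta2])
```

Adding a `meta3` to monitor `meta2` simply makes `meta3` the new blind spot: the set of unprocessed processing never empties, which is the intrinsic structural constraint at issue.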

The third reason is evolutionary: The primary function of the brain is the generation of effective behavioural output. Since effective behavioural output requires reliable orientation in the world, one would expect our brains to be far less ‘concerned’ with their own machinations than with the machinations of their immediate environment. The only way for ‘neurophysiological introversion’ to emerge as an effective evolutionary output is if it enables the emergence of more effective behavioural output. This suggests that neurophysiological introversion only emerges as a byproduct of more effective ‘neurophysiological extroversion.’ Brains are irreflexive by default, and reflexive only in the service of a more effective irreflexivity. What might be called ‘default irreflexivity,’ then, constitutes an extrinsic evolutionary constraint on neurophysiological reflexivity.

The key to understanding the natural basis of intentionality, instrumentality, and normativity, I think, lies in these last two reasons. Process asymmetry assures that a brain possesses limited resources for ‘self-processing.’ Default irreflexivity suggests that inordinate self-processing would be selected against, that brains be primarily directed ‘away’ from themselves. The first reason, the limits of introspective access, is simply an effect of these constraints. So too, I think, are intentionality, instrumentality, and normativity.

As a consequence of process asymmetry and default irreflexivity, the brain is largely blind to itself. It cannot process itself in the same exhaustive manner that it processes environmental inputs. This is why, for instance, our thoughts and passions seem to be ‘nowhere’: because the brain is blind to itself as a brain, it has ‘no place to put them.’ Even the bulk of the neurophysiological reflexivity our brains do enjoy, the reflexivity constitutive of ‘consciousness,’ is yoked to the evolutionary demand for irreflexivity. This is why when we see red, for instance, we don’t see a neurophysiological process, but rather a feature of something in our environment–a red apple on the tree of knowledge, say.

This constitutive blindness pertains to bottom-up behavioural outputs as well. Although the brain’s behavioural outputs are in fact causally specified, the brain cannot process them as such because it cannot process itself as another process in the causal order of environmental inputs. Compared to environmental events, behavioural outputs seem to arise ex nihilo, to be ‘from nothing’ rather than from the neurophysiological processes of effect feedback that caused the behaviour to be reproduced. Since the neurophysiological processes involved in the causal specification of environmental events are unavailable, the brain must process its own behavioural outputs in another way. It must resort to some other means of behavioural output specification.

Instrumentality and normativity are the phenomenal manifestations of these other means. They constitute neurophysiologically entrenched ways to reflexively process bottom-up behavioural outputs in ‘bottomless’ terms. Our brains process their behavioural outputs as actions, as something paradoxically at once within and outside the causal circuit of their environment, because they cannot reflexively process themselves in the same manner they process environmental inputs.

Behavioural output specification in terms of instrumental propriety arises because our brain is able to reflexively process the neurophysiological processes associated with ‘desire’ and ‘effect expectation’ but not the neurophysiological processes that generate behavioural output. Eve’s arm moves and her hand twists off the apple. Since her brain can only reflexively process this behavioural output in terms of hunger and effect expectation, the ‘bottom drops out’ and her behaviour is processed as something purposive, as an action specified by ‘for’ rather than ‘from.’

Behavioural output specification in terms of normative propriety differs primarily in that the brain has even less environmental and reflexive access. As mentioned, the social coordination of brains requires the reproduction of behavioural outputs in the absence of effect feedback–or ‘altruism.’ Such ‘suspended behavioural outputs,’ when reflexively processed at all, escape processing in terms of motives and effects, and so are specified as ‘proper for their own sake.’

Despite their sketchiness, these explanations are compelling in at least one respect. The brain is primarily a behavioural output effect feedback mechanism that, due to structural and evolutionary constraints, must reflexively process its behavioural outputs in a ‘bottomless’ way. Is it just an uncanny coincidence that ‘bottomlessness’ also characterizes our instrumental and normative specifications of human behaviour? I think not.

The problem, however, is that once we acknowledge the broad-brush veracity of this account we also acknowledge that instrumentality and normativity are subreptive. Our brains do not make ‘errors,’ they only produce behavioural outputs that either do or do not effect reproduction at the levels of BEF, SBEF, and EEF. Our brains are ‘bottom-up’ all the way down. They only reflexively process behavioural outputs as top-down ‘successes’ and ‘mistakes’ because they cannot, in the first instance, process them otherwise. Our brains, in other words, fundamentally misunderstand themselves. Our apparent ability to explain and predict the behaviour of others in instrumental and normative terms is simply due to the fact, on the one hand, that different brains systematically misunderstand themselves in similar ways, and on the other, that this systematic misunderstanding is itself systematically related to how things are. When I put an ice-cube on the thermostat and say, ‘I’ve tricked it into thinking it’s cold so that it’ll turn the heat on,’ I’ve completely misunderstood the thermostat by attributing beliefs and desires to it, but in such a way that I can successfully predict its output. This is all that matters from an evolutionary standpoint: that my brain falls into effective etiological relationships with its environment. And this is why instrumentality and normativity are subreptive rather than illusory through and through: our brains are systematically self-deluded in a manner that effects reproduction at the level of EEF. Instrumentality and normativity, one might say, are components of our brain’s ‘native ideology.’
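The thermostat example can be put in mechanical terms. The sketch below is my own illustration, not anything in the original: the device embodies only a causal rule, with no beliefs or desires anywhere in it, yet the intentional description (‘it thinks it’s cold’) predicts its output perfectly. All names and values here are hypothetical.

```python
# A thermostat reduced to its causal rule. Nothing here is a belief
# or a desire, yet the intentional stance predicts its output.
# SETPOINT and the function name are illustrative choices, not the
# author's terminology.

SETPOINT = 20.0  # degrees C at which the device switches the heat

def thermostat(sensed_temp):
    """Purely causal rule: heat on iff the sensed temperature
    falls below the setpoint."""
    return "heat on" if sensed_temp < SETPOINT else "heat off"

# An ice-cube on the sensor lowers the sensed temperature, so the
# intentional prediction ('I've tricked it') and the causal rule
# agree on the output:
print(thermostat(5.0))   # ice-cube on the sensor
print(thermostat(25.0))  # warm room, no trickery
```

The point of the sketch is that successful intentional prediction requires only that the misattribution be systematically related to the device’s actual causal structure.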

The dimensions of this problem become evident when we consider intentionality. Default irreflexivity, it was suggested, was the reason why our brains process red as a feature belonging to the world and not as a product of a neurophysiological process. The EEF impetus behind the neurophysiological reflexivity constitutive of conscious experience (and required for translation into linguistic behavioural output) is the production and reproduction of effective behavioural outputs, where ‘effective’ simply means ‘effects reproduction.’ Where our neurophysiology performs operations on environmental inputs that generate behavioural outputs, we struggle to survive in the world. Our constitutive neurophysiology is nowhere to be found in the ‘intentional scene’ of ‘being in the world.’ In other words, our brains are phenomenally transparent.

Brain transparency suggests that intentionality constitutes a ‘static’ analogue to instrumentality and normativity. As with instrumental and normative behavioural outputs, intentional experience seems to arise ex nihilo. In the first instance, Eve simply sees a red apple. She cannot see a red apple causing her to see a red apple because this would mean, given the transitivity of causal environmental inputs, that either she sees the apple before she sees the apple, or that what she sees is not a red apple, but something caused by a red apple. The absurdity of the first disjunct suggests that direct perceptual intentionality must be bottomless. The second disjunct, on the other hand, suggests that perhaps mediated perceptual intentionality need not be bottomless. Perhaps Eve ‘visually represents’ via some neural state the red apple causing her to visually represent a red apple. Perhaps intentionality does not require bottomlessness after all.

But aside from the staid perplexities of representationalism, the problem here is that Eve’s neurophysiologically instantiated representation, distributed or otherwise, must be about the red apple that causes it, which is to say, it must possess a special physical relation to what causes it. The obvious candidate for this relation, one might think, is the relation of ‘being caused.’ But since almost everything, at a certain level of physical description at least, possesses the relation of being caused without possessing the relation of aboutness, we need to specify what distinguishes the being caused of her representational neural state such that it is about a red apple rather than simply caused by a red apple.

The causal chain between Eve’s representational neural state and the red apple is not direct, but mediated in innumerable ways. The red apple is simply one transitive link in a line of causes that extend beyond and between it, and it belongs to merely one line of transitive causation among a welter of others. For a neural state to be about a red apple, then, it must somehow specify both the line and the link from a cacophony of prior causes.

Note, however, that although ‘etiological line and link specification’ is a necessary condition of ‘natural aboutness,’ it is not sufficient. One could easily imagine a simple mechanism, X, consisting of a spring-loaded armature fixed by a latch that can only be released when hit with a certain degree of force. X is placed on a pool table, and three billiard balls of increasing mass, A, B, and C, are bounced off a bumper at such an angle that each hits X in turn. When either of the lighter balls, A or B, hits X, it produces no output, but when it is hit by the heaviest ball, C, the latch is released, and the spring-loaded armature swings out, striking the bumper that ‘reflected’ C. Although this mechanism ‘specifies’ both a line, the movement of C, and a link, the bumper, there is no sense in which it is about or represents or refers to the bumper.
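Mechanism X is simple enough to simulate outright, which makes the point vivid: the simulation below (my own illustration; every name in it is hypothetical) specifies both the line (C’s path) and the link (the bumper) purely through a force threshold, and there is nowhere in it for aboutness to reside.

```python
# Mechanism X: a latch that releases only when struck with
# sufficient force. It 'specifies' a line and a link purely
# etiologically; nothing in it represents anything. All names
# and values are illustrative, not the author's.

RELEASE_THRESHOLD = 3.0  # force needed to trip the latch

def mechanism_x(impact_force):
    """Return X's output for a single impact: nothing if the
    latch holds, an armature strike if it releases."""
    if impact_force < RELEASE_THRESHOLD:
        return None                       # latch holds: no output
    return "armature strikes bumper"      # latch releases

# Three balls of increasing mass bounced off the bumper in turn.
balls = {"A": 1.0, "B": 2.0, "C": 3.0}
outputs = {name: mechanism_x(force) for name, force in balls.items()}
# Only C trips the latch; A and B produce no output.
```

Every relation the code embodies is one of causing and being caused; ‘specification’ here is exhausted by the threshold comparison.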

And this would be true even if the mechanism and its environment were astronomically complex.

One could, by making the mechanism the neurobiological product of three billion years of evolutionary effect feedback, vastly inflate the ‘processing interval’ between input and output lines, such that the mechanism possesses structural ‘templates’ for automatic link specification, is able to process subsequent repetitions of input lines according to prior link specifications, and is even able to generate outputs that lead to probable future links. In the end, however, nothing belonging to the mechanism would ‘represent,’ ‘refer to,’ or ‘be about’ any particular link.

Why? Because the essential relations would still be etiological and not intentional. The crucial thing for the mechanism would not be ‘to get it right’ through the neurophysiological processing of ‘true representations,’ but rather to fall into effective causal relationships with its environment, which is to say, relationships of being caused and causing that effect reproduction. Such ‘etiological equilibrium’ [8] does not require that neural states ‘be about’ anything at all, only that they, as components of a greater neuro-etiological loom, weave lines of input into lines of effective output. Certain input and output lines may, because of the vagaries of processing, miss effective links, but there would be no ‘error’ here, any more than X would be in error if its armature seized; there would simply be etiologically ineffective output.

Etiological line and link specification, no matter how complicated, is not a sufficient condition of natural aboutness, but then neither, it seems, is the neurophysiological instantiation of natural aboutness a necessary condition of producing mechanisms, ‘neuro-etiological looms,’ every bit as sophisticated and as able as Eve’s brain.[9] The suggestion, of course, is that Eve’s brain just is an astronomically complicated version of mechanism X, bereft of representations. The problem, however, is that Eve is at once indubitably intentional and no more than her brain.

The purpose of this last installment of the story was to provide a sense of the spookiness of intentionality, of the way it seems to evaporate from our brains, which belong to the world, just as assuredly as it has evaporated from the world at large since the Enlightenment.[10] But why is it so spooky?

If phenomenal consciousness is a product of neurophysiological reflexivity, and if neurophysiological reflexivity is constrained by process asymmetry and default irreflexivity, then we should expect the structure of phenomenal consciousness to reflect those constraints in some manner. As a central structural feature of phenomenal consciousness, intentionality should also reflect those constraints. So to return to our earlier example: Eve simply sees a red apple. Saying that she sees the apple causing her to see the apple leads us to absurdity. The suggestion that she visually represents the apple that causes her to visually represent the apple sends us on a hunt for something that cannot be found, and perhaps more importantly, need not be found: the neural correlates of representation. This is simply because intentionality expresses the structural and evolutionary constraints on neurophysiological reflexivity through its bottomlessness. In the first instance, Eve sees the apple. She is given the link without the line. The impact of photons across her retina, the subsequent encoding and transmission along the optical nerve, the preliminary subcortical processing, and the subsequent processing in the primary visual cortex are simply nowhere to be found in her intentional experience of seeing the apple. If her brain could exhaustively process all these bottom-up processes,[11] it would be difficult if not impossible to imagine what her resulting ‘etiological experience’ would be, but it would not be intentional.

So what about linguistic intentionality? Given the limits of our neurophysiological reflexivity, there is no reason to suppose that language, as a matter of empirical fact, must be intentional. According to the story told here, language is simply a specialized etiological line, something that locally effects the reproduction of effective bottom-up behavioural outputs in other brains and so globally effects the rationalization of neurophysiological resources. For instance, by encoding environmental inputs into auditory outputs, Eve’s brain can effect ‘etiological contact’ between Adam’s brain and environments that would otherwise lie beyond his brain’s ‘range of input.’ Likewise, by encoding behavioural outputs (processed as environmental inputs) into auditory outputs, Eve’s brain can effect the reproduction of effective behavioural outputs in Adam’s brain. The probabilities of Adam’s brain falling into etiological equilibrium are thereby increased. At no point does the ‘etio-linguistic line’ effected between Eve’s brain and Adam’s brain require the exotic property of aboutness.

From the ‘bottomless’ standpoint of brain transparency, on the other hand, Eve simply says, ‘There’s a tree over there with red apples you should eat.’ Since Adam’s brain cannot etiologically process itself, Adam simply ‘gets her meaning.’ The prespecified links triggered by different components of her linguistic auditory output are bottomless, and so must be related to his environment in a bottomless way. Hence, they refer to red apples over yonder. If the apples turn out to be green, then neuro-etiological disequilibrium results, which Adam’s brain, once again constrained by its self-blindness, bottomlessly interprets as ‘falsifying Eve’s reference.’ Adam ‘chastises’ Eve for her ‘mistake,’ while his brain generates bottom-up auditory output that effects her brain’s return to etiological equilibrium, and hence an overall increase in the probability of reproductive success.

Like instrumentality and normativity, then, intentionality is subreptive. Given that the brain is unable to interpret its own processing in etiological terms (which is to say, in the same way it processes local environmental inputs), it must interpret its own relation to objects as non-etiological or ‘bottomless’ in some respect. This is what renders Eve’s experience of a red apple a direct experience of a red apple. Intentionality simply is brain transparency, a phenomenal artifact of the brain’s bottomless self-processing, which is itself a structural and evolutionary artifact of process asymmetry and default irreflexivity.

This means there is nothing intentional about the neural correlates of intentionality, in the same way there is nothing red about the neural correlates of red, and nothing normative about the neural correlates of normativity. Brains, on this account, do not represent, know, or believe anything. They merely physically process environmental inputs into outputs that either do or do not effect reproduction. The semantic furniture of our ‘minds’ is merely a bottomless artifact of the constraints placed on the neurophysiological reflexivity constitutive of phenomenal experience, of the fact that the bottom-up provenance of our experience cannot be reflexively processed in the way environmental inputs are, and so must be processed otherwise.

The Hard Problem and the ‘Bottleneck Thesis’

This story, then, apparently amounts to a radically eliminativist hypothesis regarding intentionality in its general sense (as including what has hitherto been discussed as intentionality, instrumentality, and normativity). But such is not the case–at least not in any straightforward way.

Say our phenomenal self-sense of freedom is someday explained in terms of the neuro-etiological complexities of process asymmetry. Plainly such an explanation would be at once a falsification of our sense of free will. We feel as though our acts and thoughts are ‘undetermined’ in some essential way, when in fact they are simply determined in such a way that our brains cannot interpret them as such. Our self-sense of freedom is at once explained and explained away. Beyond a certain threshold we feel a sense of ‘etiological independence,’ but we feel wrong.

The same, I think, would be true of intentionality, whether or not one thinks it conceptually presupposes freedom. Any naturalistic explanation of intentionality will constitute an explanation of ‘what lies at the bottom’ of its ‘bottomlessness.’ The explanation, in other words, would be at once a falsification. We feel as though we are intentionally related to the world, but we feel wrong.

The crucial difference between our self-sense of freedom and our self-sense of intentionality, however, is that there is absolutely no way to falsify the latter. Why? Because we can only make sense of ‘falsification’ in intentional terms. We cannot speak about our ‘false sense of aboutness’ or our ‘false sense of epistemic justification’ without committing an obvious performative contradiction. Any explanation/falsification of intentionality will at once presuppose the truth of intentionality. This is the Bottleneck, the place where methodological naturalism, having scourged the world of intentionality, must pull up short, or risk scourging itself of meaning as well. Given our neurophysiology, we can, with some effort, comprehend our bottom-up neurophysiology, but we cannot comprehend it as identical to us without rendering this comprehension incomprehensible.

The Bottleneck Thesis is simply this: we are natural in such a way that it is impossible to fully conceive of ourselves as natural. In other words, we are our brains in such a way that we can only understand ourselves as something other than our brains. Expressed in this way, the thesis is not overtly contradictory. It possesses an ontological component, that we are fundamentally physical, and an epistemological component, that we cannot know ourselves as such. The plank in Reason breaks when we understand the medial significance of the claim–step inside it as it were. If we cannot understand ourselves as natural, then we must understand ourselves as something else. And indeed we do, as we must, understand ourselves as agents, knowers, sinners, and so on. We may define this something else in any number of ways, but they all share one thing in common: a commitment to a spooky bottomless ontology, be it social, existential, or otherwise, that is fundamentally incompatible with naturalism. We can disenchant the world, but not ourselves.

Although not contradictory, the Bottleneck Thesis does place us in a powerful cognitive double-bind. Despite the sheen of philosophical respectability, when we speak of the irreducibility of consciousness and norms as a way to secure the priority of life-worlds and language-games as ‘unexplained explainers,’ we are claiming an exemption from the natural. How could this not be tendentious? The only thing that separates our supra-natural posits from supernatural things such as souls, angels, and psychic abilities is the rigour of our philosophical rationale. Not a comforting thought, given philosophy’s track record. Moreover, these supra-natural posits are in fact fundamentally natural. Their apparent irreducibility is merely a subreptive artifact of our natural inability to understand them as such in the first instance. But then, once again, the only way we can assert this is by presupposing the very irreducibility we are attempting to explain away. We simply cannot be fundamentally natural because of the way we are fundamentally natural.

Given the absurdity of this, should we not just dismiss the Bottleneck out of hand? Perhaps, but at least two considerations should give us pause. First, there is a sense in which the Bottleneck Thesis is justified as an inference to the best explanation for the cognitive disarray that is our bread and butter.

Say sentients belonging to an advanced alien civilization found some dead human astronauts and studied their neurophysiology. Say these sentients were similar to us in every physiological respect save that evolution was far kinder to them, allowing them to neurophysiologically process their own neurophysiology the way they process environmental inputs, such that for them introspection was a viable mode of scientific investigation. Where we simply see apples in the first instance, they see apples as neurophysiological results in the first instance.

Studying the astronauts, these alien researchers discover a whole array of neuro-functional similarities, so that they can reliably conclude that this does that and that does this and so on. The primary difference they find, however, is that our brains have an extremely limited capacity for self-processing, and after intensive debate they conclude that human brains likely lack the ability to process themselves as something belonging to the etiological order of their environment. Human brains, they realize, might understand themselves in anetiological terms. They then begin speculating about what it would be like to be human. What would anetiological phenomenal awareness look like? They cannot imagine this, so they shift to less taxing speculations.

On the issue of human self-understanding, the alien researchers suggest that with the early development of their scientific understanding, humans, remarkably, would begin to see themselves as an exception to the natural order of things, as something apart from their brains, and would be unable, no matter what the evidence to the contrary, to divest themselves of the intuition. ‘There would be much neuro-etiological disequilibrium’ they suggest, ‘regarding what they are.’

On the issue of the trans-brain coordination of reproduced behavioural outputs, the alien researchers conclude humans would be forced to specify their behaviours in anetiological terms, as behaviour somehow exempt from the etiology of behaviour, and as a result would be unable to reconcile this understanding with their scientific understanding of the world. Given that humans are capable of scientific understanding (the specimens were, after all, astronauts), they would perhaps attempt to organize their understanding of their behaviour in a scientific manner, perhaps elaborate a kind of ‘anetiological ethology,’ but they would be perpetually perplexed by their inability to reconcile that understanding with their science. ‘This species,’ they conclude, ‘must be afflicted by neuro-etiological disequilibrium regarding the reality of their modes of behaviour specification.’

Human understanding of their linguistic behavioural outputs, the alien researchers assume, would likewise be characterized by neuro-etiological disequilibrium. Once again the humans’ understanding would be anetiological, and given the maturation of their science, they might begin to question the reality of their hardwired default assumptions. ‘There might be some anetiological X,’ the aliens conclude, ‘that for them constitutes the heart of their immediate linguistic understanding, but it would seem to vanish every time they searched for it.’ Some more daring researchers suggest humans might eventually abandon this X, attempt to understand linguistic neuro-etiological lines in terms of anetiological functions embedded in the communal repetition of anetiological behavioural outputs. But this would provide no escape from neuro-etiological disequilibrium, they point out, since it would bring them no closer to a scientific understanding of their linguistic behavioural outputs.

And so the aliens continue speculating, all the while marvelling at the poor blinkered creatures, and at the capricious whim of evolutionary fate that perpetually prevents them from effectively rationalizing their neurophysiological resources.

Is this story that farfetched? Could aliens, given intact specimens, predict things like the mind/body problem, the problem of moral cognitivism, the problem of meaning, and the like? With enough patience and ingenuity, I suspect they could. The Bottleneck Thesis, I think, provides the framework for a very plausible explanation of the intractable difficulties associated with these and other issues.[12] The theoretical uroboros of the intentional and the physical, the human and the natural, has a long and hoary history, repeated time and again in drastically different forms through a variety of contexts. It is as though we continually find ourselves, in Foucault’s evocative words, at once “bound to the back of a tiger”[13] and “in the place belonging to the king.”[14] This apparent paradox is a fact of our intellectual history, one that requires a factual explanation. The only way to discredit the Bottleneck Thesis in this respect is to offer a better explanation.

The second thing that should give us pause before rejecting the Bottleneck Thesis is that it constitutes a bet made on an eminently plausible neuro-evolutionary hypothesis: that our neurophysiology did not evolve to process itself the way it processes environmental inputs. Given evolution’s penchant for shortcuts and morphological malapropisms, the possibility of such a neurophysiologically entrenched blind-spot, although grounds for consternation, should not be grounds for surprise. So we have evolved, and so long as we continue to reproduce, our genes simply will not give a damn. It would be pie-eyed optimism to assume otherwise.

There are strong empirical and conceptual grounds, then, to think the Bottleneck Thesis is true. And short of actually discovering intentionality in nature, there is no way to rule it out as a possibility. Certainly the absurdity of its consequences cannot tell against it, because such absurdity is precisely what one would expect given the truth of the Bottleneck. If we have in fact evolved in such a way that we cannot understand ourselves as part of nature, then we should expect to be afflicted by cognitive difficulties at crucial junctures in our thought. We should expect philosophy.

Hence, in some bizarre but nontrivial sense, the fact that I am even arguing for the Bottleneck Thesis speaks to its truth. There are other possible explanations, certainly, but I think they would be hard pressed to muster comparable support. The Bottleneck Thesis is simple, empirically refutable, extraordinarily comprehensive, and consistent in its inconsistency.


[1] See McGinn (1991).

[2] See Nagel (1974).

[3] See Jackson (1982).

[4] See Horgan (1984).

[5] Of course this leaves out another disturbing (and as I hope to show, far more likely) possibility: that we will discover nothing in the brain even remotely suggesting knowledge as we presently and fractiously conceive it. If this were the case, then the issue of whether Eve’s neurophysiology of experiencing red constitutes factual knowledge will be moot.

[6] If medial neurophysiological explanation is so difficult, why does the neurophysiological explanation of specific cognitive abilities, such as face recognition, seem relatively unproblematic? One possibility is that the brain’s self-interpretation of its cognitive abilities is simply more commensurate with its environmentally mediated interpretation of neurophysiological functions. Another possibility is that our brain has ‘no prior opinion’ of its ability to recognize faces, or any other basic cognitive ability, outside of what it is like to possess these abilities. In the absence of any self-interpretation of its basic ‘cognitive skill set,’ the brain would encounter none of the problems above when assigning medial functions to its environmentally mediated interpretation of neurophysiology.

[7] ‘Top-down’ and ‘bottom-up,’ as they are used here, should not be confused with their conventional methodological usage in cognitive science. For a definition of these usages, see Churchland (1988), p. 96.

[8] How might one describe a purely natural ‘noncognitive knowledge’? Could one look at science as a kind of trans-brain neurophysiological cooperative, falling into neuro-etiological equilibrium with ever more environmental etiological lines? This is difficult to imagine, yes, but certainly worth serious investigation in its own right.

[9] What about zombies? As Dennett describes it, the central thrust of ‘zombism’ is that mechanistic theories of consciousness are defeated by their inability to account for the difference between ‘conscious persons’ and ‘perfect zombies.’ Understood in these terms, the zombie is simply an intuitively powerful way to illustrate the explanatory gap, the inability of methodological naturalism to explain phenomenal experience. According to the present thesis, the problem is not that consciousness is something ‘special’ in the order of things, but rather that our brain cannot fit its self-interpretation into its interpretation of the world. This does not mean that our brain’s self-interpretation plays no natural functional role (that consciousness is epiphenomenal), but rather that it cannot, because of intrinsic structural constraints and extrinsic evolutionary constraints, fit its self-interpretation into any natural functional role. Medial functional explanation fails not because we are something ‘more’ than our brain, but because we are our brain. Given this, it becomes clear that the ‘zombie question’ is loaded. It assumes the failure of natural medial functional explanation warrants a metaphysical rather than an epistemological diagnosis.

[10] There are good grounds for a ‘pessimistic induction’ here. Given that methodological naturalism has, in fits and starts, purged the world of intentionality (in Weber’s terms, ‘disenchanted it’), and given that our brains are part of the world, do we not have inductive grounds to expect that it will purge our brains of intentionality as well?

[11] The problem, of course, is that such processing would itself remain unprocessed. Does this suggest that intentionality is something of an evolutionary inevitability?

[12] And further, I think it might explain why we find certain argumentative strategies so appealing. Is it just a coincidence that ‘functional exemption’ plays such a prominent role in things like inverted qualia and zombie arguments?

[13] Foucault (1994), p. 322.

[14] Foucault (1994), p. 312.


Churchland, P. (1988), Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, (Cambridge: MIT Press).

Foucault, M. (1994), The Order of Things: An Archaeology of the Human Sciences, translated by A. M. Sheridan Smith, (New York: Vintage Books).

Horgan, T. (1984), “Jackson on Physical Information and Qualia,” Philosophical Quarterly, 34, pp. 147-52.

Jackson, F. (1982), “Epiphenomenal Qualia,” Philosophical Quarterly, 32, pp. 127-36.

McGinn, C. (1991), The Problem of Consciousness (Oxford: Blackwell).

Nagel, T. (1974), “What Is It Like to Be a Bat?,” The Philosophical Review, 83, pp. 435-50.