The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain
by rsbakker
.
Introduction
Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seem equally at home in the theoretical aether (as we are here) and in the laboratory (as he is there). His latest book, Consciousness and the Brain, provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness research has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.
I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospect of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.
And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.
Perhaps the starkest example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:
“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264
And yet in his penultimate line, no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.
As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, recurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader-friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see the passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half, to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he has no account whatsoever of what we seem to take consciousness to be—namely, ourselves.
For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…
The one that keeps me lying awake at night.
.
Function Dysfunction
Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value, and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side effect of the more important, unconscious business of calculating brute possibilities. As he writes:
“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91
Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from endorsing epiphenomenalism, the claim that consciousness has no function whatsoever (as, say, Susan Pockett (2004) has argued), both of these authors contend that it is ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.
Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with the denial of conscious efficacy. He has run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think the distinction between these two very different perspectives on consciousness would be easy to police, but such is not the case (see Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.
For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.
Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness, as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared with the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.
The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.
Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific inquiry. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.
.
The Global Neuronal Workspace
As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.
According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:
“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165
A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:
“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105
Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the information asymmetry between conscious and unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is that the nonconscious brain performs the vast, vast majority of the brain’s calculations.
The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:
“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60
The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.
“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
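Dehaene’s image of unconscious processors as ‘nearly optimal statisticians’ is, at bottom, Bayesian inference. Purely as an illustration (the hypotheses, cue, and numbers below are invented for the sketch, not drawn from Dehaene), a single update over two rival interpretations of an ambiguous retinal cue might look like this:

```python
# Toy illustration, not Dehaene's model: unconscious perception as a
# Bayesian update over competing distal-world hypotheses. All numbers
# here are hypothetical.

def bayes_update(priors, likelihoods):
    """Return the posterior P(hypothesis | cue) given prior probabilities
    and the likelihood P(cue | hypothesis) for each hypothesis."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two rival interpretations of the same ambiguous retinal stimulus.
priors = {"predator": 0.1, "windblown_branch": 0.9}

# A 'faint movement' cue is far more likely given a lurking predator.
likelihoods = {"predator": 0.8, "windblown_branch": 0.2}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # the predator hypothesis rises well above its prior
```

Scaled up across myriad parallel processors and cues, this is the kind of hypothesis-ranking Dehaene describes; the point of the toy is only that a slight perceptual hint can dramatically reweight interpretations without any conscious step intervening.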
But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, when bottom-up interpretations match top-down expectations and behaviour is routine, such selection need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain, where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:
“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140
As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive to not be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).
.
A Mile Wide and an Inch Thin
Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.
Once an interpretation commands the global workspace, it becomes available for processing via the nonconscious input of a number of different processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.
Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).
And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).
All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:
“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105
The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kind of answers philosophy, rooting through the mire of intuition, has sought for millennia.
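The production-system cycle Dehaene sketches (parallel rule-matching, stochastic competition, a single winning ‘ignition,’ then broadcast) can be rendered as a toy program. This is a hedged illustration only: the rules, symbols, and priorities below are invented, and nothing here models actual neurobiology.

```python
import random

# Minimal sketch of a production system of the kind Dehaene analogizes
# to conscious ignition. Working memory is a set of symbols; each rule
# fires when its 'if' symbols are all present, and one winner per cycle
# rewrites working memory (the serial stage).

def run_cycles(working_memory, rules, n_cycles, rng):
    for _ in range(n_cycles):
        # 'Unconscious' stage: every rule tests working memory in parallel.
        matches = [r for r in rules if r["if"] <= working_memory]
        if not matches:
            break
        # Stochastic prioritizing system: exactly one rule wins per cycle.
        weights = [r["priority"] for r in matches]
        winner = rng.choices(matches, weights=weights, k=1)[0]
        # Ignition and broadcast: the winner alone updates working memory.
        working_memory |= winner["then"]
    return working_memory

# Hypothetical rules chaining a percept toward a reportable conclusion.
rules = [
    {"if": {"saw_movement"}, "then": {"attend"}, "priority": 2.0},
    {"if": {"attend"}, "then": {"identify"}, "priority": 1.0},
    {"if": {"identify"}, "then": {"report"}, "priority": 1.0},
]

wm = run_cycles({"saw_movement"}, rules, n_cycles=20, rng=random.Random(0))
print(wm)
```

The design choice worth noticing is the bottleneck: however many rules match in parallel, only one may change working memory per cycle, which is precisely the seriality Dehaene ascribes to the conscious stage of his ‘hybrid serial-parallel machine.’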
Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.
Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.
So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:
“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74
This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, no more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:
“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113
Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of the ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!
Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.
.
From Geocentrism to ‘Noocentrism’
“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow“).
Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.
We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:
“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262
Referencing the way modern molecular biology has overthrown vitalism, he continues:
“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262
I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness not to run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.
Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.
Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all or nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.
Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. While other mammalian species clearly seem to possess metacognitive capacity, it appears restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:
“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253
Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.
So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to conscious access. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.
This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.
Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.
But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Applied to Dehaene’s Global Neuronal Workspace Theory of Consciousness, BBT can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.
On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See, “Cognition Obscura“). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.
But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lie awake at night, pondering our pondering…
You say: “Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.”
Just so I can picture this I want to make sure what we mean by functionalism. Churchland gives a succinct abbreviation in his work Matter and Consciousness as:
“According to functionalism, the essential or defining feature of any type of mental state is the set of causal relations it bears to (1) environmental effects on the body, (2) other types of mental states, and (3) bodily behavior. Pain, for example, characteristically results from some bodily damage or trauma; it causes distress, annoyance, and practical reasoning aimed at relief; and it causes wincing, blanching, verbal outbursts, and nursing of the traumatized area. Any state that plays exactly that functional role is a pain, according to functionalism. Similarly, other types of mental states (sensations, fears, beliefs, and so on) are also defined by their unique causal roles in a complex economy of internal states mediating sensory inputs and behavioral outputs.”
So you accept this and agree that consciousness allows for multiple functions being causally discharged by this functional apparatus, but you disallow anyone having some privileged metacognitive access – an introspective access through reflection – by which one could analyze or “cognize” these processes (i.e., to fit your BBT theoretic: they are blind to these processes and have no Archimedean point outside them (metacognition) from which to gain access to their workings).
Sorry for being long-winded; I just want to clarify the details to make sure of your terms, since this concept of “ulterior functionalism” is more of a qualification of the standard theory by adding your BBT into it. Yes? So ulterior functionalism is just the standard isomorphic functionalist approach with one addition: BBT.
More and more, as I read the literature, it’s making sense. As I read Churchland, then, you are not a strict eliminative materialist at all, but an explicit functionalist with certain qualified additions to what that means. Yes, no, or am I as usual – all wet behind the ears 🙂
I’m actually just using the term in the sense that Dehaene specifies, namely to say that consciousness has effects that can be specified in standard terms of reductive analysis. ‘Functionalism’ with reference to the mental is a hairball he steers clear of – and for good reason I think. Remaining agnostic on the subject of the mental contributes to the clarity of his message.
For my part, I’m ultimately arguing that there is no such thing as the ‘mental,’ that Craver and Piccinini are right, that ‘mental processes’ are better understood as ‘mechanism sketches.’ I think, as Dehaene does, that consciousness is a mechanism of the brain. This ‘mechanical functionalism’ is quite pedestrian, bereft of supra-natural intentional properties, and so ‘ulterior’ to what we think we cognize via metacognition/introspection. This is where my eliminativism comes in.
OK, that clarifies it… yes, was getting confused with other statements you’ve offered in other posts. Two different issues then… yea, that makes sense now!
You say: “This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.”
Will our models of the brain ever align with the actuality of the brain?
The vast amount of unconscious processing that goes into just doing something as simple as picking up a ball and throwing it at a target seems manifestly incalculable to our conscious mind; yet we are developing tools that, as they become more adaptable to testing situations, will hopefully be able to actually quantify the complete circuit of these processes end to end someday. What will that tell us? Can we, like physics, quantify the brain? If so, this would allow us to reverse engineer the brain much as mathematical physics is reverse engineering the universe and then testing it using the colliders, etc. Sounds like we’ll need to continue down the path of developing a parallel robotic/intelligence path to be able to successfully prove our case of the brain as well – or something like it.
… as well as be prepared to blow up our traditional forms of self-understanding in the process! All bets are off when it comes to the Posthuman.
It almost seems that those who eliminate the folk psychology of the past and completely immerse themselves in the scientific image have already entered the arena of the posthuman; or, let’s say at least – gone beyond the humanist world view and entered the posthumanist, which is a first step toward what David Roden and others are seeing in the posthuman singularity. Almost like an event horizon beyond which if we take that plunge there will be no going back… we may already be on that path with all these NBIC convergence technologies… just a matter of time and science?
The point of BBT is basically that we’ve always been posthuman, and only now have the tools to recognize this. NBIC is simply going to force the issue. The whole point of post-intentional philosophy, as I take it, is to already have some kind of workable conceptuality in place when that happens.
Reading Redish’s book The Mind Within the Brain, which seems to agree with most of what you’re saying. His conception is that our decision-making processes make us what we are, and that most of those decisions are made in the brain, not in consciousness. He does see consciousness as a mechanism that has a role in other decision processes, but I haven’t read enough of the book to be clear on this. His basic premise:
In order to be able to scientifically measure decision-making, we define decisions as “taking an action.” There are multiple decision-making systems within each of us. The actions we take are a consequence of the interactions of those systems. Our irrationality occurs when those multiple systems disagree with each other. Most of these systems are unconscious sub-systems, not conscious functions, although this does not preclude the mechanism of consciousness itself from making certain functional decisions.
Redish, A. David (2013-06-19). The Mind within the Brain: How We Make Decisions and How those Decisions Go Wrong (p. 3). Oxford University Press, USA. Kindle Edition.
I’ll definitely take a looksee.
Awesome post. I’m not really sure what to say, so I’ll field some notation and see what comes up. Sometimes (often) literary and philosophic references will escape me here, and it was nice to read a BBT-focused take on psychology (especially Dehaene). Don’t feel the need to respond to anything in particular.
Also – cannot wait for TTBD.
He thinks neuroscience can be reconciled with a meaningful and nondelusional life.
As you wrote, Dehaene is narrow-casting to a generally pop-science crowd and very few of his contemporaries. Isn’t the honest pursuit of BBT on your part a hope that new information will reframe neglect into a type of “volitional action?”
Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.
I know there is Cognition Obscura, Beastiary of Consciousnesses, Ex Nihilo, and Semantica, but I’m not sure you’ve offered that many formed speculations on possible metacognitive cognitions so much as you’ve elucidated that we don’t metacognize as, or what, we think we’re cogitating. But you are working from the assumption that some kind of volition is possible? (I very much hope so too but I’m just checking where you stand at this juncture.)
He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed.
This highlights why science needs philosophy and why philosophy shouldn’t be considered dead. It also validates TPB as neurophilosophy, in my opinion.
For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not.
This is a really good way to frame the brain’s concerted function, though I’m not sure your take does justice to the complexity involved for readers who might not be so versed in Dehaene’s experimental context. Each “subjective state” might well be characterized by a differently patterned distribution of excited and inhibited neurons across the brain – its distinct signature, as he highlighted. And each “subjective state’s signature” might well be constituted of a hundred different “signature” modules, as it were, acting in concert.
Topically, you and I conversed once about how even if it is possible to train a decrease (increase?) in perceptual thresholds (that point of “ignition” in this review), there would eventually be a biological cap at BBT’s edge. Any more thoughts on how particular signatures might act to unlock or leverage more complex pattern recognition (not necessarily unlike the “key” metaphor in LTG, which always stunned me as brilliant, but in this case referencing increased “intelligence,” rather than the “magic pattern for murderer”)?
All this machinery belongs to what is called the ‘sensorimotor loop.’
During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas.
This. It’s so complex to fathom, let alone for you to paraphrase here. It makes me suspect that there is room enough for us to be mistaken about preliminary predictions at this point. I like the comment you made to noir about remaining agnostic about it.
Consciousness is the Medieval Latin of neural processing.
One-liners like this get me super-excited for TTBD and Semantica.
And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).
I don’t actually think the research will continue to support this. I immediately think about experiments testing articulatory suppression and the different disruptions and kinds of memory. Or more anecdotally, Da Vinci writing two different things simultaneously or James Garfield writing in Greek and Latin simultaneously, let alone the possible different patterns of cognitive simultaneity. Imagining fractal pandas, right?
The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to conscious access.
Having a cognizant experience of the actual, neurobiological origins may be impossible, though we should be able to cleave to a more extreme edge than we do now. However, what kind of metacognition do you think eliminativism can leverage then? (Again, I think that “I” can effect change by constructing practices based on neurophysiological knowledge but just asking).
Lol – overall, reflecting your respect for Dehaene’s yardage gained, your major (and seemingly only) criticism is that he didn’t take implications far enough and doesn’t address those implications in his personal idioms?
Cheers. Again, great read.
These are some great replies folks! This one especially:
“though, I’m not sure your take does justice to the complexity involved for readers who might not be so versed in Dehaene’s experimental context”
Ayuh. This occurred to me afterward, that I really should have given some more concrete examples of how these work.
“Any more thoughts on how particular signatures might act to unlock or leverage more complex pattern recognition (not necessarily unlike the “key” metaphor in LTG, which always stunned me as brilliant, but in this case referencing increased “intelligence,” rather than the “magic pattern for murderer”)?”
This is a million dollar question, of sorts, figuring out precisely how it is that theoretical metacognition happens upon different ‘keys’ – like those leading to the development of science, for instance. The only thing I know for sure is that the whole scene has to be thought through sans intentional terms. That’s the argument I’ve made to Ben before, for instance.
“I don’t actually think the research will continue to support this. I immediately think about experiments testing articulatory suppression and the different disruptions and kinds of memory. Or more anecdotally, Da Vinci writing two different things simultaneously or James Garfield writing in Greek and Latin simultaneously, let alone the possible different patterns of cognitive simultaneity. Imagining fractal pandas, right?”
Or Mandate Schoolmen singing Odaini Concussion Cants! Kidding aside, Mariano Sigman has some pretty convincing work on the cognitive bottleneck. Whether it turns out to be ‘mostly serial’ or ‘purely serial’ remains to be seen.
“However, what kind of metacognition do you think eliminativism can leverage then? (Again, I think that “I” can effect change by constructing practices based on neurophysiological knowledge but just asking).”
I’m not sure I understand this question. I’ve been thinking a lot about how throwing bones when lost actually statistically increases your chances of finding your way, and I’m beginning to think that philosophy plays precisely this function, as a noise generator, churning out mutations that may or may not be functional in some way.
This is a million dollar question, of sorts, figuring out precisely how it is that theoretical metacognition happens upon different ‘keys’ – like those leading to the development of science, for instance. The only thing I know for sure is that the whole scene has to be thought through sans intentional terms. That’s the argument I’ve made to Ben before, for instance.
Indeed. Very interesting.
Or Mandate Schoolmen singing Odaini Concussion Cants! Kidding aside, Mariano Sigman has some pretty convincing work on the cognitive bottleneck. Whether it turns out to be ‘mostly serial’ or ‘purely serial’ remains to be seen.
Absolutely like a Mandati. I’m looking up Sigman and, of course, find I’m downloading duplicates of some of his papers into my folder – which means I likely didn’t read them last time I came across him. I will rectify this.
I’m not sure I understand this question. I’ve been thinking a lot about how throwing bones when lost actually statistically increases your chances of finding your way, and I’m beginning to think that philosophy plays precisely this function, as a noise generator, churning out mutations that may or may not be functional in some way.
This is a relevant answer. However, I asked more to deduce whether you thought volitional facilitation or cultivation of these speculative “metacognitions” was possible – if that is more clear.
Regarding the last, I think there’s good evidence that this is what meditation provides over the long haul – don’t you?
I do – though you’ve sometimes made my commitment to cognitive facilitation by meditation questionable ;).
Though, if I remember right, we both discount the “spiritual” aspects of meditative practice?
You wrote:
>Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure”
I am in good company then, since I think the ‘explanatory gap’ and the Entscheidungsproblem have some kind of relationship. Somehow, applying reductionist thinking to thinking itself gives back ‘unsolvable’, just like the halting problem.
You think it has something to do with asymptotic energy demands as metacognition requires greater and greater resources as it chases its own tail, but it might be better stated as a kind of structural constraint.
The cognitive bottleneck described by Dehaene and others holds the key. In the cognitive neuroscience literature they often refer to it as the PRP, the psychological refractory period, but if you think it through in mechanical terms (irreflexive, systematic events), the ongoing activity of solving cannot itself be solved (thus leading to medial neglect). From this and evolution, I think the perception of something like an explanatory gap simply follows, that you can predict that aliens would have their own version of the ‘hard problem.’ Evolution often proceeds by piling solutions atop prior solutions: so in the case of the human recursive solving system, you would expect to have suites of solutions (like ‘yellow,’ say) that are simply hardwired in, brute, only as tractable to recursive conscious processing as our ancestors needed them to be. Beyond that, subpersonal metacognitive consumers are left starving. If, as seems to be the case, theoretical metacognition is prone to confuse such starvation for a bounteous feast, then things begin to seem incredibly mysterious in very short order.
Insofar as the Halting Problem simply follows from the way the ongoing activity of solving cannot itself be solved in serial systems, you have your relationship, courtesy of medial neglect and BBT 😉
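For readers who haven’t seen the diagonal argument being invoked here, it can be sketched in a few lines of Python. The names are purely illustrative (nothing from Dehaene or the workspace literature); the point is only the shape of the self-application:

```python
def halts(func, arg):
    """Hypothetical total decider: would return True iff func(arg) halts.
    The diagonal argument shows no such function can exist, so this
    placeholder simply raises."""
    raise NotImplementedError("no general halting decider exists")

def contrary(func):
    """Diagonal construction: do the opposite of whatever the supposed
    decider predicts about func applied to itself."""
    if halts(func, func):
        while True:  # predicted to halt, so loop forever
            pass
    return           # predicted to loop, so halt immediately
```

Feeding `contrary` to itself would force `halts` into a contradiction whichever way it answered – the same structural bind, on the analogy above, as a solving system trying to solve its own ongoing activity of solving.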
Hello Scott,
I’ve been poking through this very substantial post, and an idea now occurs to me: Apparently you and I explore the dynamics of consciousness with very different objectives in mind. It may have just been my stupidity, but only now do I realize that most participants in Peter’s “Conscious Entities” blog are highly interested in finding a physiological way to explain consciousness. Though “The Hard Problem” is actually of no interest to me, I do now see that I mustn’t imply that the life’s work of a fellow theorist might be “inconsequential.”
My ideas have received a cool reception to say the least. With the tremendous amount of work that you and others constantly put into your very complex projects, how could simplistic ideas such as mine be welcome? Though I cannot hope to participate in such involved physiology/engineering discussions (and will also try not to impede them in the future), I do nevertheless hope to interest you and others in my own separate quest. For example, while you might seek to reverse engineer the computer that I’m holding, all I want to do is effectively use the damn thing. Similarly, whatever it is that causes the human to be what it is, I’d just like to help Psychologists, Psychiatrists, Sociologists, and so on, have a better grasp of what they are actually dealing with — their progress seems hopeless to me without a theory of that which is “good,” as well as a practical model of the conscious mind. Though our separate quests may be very associated, perhaps a mutual respect will indeed grow? I do hope so.
In this spirit I wonder if we could have an exchange regarding the concept of “mind.” In a comment above you mentioned that reality should be viewed as “mechanical” in an ultimate sense, and as a fellow materialist I do appreciate this. But from a slightly smaller perspective I do still like to use the concept of “mind,” and I simply define this as “that which processes information.” Here our old mechanical typewriters would not have minds, though our computers would to the extent that they “process information.” Furthermore, if a tree has some kind of central mechanism by which inputs are processed to somewhat help it “do whatever it does,” then it would have a mind as well. (Though I actually suspect trees are purely mechanical.)
Any thoughts for me?
Well, I guess that makes me your discursive foe! Which, since everybody is everybody’s discursive foe in this business, is just the cost of doing business. I highly recommend you check out Terrence Deacon’s Incomplete Nature, not simply because his project is sympathetic to yours, but because he does such a fantastic job characterizing the shape of the problem, and the landscape as it presently stands. It should provide you with a great roadmap, PE, and help you figure out where your project fits in.
Reading this post is like experiencing that Buddhist kōan where one monk says of a flag in the wind, “The flag is moving.” and the other one says, “The wind is moving.” and then Chán/Zen Patriarch Dajian Huineng walks by and says, “Not the flag, not the wind; mind is moving.” only it’s a VHS tape and everyone is obscured by a visual artifact and the sound is distorted and halfway through the tape cuts out because someone taped over the kōan with http://www.youtube.com/watch?v=Fzp7iCaWNvE
Worst Rick Astley video ever.
Oh how I WISH Rick Astley’s videos were all just Nietzsche dying.
Best coach the Senators have ever had.
Fascinating. I’m not sure why I should lose (additional) sleep over it, though. Maybe I’ve already accepted my sketchiness. Or, more likely, I haven’t quite gotten it yet. So here are some questions.
“the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.”
Is “everything” happening outside the reader’s knowledge? If so, there’s no knowledge. There can be no knowledge of the unobserved, yes? And, no inkling or some? I get the impression you want to say “some,” right? It’s just a badly distorted inkling. How badly? Bad enough that a dice roll would be just as effective?
In the meantime, should the neurons doing this consciousness thing here (what normal humans call “I”) take the least distorted inklings, clean them up as much as possible, and send them back down into the opaque synopsis-creating neurons hoping the improvements are habit-forming, and/or physically move the body in accordance with the least delusional understanding so as to create certain desired effects? Can conscious neurons have positive (happiness-producing, broadly defined) effects? If so, can I say that’s me? “Me” seems to be post-hoc narrative perhaps, but isn’t it also pre-hoc? Is part of this problem an intuitive failure (not yours, per se, but in general) to reconcile wholes appearing as more than sums of parts? The sun is not simply its molecules or quarks, in other words. Or so it appears.
But yes, you’re right. In politics, the problem is not simply that there are shady dealings at the highest levels, it’s that at every juncture where the interests of the one with resources battle the interests of the one without, the well-resourced one tends to determine outcomes. And there are a billion such junctures, and they add up. Speculation bubbles work the same way. A billion profit-oriented, externalize pain/ internalize gain decisions add up to homeless people in Detroit. A billion junctures determined by a pleasure/pain calculus add up to a delusional self. The self is a bubble economy.
Also, this:
“Is part of this problem an intuitive failure (not yours, per se, but in general) to reconcile wholes appearing as more than sums of parts? The sun is not simply its molecules or quarks, in other words. Or so it appears.”
This is what some argue – the old emergentists, certainly. But if by ‘whole’ you mean ‘whole organism,’ or ‘whole brain,’ then there’s no problem at all understanding the relation between wholes and parts. But if by whole you mean ‘whole person’ then it becomes pretty clear that it’s not just a metaphysics of mereology that is generating problems. It’s not ‘wholes versus parts,’ it’s ‘personal wholes versus impersonal parts’ that’s at issue. And this is precisely the issue that BBT sets out to solve: it provides a systematic way of seeing ‘personal wholes’ as a cognitive illusion pertaining to the structure and dynamics of various impersonal parts.
Depends if one’s given up on making a distinction between life and inanimate objects (if so, okay). Otherwise, how does ‘person’ fall outside of being a life form and just inanimate parts/fragments… unless ‘person’ was taken to be even higher than life somehow? Well, I guess with the notion of the soul, it was… is that the big problem?
I hope Bakker at least got a consulting fee for this:
http://abc.go.com/shows/mind-games
Heh.
Looks nifty! I wonder if they’ll try the double play – you have viewers thinking they couldn’t be influenced, so seed the program with an expectation they might take up…then by the end of the program point out the expectation they may very well have adopted. Of course it’s easier to describe that than to write up 30 minutes of material that does it, but anyway…
I just have this knack for timing my untimeliness…
Also, @Bakker – nice review. I am reading a review article on GNW now, thanks to it. I confess I had written off much of this because of Changeux’s involvement in the early theorizing, but this review summarizes the empirical evidence for distinguishing conscious from subliminal processing (“The Global Neuronal Workspace Model of Conscious Access: From Neuronal Architectures to Clinical Applications”). Haven’t gotten there yet, though – still holding my nose through the computational models.
I’m even reading Dennett’s “Intuition Pumps…” now. I know, I know. Never fear – it’s as irksome as I’d feared, and he’s basically my favorite in his field. Of course, I’m not the target audience, as I am also aware.
I’m very glad you popped by, ochlo! What troubles you most about the models? For that matter, what are the things you generally find yourself most critical of when reading neuroscientific research on consciousness? (Code: what should a layknob like me be looking for?) Is there anyone you think is doing better work than Dehaene?
Intuition Pumps is a good book, especially as an interpretivist primer, but he’s reached that ‘rehash stage’ in his career, I think.
This reference seems like the most fun for readers of the blog (provided you have access):
http://www.pnas.org/content/105/9/3599.full
It should also be available here (Quiroga, 2008; “Human single neuron…”):
http://www.cnl.ucla.edu/publications1.htm
In fact, that website has a range of human neurophysiology data that may be of interest (fun fact: Indre Viskontas is now the co-host of the Inquiring Minds podcast, with Chris Mooney. I recommend it.).
For those who love the “horror” of our wetly instantiated selves, check out the paper on volition at the top of the list (Fried et al., 2011). It’s the classic Libet experiments, updated.
Cheers.
Great website indeed! Have you ever come across any reviews of research into adaptive coding, ochlo?
Hah! The first two words of my PhD dissertation are “Adaptive” and “coding” (and in that order).
As is often the case, I can’t be sure we are using the terms the same way, however. Do you mean “adaptive coding” in the data compression sense, or something more like this:
http://www.nature.com/nrn/journal/v2/n11/abs/nrn1101-820a.html
(I have not read that one, but it seems to fit, though it’s rather old).
There’s this, for visual object representation (I like Connor’s work, generally):
http://www.annualreviews.org/doi/full/10.1146/annurev-neuro-060909-153218
My guess is that this is what you’re after (it’s current, and quite interesting):
http://www.nature.com/nature/journal/v503/n7474/full/nature12742.html
You can find the link here:
http://monkeybiz.stanford.edu/pubs.html
As always, not sure what you see outside the Great Wall of Science in terms of links (I’m logged in through my institution).
Random question for Bakker: Have you read Iain Banks’s Hydrogen Sonata? There’s a section in it that is eerily similar to your Nature piece, and I was curious if you had seen it. It deals with the ethical quandaries raised by predictive simulations conducted by powerful AIs.
I’ve now viewed the description of Terrence Deacon’s “Incomplete Nature” that you suggested, Scott, but as an explanation of how consciousness might have physically emerged, this is exactly what I’ve said does not interest me. Some might now consider me a hypocrite for trying to tempt you with my own interests, though I just consider this bad salesmanship (though honest). I’m actually far too pessimistic about humanity to suspect that it will thrive long enough, and then become smart enough, to build conscious entities of its own. Furthermore, the resulting “People for the Ethical Treatment of Machines” might actually win my support.
I wish you and others luck in solving this engineering mystery, but perhaps I can even help. As you know, there isn’t a model of the conscious human mind today that has come anywhere close to being accepted. But wouldn’t such a thing be great for your quest?
My definition for “mind” was mentioned earlier, and I use a computer-like “non-conscious mind” idea to constitute the vast majority of our mental function. (Freud’s “unconscious mind” legacy has impeded us far too long.) But the real fun comes in the “consciousness” model. In brief, there are senses, sensations, and memory, which serve as “input,” with thought as “processor,” and muscle operation as “output.” It is concise and clearly written, and I think unlike anything on the market today. I do hope that you and your readers will have a look – its graph alone should be worth some reflection.
I get lost a lot. I think the reason throwing sticks helps you find your way is because you get lost by thinking you know where you are and what you are doing when you really don’t. Resorting to random methods stops you from thinking. If you are so confused that trying to figure it out only makes you more confused, then anything that stops you from thinking is likely to help you become less confused. Perhaps you know of a way to apply random methods to philosophy of mind.
What does throwing sticks/throwing bones refer to?
Scott: I found the post very interesting and am looking forward to reading the book. On the functional level I find it no more problematic than kidney cell functions and kidney parts adding up to the total kidney function. I’ve been reading some of Damasio’s work and his theory that the neocortex is built on the old brain and basic emotions, so that the massive parallel-serial processes in the neocortex are add-ons, and even meaning is just a more refined form of feeling, achieved by feedback between the structures.
No doubt the brain does not just draw information from the environment; rather, the self becomes the environment: the scientist becomes more the scientist when he’s in the lab, and the researcher becomes more the researcher when he’s in the library.
Although we focus on the self as a self, it is really the social self, our ability to blend in with the social environment. Likewise, free will may be more about limiting our will or finding the right paths and boundaries; i.e., putting people on the organizational charts.
What do you become more of when you’re on the internet, then? 😉
I largely agree with what you say and feel similarly, Victor – so we’ll see what Canadian explosion might be the response to that? 🙂
I don’t see how the readiness of a givenness constitutes a problem for the givenness itself. I think they go well together. If there are limits to a certain givenness, how does this present itself as a problem to our account of those limits? Isn’t that just part of the job? Is not something lost if you then take it as a not-given, after all? And could you not just have an overloaded or ontologically restrictive image of the philosophy of mind? Is there any need for the digging of trenches? Implicit in this a priori antagonism seems to be the belief that the intuitive insight of brain processes is always complete or obvious, either by the theoreticist or some internal logic. But why not allow both views? What should be more obvious, even necessary?
And once you have completed the circuits, would this not just reaffirm that “meaning” is always present to us (as “human lifeforms” etc.) in some form, in a way which we knew all along but were also critical of, and, quite naturally, analysed and differentiated? The problem itself seems somewhat nebulous to me, and I think overly prejudiced by the analytic tradition, which I think is sometimes more confused than its claimed solutions suggest, and entirely needlessly arrogant and affected in its reading of others (with similar problems but perhaps some useful points). (I’m thinking of Rorty and Kuhn as representatives of the epistemology of language and science in the ’50s to ’70s.)
Not really my field at the moment. That is, I can’t uphold what is perceived as the problematic aspect here for long enough… I can take the impetus and critical function, some of which actually figures in my reading (in fact used against analytic forms of relativism), but I don’t see the need for trenches and “throwing everything overboard” (often over singular points which can be reassessed as rather classical and logical preconceptions – egocentrism).
Besides, it is surely just a coincidence, but there is a resemblance to Schopenhauer (problems reappearing)…
I should add that Rorty and Kuhn are rather nice fellows; I didn’t mean them by “arrogant and affected,” but others (there could be a philosophical textbook on this, I think). I do take them as epistemologically confused, though, but again not solely them.
I swear this is my last post in a row, but just for fairness: I’m mainly reading Elmar Holenstein at the moment. He combines phenomenology with cognitive science (at first only phenomenology). Most of his work is relatively old now (early ’80s), but I think reasonable for my purposes. I don’t think much of it is translated. It’s not a huge deal anyway; I thought I should just mention it in case.
Hey Scott,
How come you don’t post at Westeros.org anymore?
Off-topic, Mr. Bakker but…
Remember that book you wrote, Neuropath?
And then how you got accused of being sexist in your writing and all kinds of attackers came out of the Internet nooks and crannies?
A Glenn Greenwald must-read for anyone who wants to see how the greedy a-holes who run their government actually operate. “How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations”
https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/
Just sayin’.
I was just going to leave a link to this as well, because it is indeed an interesting read. The four D’s: deny/disrupt/degrade/deceive. It is a crazy thing to do to normal activists and dissenting views.
Not to be too paranoid, but if Blind Brain Theory proves to be the blow to human self-regard that you think it will be, you may eventually make significant enemies. The Catholic Church no longer has the power to give heretics a hard time, but it’s not all that hard to imagine noocentrists choosing to defend their ideal of humanity the way some Muslims have chosen to defend their ideal of God. One more thing to lie in bed and ponder.
[…] given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional […]
[…] now, given all that we have learned the past two decades. On a converging number of accounts, human consciousness is a mechanism for selecting, preserving, and broadcasting information for more general neural consumption. When we theoretically reflect on cognitive activity, such as […]
[…] as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When […]
[…] picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain […]
[…] know that conscious cognition involves selective information uptake for broadcasting throughout the brain. We also know that no information regarding the astronomically complex activities constitutive of […]
[…] involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious […]
[…] that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows […]
[…] [10] Consciousness and the Brain, p. 79. For an extended consideration of the implications of the Global Neuronal Workspace Theory of Consciousness regarding this issue see, R. Scott Bakker, The Missing Half of the Global Neuronal Workspace. […]
[…] As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional and contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself. […]