Three Pound Brain

No bells, just whistling in the dark…


Interstellar Dualists and X-phi Alien Superfreaks

by rsbakker

I came up with this little alien thought experiment to illustrate a cornerstone of the Blind Brain Theory: the way systems can mistake information deficits for positive ontological properties, using a species I call the Walleyes (pronounced ‘Wally’s’):

Walleyes possess two very different visual systems, the one high-dimensional, adapted to tracking motion and resolving innumerable details, the other myopic in the extreme, adapted to resolving blurry gestalts at best, blobs of shape and colour. Both are exquisitely adapted to solve their respective problem-ecologies, however; those ecologies just happen to be radically divergent. The Walleyes, it turns out, inhabit the twilight line of a world that forever keeps one face turned to its sun. They grow in a linear row that tracks the same longitude around the entire planet, at least wherever there’s land. The high-capacity eye is the eye possessing dayvision, adapted to take down mobile predators using poisonous darts. The low-capacity eye is the eye possessing nightvision, adapted to send tendrils out to feed on organic debris. The Walleyes, in fact, have nearly a 360-degree view of their environment: only the margin of each visual field defeats them.

The problem, however, is that Walleyes, like anemones, are a kind of animal that is rooted in place. Save for the odd storm, which blows the ‘head’ about from time to time, there is very little overlap in their respective visual fields, even though each engages (two very different halves of) the same environment. What’s more, the nightvision eye, despite its manifest myopia, continually signals that it possesses a greater degree of fidelity than the first.

Now imagine an advanced alien species introduces a virus that rewires Walleyes for discursive, conscious experience. Since their low-dimensional nightvision system insists (by default) that it sees everything there is to be seen, and their high-dimensional system, always suspicious of camouflaged predators, regularly signals estimates of reliability, the Walleyes have no reason to think heuristic neglect is a problem. Nothing signals the possibility that the problem might be perspectival (related to issues of information access and problem-solving capacity), so the metacognitive default of the Walleyes is to construe themselves as special beings that dwell on the interstice of two very different worlds. They become natural dualists…

The same way we seem to be.

Perhaps some X-phi super-aliens are snickering as they read this!

The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain

by rsbakker

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts

.

Introduction

Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seems equally at home in the theoretical aether (like we are here) and in the laboratory (where he is there). His latest book, Consciousness and the Brain, provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness research has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.

I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospects of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.

And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.

Perhaps the most stark example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:

“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264

And yet for his penultimate, concluding line no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.

As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, occurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see those passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he actually has no account whatsoever of what we seem to take consciousness to be–namely, ourselves.

For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…

The one that keeps me lying awake at night.

.

Function Dysfunction

Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side-effect to the more important, unconscious business of calculating brute possibilities. As he writes:

“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91

Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from espousing epiphenomenalism (the argument, advanced by Susan Pockett (2004) among others, that consciousness has no function whatsoever), both of these authors contend that it’s ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.

Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he actually slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think this distinction between two very different perspectives on consciousness would be easy to police, but such is not the case (See Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.

For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.

Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness, as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared to the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.

The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.

Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific inquiry. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.

.

The Global Neuronal Workspace

As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.

According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:

“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165

A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:

“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105

Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the information-asymmetric nature of conscious versus unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is that the nonconscious brain performs the vast, vast majority of the brain’s calculations.

The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:

“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60

The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.

“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
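To make the flavour of this claim concrete, here is a deliberately toy illustration, mine and not Dehaene’s, of the kind of inference being described. Every hypothesis, prior, likelihood, and number below is invented for the example; the point is only to show how several candidate interpretations of the same ambiguous cue can be scored in parallel by weighing prior plausibility against evidential fit, with the leading candidate forwarded for possible ignition.

# Toy Bayesian 'interpreter' for an ambiguous proximal cue (illustrative only;
# the hypotheses and probabilities are made up for this example).

def posterior(priors, likelihoods):
    # Bayes' rule: weigh each hypothesis by prior plausibility times evidential fit.
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# The cue: "a faint movement and a shadow" at the edge of the visual field.
priors = {"predator": 0.05, "windblown branch": 0.60, "drifting debris": 0.35}
likelihoods = {"predator": 0.90, "windblown branch": 0.30, "drifting debris": 0.10}

beliefs = posterior(priors, likelihoods)
winner = max(beliefs, key=beliefs.get)
print(beliefs)   # roughly {'predator': 0.17, 'windblown branch': 0.69, 'drifting debris': 0.13}
print(winner)    # the interpretation forwarded as the leading candidate

Nothing in this little calculation corresponds to anything we could introspect; on Dehaene’s account it all belongs to the machinery ‘below the conscious stage.’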

But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, when the bottom-up interpretations match the top-down expectations and behaviour is routine, say, such selection need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:

“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140

As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive to not be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).
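The all-or-nothing character of this transition can be caricatured in a few lines. The following sketch is my own cartoon, not Dehaene’s model: the update rule, the gain values, and the threshold are invented, and serve only to show how a weakly amplified representation fades while a strongly amplified one self-amplifies past a threshold and is then treated as ‘broadcast.’

# Cartoon of 'ignition' dynamics (illustrative only; all numbers invented).

def ignites(initial_activation, gain, decay=0.3, threshold=1.0, steps=20):
    a = initial_activation
    for _ in range(steps):
        a = a + gain * a - decay * a   # recurrent excitation minus decay
        if a >= threshold:
            return True                # crosses the ignition threshold: broadcast
    return False                       # fades out: never reaches consciousness

print(ignites(0.1, gain=0.2))   # weak amplification -> False (subliminal)
print(ignites(0.1, gain=0.5))   # strong amplification -> True (ignition)

The point is only the discontinuity: a representation either takes over the workspace or leaves no conscious trace at all.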

.

A Mile Wide and an Inch Thin

Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.

Once an interpretation commands the global workspace, it becomes available for processing via the nonconscious input of a number of different processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.

Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).

And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).

All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:

“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105

The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kind of answers philosophy, rooting through the mire of intuition, has sought for millennia.
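Since this production-system analogy is doing a lot of work, it may help to see it written out. What follows is a minimal, hypothetical sketch of my own, not anything from Dehaene or his colleagues: a handful of invented if-then rules are matched in parallel against a working memory, one matching rule is chosen stochastically, and its ‘ignition’ rewrites working memory, broadcasting the result to every rule on the next serial cycle.

# Toy 'workspace' production cycle (illustrative; rules and contents invented).
import random

working_memory = {"stimulus": "4+3", "goal": "report the sum"}

# Each rule: (name, condition on working memory, action that rewrites it).
rules = [
    ("add",
     lambda wm: "+" in wm.get("stimulus", "") and "result" not in wm,
     lambda wm: {**wm, "result": sum(int(n) for n in wm["stimulus"].split("+"))}),
    ("report",
     lambda wm: "result" in wm and "spoken" not in wm,
     lambda wm: {**wm, "spoken": str(wm["result"])}),
]

for cycle in range(5):
    # 'Unconscious' stage: every rule checks for a match in parallel.
    matches = [(name, act) for name, cond, act in rules if cond(working_memory)]
    if not matches:
        break
    # Stochastic competition: a single matching rule wins...
    name, act = random.choice(matches)
    # ...and 'ignites': its output overwrites working memory, making it
    # globally available to all rules on the next serial cycle.
    working_memory = act(working_memory)
    print(cycle, name, working_memory)

Everything inside the matching step happens, on the analogy, in the parallel dark; only the one-rule-per-cycle bottleneck corresponds to what gets consciously experienced and reported.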

Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.

Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.

So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:

“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74

This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, any more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:

“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113

Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of a ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!

Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.

.

From Geocentrism to ‘Noocentrism’

“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow“).

Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.

We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:

“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262

Referencing the way modern molecular biology has overthrown vitalism, he continues:

“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262

I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness not to run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.

Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.

Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all or nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.

Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. Where other mammalian species clearly seem to possess metacognitive capacity, it seems restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:

“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253

Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.

So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to conscious access. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.

This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.

Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.

But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Taken together with Dehaene’s Global Neuronal Workspace Theory of Consciousness, it can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.

On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See, “Cognition Obscura“). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.

But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lie awake at night, pondering our pondering…

The Ironies of Modern Progress and Infantilization (by Ben Cain)

by rsbakker

It’s commonly observed that we tend to rationalize our flaws and failings, to avoid the pain of cognitive dissonance, so that we all come to think of ourselves as fundamentally good persons even though many of us must instead be bad if “good” is to have any contrastive meaning. Societies, too, often exhibit pride which leads their chief representatives to embarrass themselves by declaring that their nation is the greatest that’s ever been in history. Both the ancients and the moderns did this, but it’s hard to deny the facts of modern technological acceleration. Just in the last century, global and instant communications have been established, intelligent machines run much of our infrastructure, robots have taken over many menial jobs, the awesome power of nuclear weapons has been demonstrated, and humans have visited the moon. We tend to think that the social impact of such uniquely powerful machines must be for the better. We speak casually, therefore, of technological advance or progress.

The familiar criticism of technology is that it destroys at least as much as it creates, so that the optimists tell only one side of the story. I’m not going to argue that neo-Luddite case here. Instead, I’m interested in the source of our judgment about progress through technology. Ironically, the more modern technology we see, the less reason we have to think there’s any kind of progress at all. This is because modernists from Descartes and Galileo onward have been compelled to distinguish between real and superficial properties, the former being physical and quantitative and the latter being subjective and qualitative. Examples of the superficial, “secondary” aspects are the contents of consciousness, but also symbolic meaning, purpose, and moral value, which include the normative idea of progress. For the most part, modernists think of subjective qualities as illusory, and because they devised scientific methods of investigation that bypass personal impressions and biases, modernists acquired knowledge of how natural processes actually work, which has enabled us to produce so much technology. So it’s curious to hear so many of us still assuming that our societies are generally superior to premodern ones, thanks in particular to our technological advantage. On the contrary, our technology is arguably the sign of a cognitive development that renders such an assumption vacuous.

.

Animism and Angst

One way of making sense of this apparent lack of social awareness is to point out that there are always elites who understand their society better than do the masses. And we could add that because the modern technological changes have happened so swiftly and have such staggering implications, many people won’t catch up to them or will even pretend there are no such consequences because they’re horrifying. But I think this makes for only part of the explanation. The masses aren’t merely ignoring the materialistic implications of science or the bad omens that technologies represent; instead, they have a commonsense conviction that technology must be good because it improves our lives.

In short, most citizens of modern, technologically-developed societies are pragmatic about technology. If you asked them whether they think their societies are better than earlier ones, they’d say yes and if you asked them why, they’d say that technology enables us to do what we want more efficiently, which is to say that technology empowers us to achieve our goals. And it turns out that this pragmatic attitude is more or less consistent with modern materialism. There’s no appeal here to some transcendent ideal, but just an egocentric view of technologies as useful tools. So our societies are more advanced than ancient ones because the ancients had to work harder to achieve their goals, whereas modern technology makes our lives easier. Mind you, this assumes that everyone in history has had some goals in common, and indeed our instinctive, animalistic desires are universal in so far as they’re matters of biology. By contrast, if all societies were alien and incommensurable to each other, national pride would be egregiously irrational. And most people probably also assume that our universal desires ought to be satisfied, because we have human rights, so that there’s moral force behind this social progress.

The instincts to acquire shelter, food, sex, power, and prestige, however, seem to me likewise insufficient to explain our incessant artificialization of nature. There’s another universal urge, which we can think of as the existential one: the need to overcome our fear of the ultimate natural truths. There are two ways of doing so, with authenticity or with inauthenticity, which is to say with honour, integrity, and creativity or with delusions arising from a weak will. (Again, this raises the question of whether even these values make sense in the naturalistic picture, and I’ll come back to this at the end of this article.) Elsewhere, I talk about the ancient worldviews as glorifying our penchant for personification. Prehistoric animists saw all of nature as alive, partly because hardly anything at that time was redesigned and refashioned to suit human interests and the predominant wilderness was full of plant and animal life. Also, the ancients hadn’t learned to repress their childlike urge to vent the products of their imagination. At that time, populations were sparse and there were no machines standing as solemn proofs of objective facts; moreover, there wasn’t much historical information to humble the Paleolithic peoples with knowledge of opposing views and thus to rein in their speculations. For such reasons, those ancients must have confronted the world much as all children do—at least with respect to their trust in their imagination.

More precisely, they didn’t confront the world at all. When a modern adult rises in the morning, she leaves behind her irrational dreams and prides herself on believing that she controls her waking hours with her autonomous and rational ego. By contrast, there’s no such divergence between the child’s dream life and waking hours, since the child’s dreams spill into her playful interpretations of everything that happens to her. To be sure, modern children have their imagination tempered by the educational system that’s bursting at the seams with lessons from history. But children generally have only a fuzzy distinction between subject and object. That distinction becomes paramount after the technoscientific proofs of the world’s natural impersonality. The world has always been impersonal and amoral, but only modernists have every reason to believe as much and thus only we inheritors of that knowledge face the starkest existential choice between personal authenticity and its opposite. The prehistoric protopeople, who were still experimenting with their newly acquired excess brain power, faced no such decision between intellectual integrity and flagrant self-deception. They didn’t choose to personify the world, because they knew no different; instead, they projected their mental creations onto the wilderness with childlike abandon and so distracted themselves from their potential to understand the nature of the world’s apparent indifference. After all, in spite of the relative abundance of the ancient environments, things didn’t always go the ancients’ way; they suffered and died like everyone else. Moreover, even early humans were much cleverer than most other species.

Thus, the ancients weren’t so innocent or ignorant that they felt no fear, if only because few animals are that helpless. But human fear differs from the reactionary animal kind, because ours has an existential dimension due to the breadth of our categories and thus of our understanding. Humans attach labels to so many things in the world not just because we’re curious, but because we’re audacious and we have excess (redundant) brain capacity. Animals feel immediate pain and perhaps even the alienness of the world beyond their home territory, but not the profound horror of death’s inexorability or of the world’s undeadness, which is to say the fear of nature’s way of developing (through complexification, natural selection, and the laws of probability) without any normative reason. Animals don’t see the world for what it is, because their vision and thus their concern are so narrow, whereas we’ve looked far out into the macrocosmic and microcosmic magnitudes of the universe. We’ve found no reassuring Mind at the bottom of anything, not even in our bodies. Our overactive brains compel us to care about aspects of the world that are bad for our mental health, and so we’re liable to feel anxious. And as I say, we cope with that anxiety in different ways.

.

Modernity and Infantilization

But how does this existentialism relate to the source of our myth of modern progress? Well, I see a comparison between prehistoric, mythopoeic reverie and the modern consumer’s infantilization. In each case, we have a lack of enlightenment, a retreat from rational neutrality, and an intermixing of subject and object. I’ve discussed the mythopoeic worldview elsewhere, so here I’ll just say that it amounts to thinking of the world as entirely enchanted and filled with vitality. Again, the modern revolutions (science and capitalistic industry) have led to our disenchantment with nature, because we’ve been forced to see the world as dead inside. That’s why late modernists are at best pragmatic about progress. We must somehow express our naïve pride in ourselves and in our self-destructive modern nations, because we prefer not to suffer as alienated outsiders. But modernity’s ideal of ultrarationality makes absolutist and xenophobic pride seem uncivilized—although American audiences are notorious for stooping to that sort of savagery when they chant “USA! USA!” to quell disturbances in their proceedings. In any case, we postmodern pragmatists think of progress as being relative to our interests.

Arguably, then, we should all be despairing, nihilistic antinatalists, cheering on our species’ extinction to spare us more horror from our accursed powers of reason, because of the atheistic implications of science-led philosophical naturalism. But something funny happened along the way to the postmodern now, which is that our high-tech environment has driven most of us to revert to the mythopoeic trance. We, too, collapse the distinction between subject and object, because we’re not surrounded by the wilderness that science has shown to be the “product” of undead forces; instead, we’ve blocked out that world from our daily life and immersed ourselves in our technosphere. That artificial world is at our beck and call: our technology is designed for us and it answers to us a thousand times a day. Science has not yet shown us to be exactly as impersonal as the lifeless universe and so we can take comfort in our amenities as we assume that while there’s no spirit under any rock, there’s a mind behind every iPhone.

So while we’re aware of the scientist’s abstract concept of the physical object, we don’t typically experience the world as including such absurdly remote quantities. Heidegger spoke of the pragmatic stance as the instrumentalization of every object, in which case we can look at a rock and see a potential tool, a “ready-to-hand” helper, not just an impersonal, undead and “given” object. (This is in contrast to objectification, in which we treat things only as “present-to-hand,” or as submitting to scientific scrutiny. The latter seems to reduce to the former, though, since objectification is still anthropocentric, in that the object is viewed not as a fully independent noumenon, but as a subject of human explanation and that makes it a sort of tool. True objectivity is the torment not of scientists but of those suffering from angst on account of their experience of nature’s horrible indifference and undeadness. True objectivity is just angst, when we despair that we can’t do anything with the world because we’re not at home in it and nature marches on regardless. All other attitudes, roughly speaking, are pragmatic.) In any case, the modern environment surpasses that instrumentalism with infantilization, because we late modernists usually encounter actual artifacts, not just potential ones. The big cities, at least, are almost entirely artificial places. Of course, everything in a city is also physical, on some level of scientific explanation, but that’s irrelevant to how we interpret the world we experience. A city is made up of artifacts and artifacts are objects whose functions extend the intentions of some subjects. Thus, hypermodern places bridge the divide between subjects and objects at the experiential level.

However, that’s only a precondition of infantilization. What is it for an adult to live as a child? To answer this, we need standards of psychological adulthood and infancy. My idea of adulthood derives from the modern myths of liberty and rational self-empowerment. Ours is a modern world, albeit one infected with our postmodern self-doubts, so it’s fitting that we be judged according to the standards set by modern European cultures. The modern individual, then, is liberated by the Enlightenment’s break with the past, made free to pursue her self-interest. Above all, this individual is rational since reason makes for her autonomy. Moreover, she’s skeptical of authority and tradition, since the modern experience is of how ancient Church teachings became dogmas that stifled the pursuit of more objective knowledge; indeed, the Church demonized and persecuted those who posed untraditional questions. The modern adult idolizes our hero, the Scientist, who relies on her critical faculties to uncover the truth, which is to say that the modern adult should be expected to be fearlessly individualistic in her assessments and tastes. Finally, this adult should be cosmopolitan—which is very different from Catholic universalism, for example. The Catholic has a vision of everyone’s obligation to convert to Catholicism, whereas the modernist appreciates everyone’s equal potential for self-determination, and so the modernist is classically liberal in welcoming a wide variety of opinions and lifestyles.

What, then, are the relevant characteristics of an infant? The infant is almost entirely dependent on a higher power. A biological infant has no choice in the matter and her infancy is only a stage in a process of maturation. Similarly, an infantile adult lacks autonomy and may be fed information in the same way a biological infant is fed food. For example, a cult member who defers to the charismatic leader in all matters of judgment is infantile with respect to that act of self-surrender. Many premodern cultures have been likewise infantile, and our notion of modern progress compares the transition from that anti-modern version of maturity to the modern ideal of the individual’s rational autonomy with a baby’s growth into a more independent being.

That’s the theory, anyway. The reality is that modern science is wedded to industry, which applies our knowledge of nature, and the resulting artificial world infantilizes the masses. How so? For starters, through the post-WWII capitalistic imperative to grow the economy by way of hyper-consumption. Artificial demand is stimulated through propaganda, which is to say through mostly irrational, associative advertising. The demand is artificial in that it’s manufactured by corporations that have mastered the inhuman science of persuasion. That demand is met by mass-produced supply, the products of which tend to be planned for obsolescence and are thus shoddier than they need to be.

The familiar result is the rebranding of the two biologically normal social classes: the rich and powerful alphas and everyone else (the following masses). Modern wealth is rationalized with myths of self-determination and genius, since no credible appeal can be made now to the divine right of kings. Mind you, the exception has been the creation of distinct middle classes, owing to socialist policies in liberal parts of the world that challenge the social Darwinian cynicism implicit in capitalism. Maintaining a middle class in a capitalistic society, though, is a Sisyphean task: it’s like pushing a boulder up a hill, only to watch it roll back down and have to be pushed up all over again. The members of the middle class are fattened like livestock awaiting slaughter by the predators groomed by capitalistic institutions such as the elite business schools. And so the middle class inevitably goes into debt and joins the poor, while the wealthy consolidate their power as the ruling oligarchs, as has happened in Canada and the US. (For more on what are effectively the hidden differences between democratic liberals and capitalistic conservatives, see here.)

The masses, then, are targeted by the propaganda arm of modern industry, while the wealthy live in a more rarefied world. For example, the wealthy tend not to watch television, they’re not in the market for cheap, mass-produced merchandise, and they don’t even gullibly link their self-worth to their hoarding of possessions in the crass materialistic fashion. No, the oligarchs who come to power through capitalistic competition have a much graver flaw: they’re as undead as the rest of nature, which makes them fitting avatars of nature’s inhumanity. Those who are obsessed with becoming very powerful or who are corrupted by their power tend to be sociopathic, which means they lack the ability to care what others feel. For that reason, the power elite are more like machines than people: they tend not to be idealistic, and so associative advertising won’t work on them, since that kind of advertising construes the consumption of a material good as a means of fulfilling an archetypal desire. Of course, the relatively poor masses are just the opposite: burdened by their conscience, they trust that our modern world isn’t a horror show. Thus, they’re all too ready to seek advice from advertisers on how to be happy, even though advertisers are actually deeply cynical. The masses are thereby indoctrinated into cultural materialism.

Workers in the service industry literally talk to the customer as if she were a baby, constantly smiling and speaking in a lilting, sing-songy voice; telling the customer whatever she wants to hear, because the customer is always right (just as Baby gets whatever it wants); working like a dog to satisfy the customer as though the latter were the boss and the true adult in the room—but she’s not. The real power elite don’t deal directly with lowly service providers, such as the employees of the average mall. Their underlings do both their buying and their selling for them, so that they needn’t mix with lower folk. This is why George H. W. Bush had never before seen a grocery scanner. No, the service provider is the surrogate parent who is available around the clock to service the consumer, just as a mother must be prepared at any moment to drop everything and attend to Baby. The consumer is the baby—and a whining, selfish one she is at that. That’s the unsettling truth obscured by the illusion of freedom in a consumption-driven society. A consumer can choose which brand name to support out of the hundreds she surveys in the department store, and that bewildering selection reassures her that she’s living the modern dream. But just as the democratic privileges in an effective plutocracy are superficial and structurally irrelevant, so too the consumer’s freedom of choice is belied by her lack of what Isaiah Berlin calls positive freedom. Consumers have negative freedom in that they’re free from coercion so that they can do whatever they want (as long as they don’t hurt anyone). But they lack the positive freedom of being able to fulfill their potential.

In particular, consumers fail to live up to the above ideal of modern adulthood. Choosing which brand of soft drink to buy, when you’ve been indoctrinated by a materialistic culture, is like an infant preferring to receive milk from the left breast rather than the right. Obviously, the deeper choice is to prefer something other than limitless consumption, but that choice is anathema because it’s bad for business. Still, in so far as we have the potential to be mature in the modern sense, to be like those iconoclastic early modern scientists who overcame their Christian culture by way of discovering for themselves how the real world works, we manic consumers have fallen far short. Almost all of us are grossly immature, regardless of how old we are or whether consumer-friendly psychologists pronounce us “normal.”

Now, you might think I’ve established, at best, not a one-way dependence of the masses on the plutocrats, but a sort of sadomasochistic interdependence between them. After all, the producers need consumers to buy their goods, just as a farmer needs to maintain his livestock out of self-interest. Unfortunately, this isn’t so in the globalized world, since the predators of our age have learned that they can express the nihilism at the heart of social Darwinian capitalism, without reservation, just by draining one country of its resources at a time and then by taking their business to a developing country when the previous host has expired, perhaps one day returning as that prior host revivifies in something like the Spenglerian manner. Thus, while it’s true that sellers need buyers, in general, it’s not the case that transnational sellers need any particular country’s buyers, as long as some country somewhere includes willing and able customers. But whereas the transnational sellers don’t need any particular consumers and the consumers can choose between brands (even though companies tend to merge to avoid competing, becoming monopolies or oligopolies), there’s asymmetry in the fact that the mass consumer’s self-worth is attached to consumption and thus to the buyer-seller relationship, whereas that’s not so for the wealthy producers.

Again, that’s because the more power you have, the more dehumanized you become, so that the power elite can’t afford moral principles or a conscience or a vision of a better world. Those who come to be in positions of great power become custodians of the social system (the dominance hierarchy), and all such systems tend to have unequal power distributions so that they can be efficiently managed. (To take a classic example, Soviet communism failed largely because its system had to waste so much energy on the pretense that its power wasn’t centralized.) Centralized power naturally corrupts the leaders or else it attracts those who are already corrupt or amoral. So powerful leaders are disproportionately inhuman, psychologically speaking. (I take it this is the kernel of truth in David Icke’s conspiracy theory that our rulers are secretly evil lizards from another dimension.) Although the oligarch may be inclined to consume for her pleasure, and indeed she obviously has many more material possessions than the average consumer, the oligarch attaches no value to consumption, because she’s without human feeling. She feels pleasure and pain like most animals, but she lacks complex, altruistic emotions. Ironically, then, the more wealth and power you have, the fewer human rights you ought to have. (For more on this naturalistic, albeit counterintuitive, interpretation of oligarchy, see here.)

In any case, to return to the childish consumer, the point is that consumption-driven capitalism infantilizes the masses by establishing this asymmetric relationship between the transnational producer and the average buyer. Just as a biological baby is almost wholly dependent on its guardian, the average consumer depends on the economic system that satisfies her craving for more and more material goods. The wealthy consume because they’re predatory machines, like viruses that are only semi-alive, but the masses consume because we’ve been misled into believing that owning things makes us happy and we dearly want to be happy. We think wealth and power liberate us, because with enough money we can buy whatever we want. But we forget the essence of our modern ideal, or else we’ve outgrown that ideal in our postmodern phase. What makes the modern individual heroic is her independence, which is why our prototypes (Copernicus, Galileo, Bruno, Darwin, Nietzsche) were modern especially because of their socially subversive inquiries. We consumers aren’t nearly so modern or individualistic, regardless of our libertarian or pragmatic bluster. As consumers, we’re dependent on the mass producers and on our material possessions themselves. We’re not autonomous iconoclasts; we’re just politically correct followers. We don’t think for ourselves, but put our faith in the contemptible balderdash of corporate propaganda. We haven’t the rationality even to laugh at the foolish fallacies that are the bread and butter of associative ads. It doesn’t matter what we say or write; if we enjoy consuming material goods, our subconscious has been colonized by materialistic memes, and so our working values are as shallow as they can be without being as empty as those of the animalistic power elite. As consumers, we’re children playing at adult dress-up; we’re cattle that make-believe we’re free just because we routinely choose from among a preselected array of options.

So both technology and capitalism infantilize the masses. By doing our bidding and so making us feel we’re of central importance in the artificial world, technology suppresses angst and alienation. We therefore live not the modern dream but the ancient mythopoeic one—which is also the child’s experience of playing in a magical place, regardless of where the child actually happens to be. And capitalism turns us into consumers, first and foremost, and constant consumption is the very name of the infant’s game, because the infant needs abundant fuel to support her accelerated growth.

A third source of our existential immaturity is inherent in the myth of the modern hero. For many years, this problem with modernism lay dormant because of the early modernists’ persistent sexism, racism, and imperialism. Only white European males were thought of as proper individuals. Their rationalism, however, implied egalitarianism, since we’re all innately rational to some extent. Once the civil rights of women and minorities were recognized, there was a perceptible decline in the manliness of the modern hero. No longer a bold rebel against dogmas or a skeptical lover of the truth, the late-modern individual is now someone who must tolerate all differences. Ours is a multicultural, global village, and so we’re consigned to moral relativism and forced to defer to politically correct conventions out of respect for one another’s right to an opinion. Thus, bold originality, once regarded as heroic, is now considered boorish. Early modernists loved to discuss ideas in salons, but now even to broach a political or religious subject in public is considered impolite, because you may offend someone.

Such rules of political correctness are like parents’ futile restrictions on their child’s thoughts and actions. Western children are protected from coarse language, violence, and nudity, because postmodern parents labour under the illusion that their children will be infantile for their entire lifespan, whereas we’re all primarily animals and so are bound to run up against the horrors of natural life sooner or later. Compare these arbitrary strictures with the medieval Church’s laws against heresy. In all three cases (taboos for infantilized adults, protectionist illusions for children, and medieval Christian imperialism), the rules are uninspired as solutions to the existential problem of how to face reality, but the Church went so far as to torture and kill on behalf of its absurd notions. At most, postmodern parents may spank their child for saying a bad word, while the adult who carries the albatross of the archaic ideal of the independent person, and so wishes to test the merit of her assumptions by engaging others in a conversation about ideas, will only find herself alone and ignored at the party, inspecting the plant in the corner of the room. Still, our postmodern mode of infantilization is fully degrading, despite the lack of severe consequences when we step out of bounds.

Behind such rules lies the ethic of care that’s implicit in modern individualism, an ethic at odds with the modern hunt for the truth. Modernism was originally framed in the masculine terms of a conflict between scientific truth and Christian dogmatic opinion, but now that everyone is recognized as an autonomous, dignified modern person, feminine values have surged. And just as someone with a hammer sees everything else as a nail, a woman is inclined to see everyone else as a baby. This is why, for example, young women who haven’t outgrown their motherly instincts overuse the word “cute”: handbags are cute, as are small pets and even handsome men. This is also why girls worship not tough, rugged male celebrities, but androgynous ones like Justin Bieber. As conservative social critics appreciate, manliness is out of fashion. Even hair on a man’s chest is perceived as revolting, let alone the hair on his back. Men’s bodies must be shorn of any such symbol of their unruly desires, because men are obliged to fulfill women’s fantasy that men are babies who need to be nurtured. Men must be innocent, not savage; they must be eternally youthful and thus hairless, not battered and scarred by the heartless world; they must be doe-eyed and cheerful, not grim, aloof, and embittered. Men must be babies, not the manly heroes celebrated by the early modernists, who brought Europe out of the relative Dark Age. Men have been feminized, thanks ironically to the early modern ideal of personal autonomy through reason. As for women themselves, in so far as they must see themselves primarily as care-givers and are naturally inclined to infantilize men, they too become child-like, because “care” is reflexive. And so modern women baby themselves, treating themselves to the spa, to the latest fashions and accessories, to the inanities of daytime television, to the sentimental fantasies of soap operas and romance novels, and to the platitudes of flattering, feel-good New Age cults.

.

The Ignorant Baby and the Enlightened Aesthete

Those are three sources of modern infantilization: technology, capitalism, and postmodern culture. I submit, then, that the reason we can be so ignorant as to speak of technoscientific progress, even though scientific theories imply naturalism, which in turn implies the unreality of normative values and the undeadness of all processes, is that we lack self-knowledge because we’re infantile. We’re distracted by the games of possessing and playing with our technotoys, because our artificial environment trains us to be babies. And babies aren’t interested in ideas, let alone in terribly dispiriting philosophies such as naturalism with its atheistic and dark existential implications. That’s why we can parrot the meme of modern progress: we’ve already swallowed a thousand corporate myths by the time we’ve watched a year’s worth of materialistic ads on TV. What’s one more piece of foolishness added to that pile? If we were to look at the myth of progress, we’d see it derives from ancient theistic apocalypticism, and specifically from the Zoroastrian idea of a linear and teleological arrow of historical time. The idea was that time would come to a cataclysmic end when God would perfect the fallen world and defeat the forces of evil in a climactic battle. All prior events are made meaningful in relation to that ultimate endpoint. In that teleological metaphysics, the idea of real progress makes sense. But there’s no such teleology in naturalism, so there can be no modern progress. At best, some scientific theory or piece of technology can meet with our approval and allow us to achieve our personal goals more readily, but that subjective progress loses its normative force. Mind you, that’s the only kind of progress that pragmatists are entitled to affirm, but there’s no real goodness in modernity if that’s all we mean by the word.

The titular ironies, then, are these: the so-called technoscientific signs of modern progress indicate, rather, the superficiality or illusoriness of the very concept of social progress that most people have in mind, despite their pragmatic attitude; and the late great modernists who are supposed to stand tall as the current leaders of humanity are instead largely infantilized by modernity, and so resemble the mythopoeic, childlike ancients.

Here, finally, I’ve pointed out that there’s no real progress in nature, since nature is undead rather than enchanted by personal qualities such as meaning or purpose, and yet I’ve affirmed the existential value of personal authenticity. I promised to return to this apparent contradiction. My solution, as I’ve explained at length elsewhere, is to reduce normative evaluation to the aesthetic kind. For example, I say intellectual integrity is better than self-delusion. But is that judgment as superficial and subjective as a moral principle in light of philosophical naturalism? Not if the goodness of personal integrity, and more specifically of the coherence of the worldview that drives your behaviour, is thought of as a kind of beauty. When we take up the aesthetic perspective, all processes seem not just undead but artistically creative. Life itself becomes art, and our aesthetic duty is to avoid the ugliness of cliché and to strive for ingenious and subversive originality in our actions.

Is the aesthetic attitude as arbitrary as a theistic interpretation of the world, given science-centered naturalism? No, because aesthetics falls out of the objectification made possible by scientific skepticism. We see something as an art object when we see it as complete in itself and thus as useless and indifferent to our concerns, the opposite being a utilitarian or pragmatic stance. And that’s precisely the essence of cosmicism, which is the darkest part of modern wisdom. Natural things, as such, are complete in themselves, meaning that they exist and develop for no human reason. That’s the horror of nature: the world doesn’t care about us, our adaptability notwithstanding, and so we’re bound to be overwhelmed by natural forces and to perish with just as little warning as we were given when nature evolved us in the first place. But the point here is that the flipside of this horror is that nature is full of art! The undeadness of things is also their sublime beauty or raw ugliness. When we recognize the alienness and monstrosity of natural processes, because we’ve given up naïve anthropocentrism, we’ve already adopted the aesthetic attitude. That’s because we’ve declined to project our interests onto what are wholly impersonal things, and so we objectify and aestheticize them with one and the same act of humility. The angst and the horror we feel when we understand what nature really is, and thus how impersonal we ourselves are, are also aesthetic reactions. Angst is the dawning of awe as we begin to fathom nature’s monstrous scope; horror is the awakening of pantheistic fear of the madness of the artist responsible for so much wasted art. The aesthetic values that are also existential ones aren’t merely subjective, because nature’s undead creativity is all too real.

Text as Teeter-Totter

by rsbakker

Neuropath will always occupy a special yet prickly place in my psyche. The book is special to me because of its genesis, first and foremost, arising as it did out of what (I can now see) was a truly exceptional experience teaching Popular Culture and a bet with my incredulous wife. But it’s also special because of the kind of critical reception it’s since received: I’ve actually come across reviews warning people to take Thomas Metzinger’s blurb, “You should think twice before reading this!” seriously. I was aiming for something that balanced the visceral on a philosophical edge, so I was overjoyed by these kinds of visceral responses. But I was troubled that no one seemed to be grasping the philosophical beyond the visceral, seeing the implications considered in the book beyond what was merely personal. Then, several years back, someone sent me this link to Steven Shaviro’s penetrating and erudite review of Neuropath. And I can remember feeling as though some kind of essential circuit between author, book, and critic had been closed.

That the book had truly been completed.

Now I’m genuinely honoured to have the opportunity to once again complete that circuit in the flesh at Western University in a couple of weeks’ time. Steven Shaviro has spent his career jamming cutting-edge speculative fiction and speculative theory together in his skull, a semantic Large Hadron Collider, and publishing the resulting Feynman diagrams on the ground-breaking The Pinocchio Theory as well as in his numerous scholarly works. He will be presenting on Neuropath, and I will be responding, at a public lecture on Thursday, February 13th, at 4:30 PM in the North Campus Building, Rm 117. All are welcome.