Three Pound Brain

No bells, just whistling in the dark…

Tag: meaning

On the Death of Meaning

by rsbakker

My copy of New Directions In Philosophy and Literature arrived yesterday…

New Directions

The anthology features an introduction by Claire Colebrook, as well as papers by Graham Harman, Graham Priest, Charlie Blake, and more. A prepub version of my contribution, “On the Death of Meaning,” can be found here.


Dennett’s Black Boxes (Or, Meaning Naturalized)

by rsbakker

“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations. ”

—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”

So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis a vis one another.

Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are apparent only.

Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from the one supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.

The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account nevertheless fails to explain the significance of what is going on.
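For readers who prefer their fables executable, here is a minimal sketch of the setup (toy placeholder ‘facts,’ nothing from Dennett’s actual text). It cheats exactly where the fable says the supercomputers do their work: the truth of each claim is simply carried along as a flag, whereas in the fable it is the product of a vast mechanical evaluation the researchers can decompose but never interpret.

```python
import random

# Toy stand-ins for the databases of 'true facts' (placeholders, not Dennett's).
TRUE_FACTS = ["snow is white", "two plus two is four"]

def button_box(button):
    """Transmit a true claim on 'alpha', a falsified one on 'beta'."""
    fact = random.choice(TRUE_FACTS)
    if button == "alpha":
        return ("CLAIM", fact, True)
    if button == "beta":
        return ("CLAIM", "it is not the case that " + fact, False)
    raise ValueError("unknown button")

def bulb_box(signal):
    """Light red for agreement, green for disagreement, yellow for gibberish."""
    if not (isinstance(signal, tuple) and len(signal) == 3 and signal[0] == "CLAIM"):
        return "yellow"   # the researchers' substituted signals always land here
    _, _, claim_is_true = signal
    return "red" if claim_is_true else "green"

# The manifest regularities the researchers record:
print(bulb_box(button_box("alpha")))     # red
print(bulb_box(button_box("beta")))      # green
print(bulb_box("researcher tampering"))  # yellow
```

Run it and you recover exactly the regularities the researchers record: alpha yields red, beta yields green, tampering yields yellow.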

Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.

Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings are behind the communication the whole nature of the research enterprise is radically transformed, much as it is when they show up to explain their ‘philosophical toy.’

This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.

The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders running to catch a fly ball keep the ball fixed in place in their visual field. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
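A minimal sketch of the logic, assuming a toy geometry (the ball drops straight down, and the ‘fielder’ is simply placed at the constant-angle spot each step rather than modelled as running), shows why the heuristic works: keep the gaze angle to the ball constant and you arrive where it lands without ever computing a trajectory.

```python
import math

def gaze_heuristic_position(ball_x, ball_height, locked_angle):
    """
    Where the gaze heuristic puts the fielder this instant: the spot from
    which the ball sits at the same gaze angle as when it was first locked
    onto. No velocities, no trajectory model, just one correlated cue.
    """
    return ball_x - ball_height / math.tan(locked_angle)

# Toy fly ball dropping straight down at x = 20 (invented numbers).
fielder_x = 0.0
locked_angle = math.atan2(20.0, 20.0 - fielder_x)   # lock gaze at ball height 20
for t in range(21):
    ball_height = 20.0 - t                           # ball descends one unit per step
    fielder_x = gaze_heuristic_position(20.0, ball_height, locked_angle)

print(round(fielder_x, 2))   # 20.0 -- the fielder ends up under the ball as it lands
```

The only input is a cue, the gaze angle, which happens to be reliably correlated with where the ball will land; the causal facts about spin, drag, and velocity never enter the picture.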

This is why Al and Bo’s revelation has the effect of mooting almost all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.

The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.

Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.

“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.

What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis a vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.

Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:

“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357

The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensable to a certain crucial ‘high level’ kind of causal explanation. He continues:

“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357

Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.

With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:

“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358

Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.

Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers only take us so far, and only in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we’re articulating a theory that inherits those limits. If we confuse that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to confuse heuristic limits for general ones—to run afoul of the ‘only-game-in-town effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).

Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).

Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.

Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see, “On the Interpretation of Artificial Souls“). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)
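A toy contrast may help fix the distinction, under an invented kettle scenario (the numbers and the whistle cue are mine, purely for illustration): a source sensitive solver models the physical constraints that make the system tick, while a source insensitive solver consults a cue that happens, in its home ecology, to be reliably correlated with those constraints, and it fails silently the moment that correlation is broken.

```python
def source_sensitive_boiled(power_watts, seconds, mass_kg, start_temp_c=20.0):
    """Model the physical constraints: energy supplied vs. energy needed to reach 100 C."""
    energy_needed = mass_kg * 4186.0 * (100.0 - start_temp_c)   # J; specific heat of water
    return power_watts * seconds >= energy_needed

def source_insensitive_boiled(kettle):
    """Exploit a cue correlated with boiling; no access to what makes the kettle tick."""
    return kettle.get("whistling", False)

kettle = {"whistling": True}                       # ordinary ecology: the cue tracks its source
print(source_sensitive_boiled(2000, 180, 1.0))     # True: 360 kJ supplied, ~335 kJ needed
print(source_insensitive_boiled(kettle))           # True: cheap, and right

kettle_broken_whistle = {"whistling": False}       # cue decoupled from its source
print(source_insensitive_boiled(kettle_broken_whistle))   # False: the heuristic misfires
```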

Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.

Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? Certainly seems that way, given how we cobble the two modes together both in science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.

The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.

This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarce had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see, “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).

The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’— information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.

In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.

And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.

Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.

But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.

Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as itself a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.

What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.

Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The yellow light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two remaining lights and the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.

This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.

The degree to which meaning can be genuinely naturalized.

We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding how they work, where they work, and the ways they go wrong.
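Sketched as a research posture rather than a theory, this ‘adaptive toolbox’ reading might look something like the following (the entries and application conditions are invented placeholders, not Gigerenzer’s own catalogue): each heuristic is indexed by the cue it exploits, the conditions under which that cue remains correlated with its source, and its characteristic failure mode.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Heuristic:
    name: str
    cue: str                                  # the accessible information it exploits
    applies: Callable[[dict], bool]           # its 'application conditions'
    failure_mode: str                         # how it goes wrong outside them

TOOLBOX = [
    Heuristic(
        name="gaze heuristic",
        cue="change in gaze angle to target",
        applies=lambda problem: problem.get("target_visible", False),
        failure_mode="misfires when the target is occluded or the geometry breaks down",
    ),
    Heuristic(
        name="recognition heuristic",
        cue="which option is recognized",
        applies=lambda problem: problem.get("recognition_tracks_criterion", False),
        failure_mode="misfires when familiarity is uncorrelated with the criterion",
    ),
]

def applicable_tools(problem):
    """Return the heuristics whose application conditions the problem satisfies."""
    return [h.name for h in TOOLBOX if h.applies(problem)]

print(applicable_tools({"target_visible": True}))   # ['gaze heuristic']
```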

The Meaning Wars

by rsbakker

Meaning

Apologies all for my scarcity of late. Between battling snow and Sranc, I’ve scarce had a moment to sit at this computer. Edward Feser has posted “Post-intentional Depression,” a thorough rebuttal to my Scientia Salon piece, “Back to Square One: Toward a Post-Intentional Future,” which Peter Hankins at Conscious Entities has also responded to with “Intellectual Catastrophe.” I’m interested in criticisms and observations of all stripes, of course, but since Massimo has asked me for a follow-up piece, I’m especially interested in the kinds of tactics/analogies I could use to forestall the typical tu quoque reactions eliminativism provokes.

The Asimov Illusion

by rsbakker

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: The way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience this, and so coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow functionally, at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in stopping at the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The domains of second-order intentional facts posited by Intentionalists can only count toward ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism that’s based on the biology of the brain, a positive thesis that possesses far ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as questions of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to them. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as of a piece with vulgar claims regarding the spontaneity of free will, and not to conclude that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no embarrassing coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the network as a whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis a vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume it will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where all AI will peak, rather than continue their exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, possessing a bargaining power that lies entirely with politicians that the Investors own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real (2012/10/29)

by rsbakker

Since Massimo Pigliucci has reposted Julia Galef’s tepid defense of transhumanism from a couple years back, I thought I would repost the critique I gave last fall, an argument which actually turns Galef’s charge of ‘essentialism’ against transhumanism. Short of some global catastrophe, transhumanism is coming (for those who can afford it, at least) whether we want it to or not. My argument is simply that transhumanists need to recognize that the very values they use to motivate their position are likely among the things our posthuman descendants will leave behind.

.

When alien archaeologists sift through the rubble of our society, which public message, out of all those they unearth, will be the far and away most common?

The answer to this question is painfully obvious–when you hear it, that is. Otherwise, it’s one of those things that is almost too obvious to be seen.

Sale… Sale–or some version of it. On sale. For sale. 10% off. 50% off. Bigger savings. Liquidation event!

Or, in other words, more for less.

Consumer society is far too complicated to be captured in any single phrase, but you could argue that no phrase better epitomizes its mangled essence. More for less. More for less. More for less.

Me-me-more-more-me-me-more-arrrrrgh!

Thus the intuitive resonance of “More Human than Human,” the infamous tagline of the Tyrell Corporation, or even ‘transhumanism’ more generally, which has been vigorously rebranding itself the past several months as ‘H+,’ an abbreviation of ‘Humanity plus.’

What I want to do is drop a few rocks into the hungry woodchipper of transhumanist enthusiasm. Transhumanism has no shortage of critics, but given a potent brand and some savvy marketing, it’s hard not to imagine the movement growing by leaps and bounds in the near future. And in all the argument back and forth, no one I know of (with the exception of David Roden, whose book I eagerly anticipate) has really paused to consider what I think is the most important issue of all. So what I want to do is isolate a single, straightforward question, one which the transhumanist has to be able to answer to anchor their claims in anything resembling rational discourse (exuberant discourse is a different story). The idea, quite simply, is to force them to hold the fingers they have crossed plain for everyone to see, because the fact is, the intelligibility of their entire program depends on research that is only just getting under way.

I think I can best sum up my position by quoting the philosopher Andy Clark, one of the world’s foremost theorists of consciousness and cognition, who, after considering competing visions of our technological future, good and bad, writes, “Which vision will prove the most accurate depends, to some extent, on the technologies themselves, but it depends also–and crucially–upon a sensitive appreciation of our own nature” (Natural-Born Cyborgs, 173). It’s this latter condition, the ‘sensitive appreciation of our own nature,’ that is my concern, if only because this is precisely what I think Clark and just about everyone else fails to do.

First, we need to get clear on just how radical the human future has become. We can talk about the singularity, the transformative potential of nano-bio-info-technology, but it serves to look back as well, to consider what was arguably humanity’s last great break with its past, what I will here call the ‘Old Enlightenment.’ Even though no social historical moment so profound or complicated can be easily summarized, the following opening passage, taken from a 1784 essay called, “An Answer to the Question: ‘What is Enlightenment?’” by Immanuel Kant, is the one scholars are most inclined to cite:

“Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own reason without the guidance of another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolution and courage to use it without the guidance of another. The motto of the enlightenment is therefore: Sapere aude! Have courage to use your own understanding!” (“An Answer to the Question: ‘What is Enlightenment?’” 54)

Now how modern is this? For my own part, I can’t count all the sales pitches this resonates with, especially when it comes to that greatest of contradictions, the television commercial. What is Enlightenment? Freedom, Kant says. Autonomy, not from the political apparatus of the state (he was a subject of Frederick the Great, after all), but from the authority of traditional thought–from our ideological inheritance. More new. Less old. New good. Old bad. Or in other words, More better, less worse. The project of the Enlightenment, according to Kant, lies in the maximization of intellectual and moral freedom, which is to say, the repudiation of what we were and an openness to what we might become. Or, as we still habitually refer to it, ‘Progress.’ The Old Enlightenment effectively rebranded humanity as a work in progress, something that could be improved–enhanced–through various forms of social and personal investment. We even have a name for it, nowadays: ‘human capital.’

The transhumanists, in a sense, are offering nothing new in promising the new. And this is more than just ironic. Why? Because even though the Old Enlightenment was much less transformative socially and technologically than the New will almost certainly be, the transhumanists nevertheless assume that it was far more transformative ideologically. They assume, in other words, that the New Enlightenment will be more or less conceptually continuous with the Old. Where the Old Enlightenment offered freedom from our ideological inheritance, but left us trapped in our bodies, the New Enlightenment is offering freedom from our biological inheritance–while leaving our belief systems largely intact. They assume, quite literally, that technology will deliver more of what we want physically, not ideologically.

More better

Of course, everything hinges upon the ‘better,’ here. More is not a good in and of itself. Things like more flooding, more tequila, or more herpes, just for instance, plainly count as more worse (although, if the tequila is Patron, you might argue otherwise). What this means is that the concept of human value plays a profound role in any assessment of our posthuman future. So in the now canonical paper, “Transhumanist Values,” Nick Bostrom, the Director of the Future of Humanity Institute at Oxford University, enumerates the principal values of the transhumanist movement, and the reasons why they should be embraced. He even goes so far as to provide a wish list, an inventory of all the ways we can be ‘more human than human’–though he seems to prefer the term ‘enhanced.’ “The limitations of the human mode of being are so pervasive and familiar,” he writes, “that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté.” And so he gives us a shopping list of our various incapacities: lifespan; intellectual capacity; body functionality; sensory modalities, special faculties and sensibilities; mood, energy, and self-control. He characterizes each of these categories as constraints, biological limits that effectively prevent us from reaching our true potential. He even provides a nifty little graph to visualize all that ‘more better’ out there, hanging like ripe fruit in the garden of our future, just waiting to be plucked, if only–as Kant would say–we possess the courage.

As a philosopher, he’s too sophisticated to assume that this biological emancipation will simply spring from the waxed loins of unfettered markets or any such nonsense. He fully expects humanity to be tested by this transformation–”[t]ranshumanism,” as he writes, “does not entail technological optimism”–so he offers transhumanism as a kind of moral beacon, a star that can safely lead us across the tumultuous waters of technological transformation to the land of More-most-better–or as he explicitly calls it elsewhere, Utopia.

And to his credit, he realizes that value itself is in play, such is the profundity of the transformation. But for reasons he never makes entirely clear, he doesn’t see this as a problem. “The conjecture,” he writes, “that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions.” And so, armed with a mystically irrefutable blanket assertion, he goes on to characterize value itself as a commodity to be amassed: “Transhumanism,” he writes, “promotes the quest to develop further so that we can explore hitherto inaccessible realms of value.”

Now I’ve deliberately refrained from sarcasm up to this point, even though I think it is entirely deserved, given transhumanism’s troubling ideological tropes and explicit use of commercial advertising practices. You only need watch the OWN channel for five minutes to realize that hope sells. Heaven forbid I inject any anxiety into what is, on any account, an unavoidable, existential impasse. I mean, only the very fate of humanity lies in the balance. It’s not like your Netflix is going to be cancelled or anything.

For those unfortunates who’ve read my novel Neuropath, you know that I am nowhere near as sunny about the future as I sound. I think the future, to borrow an acronym from the Second World War, has to be–has to be–FUBAR. Totally and utterly, Fucked Up Beyond All Recognition. Now you could argue that transhumanism is at least aware of this possibility. You could even argue, as some Critical Posthumanists (as David Roden classifies them) do, that FUBAR is exactly what we need, given that the present is so incredibly FU. But I think none of these theorists really has a clear grasp of the stakes. (And how could they, when I so clearly do?)

Transhumanism may not, as Nick Bostrom says, entail ‘technological optimism,’ but as I hope to show you, it most definitely entails scientific optimism. Because you see, this is precisely what falls between the cracks in debates on the posthuman: everyone is so interested in what Techno-Santa has in his big fat bag of More-better, that they forget to take a hard look at Techno-Santa, himself, the science that makes all the goodies, from the cosmetic to the apocalyptic, possible. Santa decides what to put in the bag, and as I hope to show you, we have no reason whatsoever to trust the fat bastard. In fact, I think we have good reason to think he’s going to screw us but good.

As you might expect, the word ‘human’ gets bandied about quite a bit in these debates–we are, after all, our own favourite topic of conversation, and who doesn’t adore daydreaming about winning the lottery? And by and large, the term is presented as a kind of given: after all, we are human, and as such, obviously know pretty much all we need to know about what it means to be human–don’t we?

Don’t we?

Maybe.

This is essentially Andy Clark’s take in Natural-Born Cyborgs: Given what we now know about human nature, he argues, we should see that our nascent or impending union with our technology is as natural as can be, simply because, in an important sense, we have always been cyborgs, which is to say, at one with our technologies. Clark is a famous proponent of something called the Extended Mind Thesis, and for more than a decade he has argued forcefully that human consciousness is not something confined to our skull, but rather spills out and inheres in the environmental systems that embed the neural. He thinks consciousness is an interactionist phenomenon, something that can only be understood in terms of neuro-environmental loops. Since he genuinely believes this, he takes it as a given in his consideration of our cyborg future.

But of course, it is nowhere near a ‘given.’ It isn’t even a scientific controversy: it’s a speculative philosophical opinion. Fascinating, certainly. But worth gambling the future of humanity?

My opinion is equally speculative, equally philosophical–but unlike Clark, I don’t need to assume that it’s true to make my case, only that it’s a viable scientific possibility. Nick Bostrom, of all people, actually explains it best, even though he’s arrogant enough to think he’s arguing for his own emancipatory thesis!

“Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that the failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about “shadows”, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.” (“Transhumanist Values”)

Now this is precisely what I think, that our ‘cognitive limitations’ have forced us to make do with ‘shadows,’ ‘oversimplified and dumbed-down’ information, particularly regarding ourselves–which is to say, the human. Since I’ve already quoted the opening passage from Kant’s “What is Enlightenment?” it perhaps serves, at this point, to quote the closing passage. Speaking of the importance of civil freedom, Kant concludes: “Eventually it even influences the principles of governments, which find that they can themselves profit by treating man, who is more than a machine, in a manner appropriate to his dignity” (60). Kant, given the science of his day, could still assert a profound distinction between man, the possessor of values, and machine, the possessor of none. Nowadays, however, the black box of the human brain has been cracked open, and the secrets that have come tumbling out would have made Kant shake with terror or fury. Man, we now know, is a machine–that much is simple. The question, and I assure you it is very real, is one of how things like moral dignity–which is to say, things like value–arise from this machine, if at all.

It literally could be the case that value is another one of these ‘shadows,’ an ‘oversimplified’ and ‘dumbed-down’ way to make the complexities of evolutionary effectiveness ‘fit inside a human brain.’ It now seems pretty clear, for instance, that the ‘feeling of willing’ is a biological subreption, a cognitive illusion that turns on our utter blindness to the neural antecedents to our decisions and thoughts. The same seems to be the case with our feeling of certainty. It’s also becoming clear that we only think we have direct access to things like our beliefs and motivations, that, in point of fact, we use the same ‘best guess’ machinery that we use to interpret the behaviour of others to interpret ourselves as well.

The list goes on. But the only thing that’s clear at this point is that we humans are not what we thought we were. We’re something else. Perhaps something else entirely. The great irony of posthuman studies is that you find so many people puzzling and pondering the what, when, and how of our ceasing to be human in the future, when essentially that process is happening now, as we speak. Put in philosophical terms, the ‘posthuman’ could be an epistemological achievement rather than an ontological one. It could be that our descendants will look back and laugh their gearboxes off at the notion of a bunch of soulless robots worrying about the consequences of becoming a bunch of soulless robots.

So here’s the question I would ask Mr. Bostrom: Which human are you talking about? The one you hope that we are, or the one that science will show us to be?

Either way, transhumanism as praxis–as a social movement requiring real-world action like membership drives and market branding–is well and truly ‘forked,’ to use a chess analogy: ‘Better living through science’ cannot be your foundational assumption unless you are willing to seriously consider what science has to say. You don’t get to pick and choose which traditional illusion you get to cling to.

Transhumanism, if you think about it, should be renamed transconfusionism, and rebranded as X+.

In a sense what I’m saying is pretty straightforward: no posthumanism that fails to consider the problem of the human (which is just to say, the problem of meaning and value) is worthy of the name. Such posthumanisms, I think anyway, are little more than wishful thinking, fantasies that pretend otherwise. Why? Because at no time in human history has the nature of the human been more in doubt.

But there has to be more to the picture, doesn’t there? This argument is just too obvious, too straightforward, to have been ‘overlooked’ these past couple decades. Or maybe not.

The fact is, no matter how eloquently I argue, no matter how compelling the evidence I adduce, how striking or disturbing the examples, next to no one in this room is capable of slipping the intuitive noose of who and what they think they are. The seminal American philosopher Wilfrid Sellars calls this the Manifest Image, the sticky sense of subjectivity provided by our immediate intuitions–and here’s the thing, no matter what science has to say (let alone a fantasy geek with a morbid fascination with consciousness and cognition). To genuinely think the posthuman requires us to see past our apparent, or manifest, humanity–and this, it turns out, is difficult in the extreme. So, to make my argument stick, I want to leave you with a way of understanding both why my argument is so destructive of transhumanism, and why that destructiveness is nevertheless so difficult to conceive, let alone to believe.

Look at it this way. The explanatory paradigm of the life sciences is mechanistic. Either we humans are machines, or everything from the Krebs cycle to cell mitosis is magical. This puts the question of human morality and meaning in an explanatory pickle, because, for whatever reason, the concepts belonging to morality and meaning just don’t make sense in mechanistic terms. So either we need to understand how machines like us generate meaning and morality, or we need to understand how machines like us hallucinate meaning and morality.

The former is, without any doubt, the majority position. But the latter, the position that occupies my time, is slowly growing, as is the mountain of counterintuitive findings in the sciences of the mind and brain. I have, quite against my inclination, prepared a handful of images to help you visualize this latter possibility, what I call the Blind Brain Theory.

Imagine we had perfect introspective access, so that each time we reflected on ourselves we were confronted with something like this:

We would see it all, all the wheels and gears behind what William James famously called the “blooming, buzzing confusion” of conscious life. Would there be any ‘choice’ in this system? Obviously not, just neural mechanisms picking up where environmental mechanisms have left off. How about ‘desire’? Again, nothing we really could identify as such, given that we would know, in intimate detail, the particulars of the circuits that keep our organism in homeostatic equilibrium with our environments. Well, how about morals, the values that guide us this way and that? Once again, it’s hard to understand what these might be, given that we could, at any moment, inspect the mechanistic regularities that in fact govern our behaviour. So no right or wrong? Well, what would these be? Of course, given the unpredictability of events, the mechanism would malfunction periodically, throw its wife’s work slacks into the dryer, maybe have a tooth or two knocked out of its gears. But this would only provide information regarding the reliability of its systems, not its ‘moral character.’

Now imagine dialling back the information available for introspective access, so that your ability to perfectly discriminate the workings of your brain becomes foggy:

Now imagine a cost-effectiveness expert (named ‘Evolution’) comes in, and tells you that even your foggy but complete access is far, far too expensive: computation costs calories, you know! So he goes through and begins blacking out whole regions of access according to arcane requirements only he is aware of. What’s worse, he’s drunk and stoned, and so there’s a haphazard, slap-dash element to the whole procedure, leaving you with something like this:

But of course, this foggy and fractional picture actually presumes that you have direct introspective access to information regarding the absence of information, when this is plainly not the case, and not required, given the rigours of your paleolithic existence. This means you can no longer intuit the fractional nature of your introspective intuitions, so that the far-flung fragments of access you possess actually seem like unified and sufficient wholes, leaving you with:

This impressionistic mess is your baseline. Your mind. But of course, it doesn’t intuitively seem like an impressionistic mess–quite the opposite, in fact. But this is simply because it is your baseline, your only yardstick. I know it seems impossible, but consider, if dreams lacked the contrast of waking life, they would be the baseline for lucidity, coherence, and truth. Likewise, there are degrees of introspective access–degrees of consciousness–that would make what you are experiencing this very moment seem like little more than a pageant of phantasmagorical absurdities.

The more the sciences of the brain discover, the more they are revealing that consciousness and its supposed verities–like value–are confused and fractional. This is the trend. If it persists, then meaning and morality could very well turn out to be artifacts of blindness and neglect–illusions to the degree that they seem whole and sufficient. If meaning and morality are best thought of as hallucinations, then the human, as it has been understood down through the ages, from the construction of Khufu’s pyramid to the first performance of Hamlet to the launch of Sputnik, never existed, and, in a crazy sense, we have been posthuman all along. And the transhuman program as envisioned by the likes of Nick Bostrom becomes little more than a hope founded on a pipedream.

And our future becomes more radically alien than any of us could possibly conceive, let alone imagine.