Three Pound Brain

No bells, just whistling in the dark…

Category: PHILOSOPHY

Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just yet another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim asserting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

It Is What It Is (Until Notified Otherwise)

by rsbakker


 

The thing to always remember when one finds oneself in the middle of some historically intractable philosophical debate is that path-dependency is somehow to blame. This is simply to say that the problem is historical, in that squabbles regarding theoretical natures always arise from some background of relatively problem-free practical application. At some point, some turn is taken and things that seem trivially obvious suddenly seem stupendously mysterious. St. Augustine, in addition to giving us one of the most famous quotes in philosophy, gives us a wonderful example of this in The Confessions when he writes:

“What, then, is time? If no one asks of me, I know; if I wish to explain to him who asks, I know not.” XI, XIV, 17

But the rather sobering fact is that this is the case with a great number of the second order questions we can pose. What is mathematics? What’s a rule? What’s meaning? What’s cause? And of course, what is phenomenal consciousness?

So what is it with second order interrogations? Why is ‘time talk’ so easily and effortlessly used even though we find ourselves gobsmacked each and every time someone asks what time qua time is? It seems pretty clear that either we lack the information required or the capacity required or some nefarious combination of both. If framing the problem like this sounds like a no-brainer, that’s because it is a no-brainer. The remarkable thing lies in the way it recasts the issue at stake, because as it turns out, the question of the information and capacity we have available is a biological one, and this provides a cognitive ecological means of tackling the problem. Since practical solving for time (‘timing’) is obviously central to survival, it makes sense that we would possess the information access and cognitive capacity required to solve a wide variety of timing issues. Given that theoretical solving for time (time-qua-time) isn’t central to survival (no species does it and only our species attempts it), it makes sense that we wouldn’t possess the information access and cognitive capacity required, that we would suffer time-qua-time blindness.

From a cognitive ecological perspective, in other words, St. Augustine’s perplexity should come as no surprise at all. Of course solving time-qua-time is mystifying: we evolved the access and capacity required for solving the practical problems of timing, and not the theoretical problem of time. Now I admit if the cognitive ecological approach ground to a halt here it wouldn’t be terribly illuminating, but there’s quite a bit more to be said: it turns out cognitive ecology is highly suggestive of the different ways we might expect our attempts to solve things like time-qua-time to break down.

What would it be like to reach the problem-solving limits of some practically oriented problem-solving mode? Well, we should expect our assumptions/intuitions to stop delivering answers. My daughter is presently going through a ‘cootie-catcher’ phase and is continually instructing me to ask questions, then upbraiding me when my queries don’t fit the matrix of possible ‘answers’ provided by the cootie-catcher (yes, no, and versions of maybe). Sometimes she catches these ill-posed questions immediately, and sometimes she doesn’t catch them until the cootie-catcher generates a nonsensical response.


Now imagine your child never revealed their cootie-catcher to you: you asked questions, then picked colours or numbers or animals, and it turned out some were intelligibly answered, and some were not. Very quickly you would suss out the kinds of questions that could be asked, and the kinds that could not. Now imagine unbeknownst to you that your child replaced their cootie-catcher with a computer running two separately tasked, distributed AlphaGo type programs, the first trained to provide well-formed (if not necessarily true) answers to basic questions regarding causality and nothing else, the second trained to provide well-formed (if not necessarily true) answers to basic questions regarding goals and intent. What kind of conclusions would you draw, or more importantly, assume? Over time you would come to suss out the questions generating ill-formed answers versus questions generating well-formed ones. But you would have no way of knowing that two functionally distinct systems were responsible for the well-formed answers: causal and purposive modes would seem the product of one cognitive system. In the absence of distinctions you would presume unity.
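A minimal sketch may help make the thought experiment concrete (the code and every name in it are mine, purely illustrative, not anything from the post or the paper). Two functionally distinct answerers hide behind a single interface, and nothing in the output betrays the dual structure responsible for it, so the questioner defaults to unity:

```python
# Toy illustration of 'default identity': two specialized systems,
# one visible interface, no trace of the division in the answers.

def causal_answerer(question: str) -> str:
    """Well-formed (if not necessarily true) answers about causes."""
    return f"Something brought it about: {question!r} has a cause."

def purposive_answerer(question: str) -> str:
    """Well-formed (if not necessarily true) answers about goals."""
    return f"Something is aimed at: {question!r} has a point."

def oracle(question: str) -> str:
    """The single face the questioner sees. Routing happens inside;
    no answer ever reveals which system produced it, or that there
    are two systems at all."""
    q = question.lower()
    if q.startswith("why did"):
        return causal_answerer(question)
    if q.startswith("what is the point of"):
        return purposive_answerer(question)
    return "..."  # ill-posed: neither system was built for this

for q in ("Why did the glass break?",
          "What is the point of promising?",
          "What is time?"):
    print(q, "->", oracle(q))
```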

Think of the difference between Plato likening memory to an aviary in the Theaetetus and the fractionate, generative memory we now know to be the case. The fact that Plato assumed as much, unity and retrieval, shouts something incredibly important once placed in a cognitive ecological context. What it suggests is that purely deliberative attempts to solve second-order problems, to ask questions like what is memory-qua-memory, will almost certainly run afoul of the problem of default identity, the identification that comes about for the want of distinctions. To return to our cootie-catcher example, it’s not simply that we would report unity regarding our child’s two AlphaGo-type programs the way Plato did with memory, it’s that information involving their dual structure would play no role in our cognitive economy whatsoever. Unity, you could say, is the assumption built into the system. (And this applies as much to AI as it does to human beings. The first ‘driverless fatality’ died because his Tesla Model S failed to distinguish a truck trailer from the sky.)

Default identity, I think, can play havoc with even the most careful philosophical interrogations—such as the one Eric Schwitzgebel gives in the course of rebutting Keith Frankish, both on his blog and in his response in The Journal of Consciousness Studies, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage.”

According to Eric, “Illusionism as a Theory of Consciousness” presents the phenomenal realist with a dilemma: either they commit to puzzling ontological features such as being simple, ineffable, intrinsic, and so on, or they commit to explaining those features away, which is to say, to some variety of Illusionism. Since Eric both believes that phenomenal consciousness is real, and that the extraordinary properties attributed to it are likely not real, he proposes a third way, a formulation of phenomenal experience that neither inflates it into something untenable, nor deflates it into something that is plainly not phenomenal experience. “The best way to meet Frankish’s challenge,” he writes, “is to provide something that the field of consciousness studies in any case needs: a clear definition of phenomenal consciousness, a definition that targets a phenomenon that is both substantively interesting in the way that phenomenal consciousness is widely thought to be interesting but also innocent of problematic metaphysical and epistemological assumptions” (2).

It’s worth noting the upshot of what Eric is saying here: the scientific study of phenomenal consciousness cannot, as yet, even formulate its primary explanandum. The trick, as he sees it, is to find some conceptual way to avoid the baggage, while holding onto some semblance of a wardrobe. And his solution, you might say, is to wear as many outfits as he possibly can. He proposes that definition by example is uniquely suited to anchor an ontologically and epistemologically innocent concept of phenomenal consciousness.

He has but one caveat: any adequate formulation of phenomenal consciousness has to account or allow for what Eric terms its ‘wonderfulness’:

If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition. 3

He concedes the traditional properties ascribed to phenomenal experience outrun naturalistic credulity, but the feature of beggaring belief remains to be explained. This is the part of Eric’s position to keep an eye on because it means his key defense against eliminativism is abductive. Whatever phenomenal consciousness is, it seems safe to say it is not something easily solved. Any account purporting to solve phenomenal consciousness that leaves the wonderfulness condition unsatisfied is likely missing phenomenal consciousness altogether.

And so Eric provides a list of positive examples including sensory and somatic experiences, conscious imagery, emotional experience, thinking and desiring, dreams, and even other people, insofar as we continually attribute these very same kinds of experiences to them. By way of negative examples, he mentions a variety of intimate, yet obviously not phenomenally conscious processes, such as fingernail growth, intestinal lipid absorption, and so on.

He writes:

Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness. 8

Intuition, the apparent obviousness of his examples, is what he stresses here. The beauty of definition by example is that offering instances of the phenomenon at issue allows you to remain agnostic regarding the properties possessed by that phenomenon. It actually seems to deliver the very metaphysical and epistemological innocence Eric needs to stave off the charge of inflation. It really does allow him to ditch the baggage and travel wearing all his clothes, or so it seems.

Meanwhile the wonderfulness condition, though determining the phenomenon, does so indirectly, via the obvious impact it has on human attempts to cognize experience-qua-experience. Whatever phenomenal consciousness is, contemplating it provokes wonder.

And so the argument is laid out, as spare and elegant as all of Eric’s arguments. It’s pretty clear these are examples of whatever it is we call phenomenal consciousness. Of course, there’s something about them that we find downright stupefying. Surely, he asks, we can be phenomenal realists in this austere respect?

For all its intuitive appeal, the problem with this approach is that it almost certainly presumes a simplicity that human cognition does not possess. Conceptually, we can bring this out with a single question: Is phenomenal consciousness the most folk psychologically obvious thing or feature the examples share, or is it obvious in some other respect? Eric’s claim amounts to saying the recognition of phenomenal consciousness as such belongs to everyday cognition. But is this the case? Typically, recognition of experience-qua-experience is thought to be an intellectual achievement of some kind, a first step toward the ‘philosophical’ or ‘reflective’ or ‘contemplative’ attitude. Shouldn’t we say, rather, that phenomenal consciousness is the most obvious thing or feature these examples share upon reflection, which is to say, philosophically?

This alternative need only be raised to drag Eric’s formulation back into the mire of conceptual definition, I think. But on a cognitive ecological picture, we can actually reframe this conceptual problematization in path-dependent terms, and so more forcefully insist on a distinction of modes and therefore a distinction in problem-solving ecologies. Recall Augustine, how we understand time without difficulty until we ask the question of time qua time. Our cognitive systems have no serious difficulty with timing, but then abruptly break down when we ask the question of time as such. Even though we had the information and capacity required to solve any number of practical issues involving time, as soon as we pose the question of time-qua-time that fluency evaporates and we find ourselves out-and-out mystified.

Eric’s definition by example, as an explicitly conceptual exercise, clearly involves something more than everyday applications of experience talk. The answer intuitively feels as natural as can be—there must be some property X these instances share or exclude, certainly!—but the question strikes most everyone as exceptional, at least until they grow accustomed to it. Raising the question, as Augustine shows us, is precisely where the problem begins, and as my daughter would be quick to remind Eric, cootie-catchers only work if we ask the right question. Human cognition is fractionate and heuristic, after all.


All organisms are immersed in potential information, difference-making differences that could spell the difference between life and death. Given the difficulties involved in the isolation of causes, they often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information organisms have, evolved and learned sensitivities to effects systematically correlated to those environmental systems relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers adapted to deep information environments, sensory cherry-pickers, bent on deriving as much behaviour from as little information as possible.

We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems and the resources required to solve them are wildly disparate, not all access is equal.

Information access, I think, divides cognition into two distinct forms, two different families of ‘AlphaGo type’ programs. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.
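To make the cue-correlation point concrete, here is a minimal sketch (my own toy model, not anything from the research mentioned below): a judgement system that reads nothing but a cue, and that succeeds or crashes exactly as the cue’s correlation with the hidden system holds or breaks:

```python
import random

def sample_world(correlation: float) -> tuple[bool, bool]:
    """Return (hidden_fact, cue). With probability `correlation`
    the cue tracks the hidden fact; otherwise it misleads."""
    fact = random.random() < 0.5
    cue = fact if random.random() < correlation else not fact
    return fact, cue

def heuristic(cue: bool) -> bool:
    """Source insensitive judgement: read the cue, nothing else.
    The hidden fact itself is never accessed."""
    return cue

def accuracy(correlation: float, trials: int = 10_000) -> float:
    hits = sum(heuristic(cue) == fact
               for fact, cue in (sample_world(correlation)
                                 for _ in range(trials)))
    return hits / trials

print(accuracy(0.95))  # ancestral ecology: cue reliable, ~0.95
print(accuracy(0.50))  # correlation broken: chance performance, 'crash space'
```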

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the necessity, the efficacy, and the limitations of source insensitive (cue correlative) cognition. Andrei Cimpian’s lab and the work of Klaus Fiedler (as well as that of the Adaptive Behaviour and Cognition Research Group more generally) are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)

So what are we to make of Eric’s attempt to innocently (folk psychologically) pose the question of experience-qua-experience in light of this rudimentary distinction?

If one takes the brain’s ability to cognize its own cognitive functions as a condition of ‘experience talk,’ it becomes very clear very quickly that experience talk belongs to a source insensitive cognitive regime, a system adapted to exploit correlations between the information consumed (cues) and the vastly complicated systems (oneself and others) requiring solution. This suggests that Eric’s definition by example is anything but theoretically innocent, assuming, as it does, that our source insensitive, experience-talk systems pick out something in the domain of source sensitive cognition… something ‘real.’ Defining by example cues our experience-talk system, which produces indubitable instances of recognition. Phenomenal consciousness becomes, apparently, an indubitable something. Given our inability to distinguish between our own cognitive systems (given ‘cognition-qua-cognition blindness’), default identity prevails; suddenly it seems obvious that phenomenal experience somehow, minimally, belongs to the order of the real. And once again, we find ourselves attempting to square ‘posits’ belonging to sourceless modes of cognition with a world where everything has a source.

We can now see how the wonderfulness condition, which Eric sees working in concert with his definition by example, actually cuts against it. Experience-qua-experience provokes wonder precisely because it delivers us to crash space, the point where heuristic misapplication leads our intuitions astray. Simply by asking this question, we have taken a component from a source insensitive cognitive system relying (qua heuristic) on strategic correlations to the systems requiring solution, and asked a completely different, source sensitive system to make sense of it. Philosophical reflection is a ‘cultural achievement’ precisely because it involves using our brains in new ways, applying ancient tools to novel questions. Doing so, however, inevitably leaves us stumbling around in a darkness we cannot see, running afoul of confounds we have no way of intuiting, simply because they impacted our ancestors not at all. Small wonder ‘phenomenal consciousness’ provokes wonder. How could the most obvious thing possess so few degrees of cognitive freedom? How could light itself deliver us to darkness?

I appreciate the counterintuitive nature of the view I’m presenting here, the way it requires seeing conceptual moves in terms of physical path-dependencies, as belonging to a heuristic gearbox where our numbness to the grinding perpetually convinces us that this time, at long last, we have slipped from neutral into drive. But recall the case of memory, the way blindness to its neurocognitive intricacies led Plato to assume it simple. Only now can we run our (exceedingly dim) metacognitive impressions of memory through the gamut of what we know, see it as a garden of forking paths. The suggestion here is that posing the question of experience-qua-experience poses a crucial fork in the consciousness studies road, the point where a component of source-insensitive cognition, ‘experience,’ finds itself dragged into the court of source sensitivity, and productive inquiry grinds to a general halt.

When I employ experience talk in a practical, first-order way, I have a great deal of confidence in that talk. But when I employ experience talk in a theoretical, second-order way, I have next to no confidence in that talk. Why would I? Why would anyone, given the near-certainty of chronic underdetermination? Even more, I can see no way (short of magic) for our brain to have anything other than radically opportunistic and heuristic contact with its own functions. Either specialized, simple heuristics comprise deliberative metacognition or deliberative metacognition does not exist. In other words, I see no way of avoiding experience-qua-experience blindness.

This flat out means that on a high-dimensional view (one open to as much relevant physical information as possible), there is just no such thing as ‘phenomenal consciousness.’ I am forced to rely on experience-related talk in theoretical contexts all the time, as do scientists in countless lines of research. There is no doubt whatsoever that experience-talk draws water from far more than just ‘folk psychological’ wells. But this just means that various forms of heuristic cognition can be adapted to various experimentally regimented cognitive ecologies—experience-talk can be operationalized. It would be strange if this weren’t the case, and it does nothing to alleviate the fact that solving for experience-qua-experience delivers us, time and again, to crash space.

One does not have to believe in the reality of phenomenal consciousness to believe in the reality of the systems employing experience-talk. As we are beginning to discover, the puzzle has never been one of figuring out what phenomenal experiences could possibly be, but rather figuring out the biological systems that employ them. The greater our understanding of this, the greater our understanding of the confounds characterizing that perennial crash space we call philosophy.

Breakneck: Review and Critical Commentary of Whiplash: How to Survive Our Faster Future by Joi Ito and Jeff Howe

by rsbakker


The thesis I would like to explore here is that Whiplash by Joi Ito and Jeff Howe is at once a local survival guide and a global suicide manual. Their goal “is no less ambitious than to provide a user’s manual to the twenty-first century” (246), a “system of mythologies” (108) embodying the accumulated wisdom of the storied MIT Media Lab. Since this runs parallel to my own project, I applaud their attempt. Like them, I think understanding the consequences of the ongoing technological revolution demands “an entirely new mode of thinking—a cognitive evolution on the scale of a quadruped learning to stand on its hind feet” (247). I just think we need to recall the number of extinctions that particular evolutionary feat required.

Whiplash was a genuine delight for me to read, and not simply because I’m a sucker for technoscientific anecdotes. At so many points I identified with the collection of misfits and outsiders that populate their tales. So, as an individual who fairly embodies the values promulgated in this book, I offer my own amendments to Ito and Howe’s heuristic source code, what I think is a more elegant and scientifically consilient way to understand not only our present dilemma, but the kinds of heuristics we will need to survive it…

Insofar as that is possible.

 

Emergence over Authority

General Idea: Pace of change assures normative obsolescence, which in turn requires openness to ‘emergence.’

“Emergent systems presume that every individual within that system possesses unique intelligence that would benefit the group.” 47

“Unlike authoritarian systems, which enable only incremental change, emergent systems foster the kind of nonlinear innovation that can react quickly to the kind of rapid changes that characterize the network age.” 48

Problems: Insensitive to the complexities of the accelerating social and technical landscape. The moral here should be, Does this heuristic still apply?

The quote above also points to the larger problem, which becomes clear by simply rephrasing it to read, ‘emergent systems foster the kind of nonlinear transformation that can react quickly to the kind of nonlinear transformations that characterize the network age.’ The problem, in other words, is also the solution. Call this the Putting Out Fire with Gasoline Problem. I wish Ito and Howe would have spent some more time considering it since it really is the heart of their strategy: How do we cope with accelerating innovation? We become as quick and innovative as we can.

 

Pull over Push

General Idea: Command and control over warehoused resources lacks the sensitivity to solve many modern problems, which are far better resolved by allowing the problems themselves to attract the solvers.

“In the upside-down, bizarre universe created by the Internet, the very assets on your balance sheet—from printing presses to lines of code—are now liabilities from the perspective of agility. Instead, we should try to use resources that can be utilized just in time, for just that time necessary, then relinquished.” 69

“As the cost of innovation continues to fall, entire communities that have been sidelined by those in power will be able to organize themselves and become active participants in society and government. The culture of emergent innovation will allow everyone to feel a sense of both ownership and responsibility to each other and to the rest of the world, which will empower them to create more lasting change than the authorities who write policy and law.” 71

Problems: In one sense, I think this chapter speaks to the narrow focus of the book, the degree it views the world through IT glasses. Trump examples the power of Pull. ISIS examples the power of Pull. ‘Empowerment’ is usually charged with positive connotations, until one applies it to criminals, authoritarian governments and so on. It’s important to realize that ‘pull’ runs any which way, rather than directly toward better.

 

Compasses over Maps

General Idea: Sensitivity to ongoing ‘facts on the ground’ generally trumps reliance on high-altitude appraisals of yesterday’s landscape.

“Of all the nine principles in the book, compasses over maps has the greatest potential for misunderstanding. It’s actually very straightforward: a map implies a detailed knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.” 89

Problems: I actually agree that this principle is the most apt to be misunderstood because I’m inclined to think Ito and Howe themselves might be misunderstanding it! Once again, we need to see the issue in terms of cognitive ecology: Our ancestors, you could say, suffered a shallow present and enjoyed a deep future. Because the mechanics of their world eluded them, they had no way of re-engineering them, and so they could trust the machinery to trundle along the way it always had. We find ourselves in the opposite predicament: As we master more and more of the mechanics of our world, we discover an ever-expanding array of ways to re-engineer them, meaning we can no longer rely on the established machinery the way our ancestors—and here’s the important bit—evolved to. We are shallow-present, deep-future creatures living in a deep-present, shallow-future world.

This, I think, is what Ito and Howe are driving at: just as the old rules (authorities) no longer apply, the old representations (maps) no longer apply either, forcing us to gerrymander (orienteer) our path.

 

Risk over Safety

General Idea: The cost of experimentation has plummeted to such an extent that being wrong no longer has the catastrophic market consequences it once had.

“The new rule, then, is to embrace risk. There may be nowhere else in this book that exemplifies how far our collective brains have fallen behind our technology.” 116

“Seventy million years ago it was great to be a dinosaur. You were a complete package; big, thick-skinned, sharp-toothed, cold-blooded, long-lived. And it was great for a long, long time. Then, suddenly… it wasn’t so great. Because of your size, you needed an awful lot of calories. And you needed an awful lot of room. So you died. You know who outlived you? The frog.” 120

Problems: Essentially the argument is that risky ventures in the old economy are now safe, and that safe ventures are now risky, which means the argument is actually a ‘safety over risk’ one. I find this particular maxim so interesting because I think it really throws their lack of any theory of the problem they take themselves to be solving/ameliorating into relief. Really the moral here is experimentation pays.




 

Disobedience over Compliance

General Idea: Traditional forms of development stifle the very creativity institutions require to adapt to the accelerating pace of technological change.

“Since the 1970’s, social scientists have recognized the positive impact of “positive deviants,” people whose unorthodox behavior improves their lives and has the potential to improve their communities if it’s adopted more widely.” 141

“The people who will be the most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.” 141

Problems: Disobedience is not critique, and Ito and Howe are careful to point this out, but they fail to mention what role, if any, criticality plays in their list of principles. Another problem has to do with the obvious exception bias at work in their account. Sure, being positive deviants has served Ito and Howe and the generally successful people they count as their ingroup well, but what about the rest of us? This is why I cringe every time I hear Oscar acceptance speeches urging young wannabe thespians to ‘never give up on their dream,’ because winners—who are winners by virtue of being the exception—see themselves as proof positive that it can be done if you just try-try-try… This stuff is what powers the great dream-smashing factory called Hollywood—as well as Silicon Valley. All things being equal, I think being a ‘positive deviant’ is bound to generate far more grief than success.

And this, I think, underscores the fundamental problem with the book, which is the question of application. I like to think of myself as a ‘positive deviant,’ but I’m aware that I am often identified as a ‘contrarian flake’ in the various academic silos I piss in now and again. By opening research ingroups to the wider world, the web immediately requires members to vet communications in a manner they never had to before. The world, as it turns out, is filled with contrarian flakes, so the problem becomes one of sorting positive deviants (like myself (maybe)), extra-institutional individuals with positive contributions to make, from all those contrarian flakes (like myself (maybe)).

Likewise, given that every communal enterprise possesses wilful, impassioned, but unimaginative employees, how does a manager sort the ‘positive deviant’ out?

When does disobedience over compliance apply? This is where the rubber hits the road, I think. The whole point of the (generally fascinating) anecdotes is to address this very issue, but aside from some gut estimation of analogical sufficiency between cases, we really have nothing to go on.

 

Practice over Theory

General Idea: Traditional forms of education and production emphasize planning beforehand and learning outside the relevant contexts of application, even though humans are simply not wired for this, and even though those contexts are transforming ever more quickly.

“Putting practice over theory means recognizing that in a faster future, in which change has become a new constant, there is often a higher cost to waiting and planning than there is to doing and improvising.” 159

“The Media Lab is focussed on interest-driven, passion-driven learning through doing. It is also trying to understand and deploy this form of creative learning into a society that will increasingly need more creative learners and fewer human beings who can solve problems better tackled by robots and computers.” 170

Problems: Humans are the gerrymandering species par excellence, leveraging technical skills into more and more forms of environmental mastery. In this respect it’s hard to argue against Ito and Howe’s point, given the caveats they are careful to provide.

The problem lies in the supercomplex environmental consequences of that environmental mastery: Whiplash is advertised as a manual for how to environmentally master the consequences of environmental mastery, so obviously, environmental mastery, technical innovation, ‘progress’—whatever you want to call it—has become a life and death matter, something to be ‘survived.’

The thing people really need to realize in these kinds of discussions is just how far we have sailed into uncharted waters, and just how fast the wind is about to grow.

 

Diversity over Ability

General Idea: Crowdsourcing, basically, the term Jeff Howe coined referring to the way large numbers of people from a wide variety of backgrounds can generate solutions eluding experts.

“We’re inclined to believe the smartest, best trained people in a given discipline—the experts—are the best qualified to solve a problem in their specialty. And indeed, they often are. When they fail, as they will from time to time, our unquestioning faith in the principle of ‘ability’ leads us to imagine that we need to find a better solver: other experts with similarly high levels of training. But it is in the nature of high ability to reproduce itself—the new team of experts, it turns out, trained at the same amazing schools, institutes, and companies as the previous experts. Similarly brilliant, our two sets of experts can be relied on to apply the same methods to the problem, and share as well the same biases, blind spots, and unconscious tendencies.” 183

Problems: Again I find myself troubled not so much by the moral as by the articulation. If you switch the register from ‘ability’ to competence and consider the way ingroup adjudications of competence systematically perceive outgroup contributions to be incompetent, then you have a better model to work with here, I think. Each of us carries a supercomputer in our heads, and all cognition exhibits path-dependency and is therefore vulnerable to blind alleys, so the power of distributed problem solving should come as no surprise. The problem, here, rather, is one of seeing through our ingroup blinders, and coming to understand how the ways we instinctively identify competence foreclose on distributed cognitive resources (which can take innumerable forms).

Institutionalizing diversity seems like a good first step. But what about overcoming ingroup biases more generally? And what about the blind-alley problem (which could be called the ‘double-blind alley problem,’ given the way reviewing the steps taken tends to confirm the necessity of the path taken)? Is there a way to suss out the more pernicious consequences of cognitive path-dependency?

 

Resilience over Strength

General Idea: The reed versus the tree.

Problems: It’s hard to bitch about a chapter beginning with a supercool Thulsa Doom quote.

Strike that—impossible.

 

Systems over Objects

General Idea: Unravelling contemporary problems means unravelling complex systems, which necessitates adopting the systems view.

“These new problems, whether we’re talking about curing Alzheimer’s or learning to predict volatile weather systems, seem to be fundamentally different, in that they seem to require the discovery of all the building blocks in a complex system.” 220

“Systems over objects recognizes that responsible innovation requires more than speed and efficiency. It also requires a constant focus on the overall impact of new technologies, and an understanding of the connections between people, their communities, and their environments.” 224

Problems: Since so much of Three Pound Brain is dedicated to understanding human experience and cognition in naturally continuous terms, I tend to think that ‘Systems over Subjects’ offers a more penetrating approach. The idea that things and events cannot be understood or appreciated in isolation is already firmly rooted in our institutional DNA, I think. The challenge, here, lies in squaring this way of thinking with everyday cognition, with our default ways of making sense of each other and ourselves. We are hardwired to see simple essences and sourceless causes everywhere we look. This means the cognitive ecology Ito and Howe are both describing and advocating is in some sense antithetical—and therefore alienating—to our ancestral ways of making sense of ourselves.




 

Conclusion

When I decided to post a review of this book, I opened an MSWord doc the way I usually do and began jotting down jumbled thoughts and impressions, including the reminder to “Bring up the problem of theorizing politics absent any account of human nature.” I had just finished reading the introduction by that point, so I read the bulk of Whiplash with this niggling thought in the back of my mind. Ito and Howe take care to avoid explicit political references, but as I’m sure they will admit, their project is political through and through. Politics has always involved science fiction; after all, how do you improve a future you can’t predict? Knowing human nature, our need to eat, to secure prestige, to mate, to procreate, and so on, is the only thing that allows us to predict human futures at all. Dystopias beg Utopias beg knowing what makes us tick.

In a time of radical, exponential social and environmental transformation, the primary question regarding human nature has to involve adaptability, our ability to cope with social and environmental transformation. The more we learn about human cognition, however, the more we discover that the human capacity to solve new problems is modular as opposed to monolithic, complex as opposed to simple. This in turn means that transforming different elements in our environments (the way technology does) can have surprising results.

So for example, given the ancestral stability of group sizes, it makes sense to suppose we would assess the risk of victimization against a fixed baseline whenever we encountered information regarding violence. Our ability to intuitively assess threats, in other words, depends upon a specific cognitive ecology, one where the information available is commensurate with the small communities of farmers and/or hunter-gatherers. This suggests the provision of ‘deep’ (ancestrally unavailable) threat information, such as that provided by the web or the evening news, would play havoc with our threat intuitions—as indeed seems to be the case.
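A deliberately crude sketch (the numbers are mine, purely illustrative) shows how a heuristic calibrated to a fixed ancestral baseline misfires once fed ‘deep’ threat information:

```python
# Toy model of a threat intuition calibrated to a fixed ancestral
# baseline; the constants are arbitrary illustrative guesses.

ANCESTRAL_COMMUNITY = 150      # rough ancestral group size
LOCAL_INCIDENTS_PER_YEAR = 1   # violence one would actually hear of

def intuited_risk(reports_heard: int) -> float:
    """Heuristic: treat every report heard as if it concerned one's
    own small community, scaling felt risk against the fixed baseline."""
    return reports_heard / ANCESTRAL_COMMUNITY

# Ancestral ecology: reports heard = local incidents, intuition tracks risk.
print(intuited_risk(LOCAL_INCIDENTS_PER_YEAR))  # ~0.007

# 'Deep' information ecology: the news supplies incidents drawn from
# millions of strangers; local risk is unchanged, the intuition is not.
print(intuited_risk(300))                        # 2.0 -- wildly inflated
```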

Human cognition is heuristic, through and through, which is to say dependent on environmental invariances, the ancestral stability of different relevant backgrounds. The relation between group size and threat information is but one of countless default assumptions informing our daily lives. The more technology transforms our cognitive ecologies, the more we should expect our intuitions to misfire, to prompt ineffective problem-solving behaviour like voting for ‘tough-on-crime’ political candidates. The fact is technology makes things easy that were never ‘meant’ to be easy. Consider how humans depended on all the people they knew before the industrial concentration of production, and so were forced to compromise, to see themselves as requiring friends and neighbours. You could source your clothes, your food, even your stories and religion to some familiar face. You grew up in an atmosphere of ambient, ingroup gratitude that continually counterbalanced your selfish impulses. After the industrial concentration of production, the material dependencies enforcing cooperation evaporated, allowing humans to indulge egocentric intuitions, the sweet-tooth of themselves, and ‘individualism’ was born, and with it all the varieties of social isolation comprising the ‘modern malaise.’

This cognitive ecological lens is the reason why I’ve been warning that the web was likely to aggravate processes of group identification and counter-identification, why I’ve argued that the tactics of 20th century progressivism had actually become more pernicious than efficacious, and suggested that forms of political atavism, even the rise of demagoguery, would become bigger and bigger problems. Where most of the world saw the Arab Spring as a forceful example of the web’s capacity to emancipate, I saw it as an example of ‘flash civil unrest,’ the ability of populations to spontaneously organize and overthrow existing institutional orders period, and only incidentally ‘for the better.’

If you entertained extremist impulses before the internet, you had no choice but to air your views with your friends and neighbours, where, all things being equal, the preponderance of views would be more moderate. The network constraints imposed by geography, I surmised, had the effect of ameliorating extremist tendencies. Absent the difficulty of organizing about our darker instincts, rationalizing and advertising them, I think we have good reason to fear. Humans are tribal through and through, as prone to acts of outgroup violence as ingroup self-sacrifice. On the cognitive ecological picture, it just so happens that technological progress and moral/political progress have marched hand in hand thus far. The bulk of our prosocial, democratic institutions were developed—at horrendous cost, no less—to maximize the ‘better angels’ of our natures and to minimize the worst, to engineer the kind of cognitive ecologies we required to flourish in the new social and technical environments—such as the industrial concentration of material dependency—falling out of the Renaissance and Enlightenment.

I readily acknowledge that better accounts can be found for the social phenomena considered above: what I contend is that all of those accounts will involve some nuanced understanding of the heuristic nature of human cognition and the kinds of ecological invariance it takes for granted. My further contention is that any adequate understanding of that heuristic nature raises the likelihood, perhaps even the inevitability, that human social cognition will effectively break down altogether. The problem lies in the radically heuristic nature of the cognitive modes we use to understand each other and ourselves. Since the complexity of our biocomputational nature renders it intractable, we had to develop ways of predicting/explaining/manipulating behaviour that have nothing to do with the brains behind that behaviour, and everything to do with its impact on our reproductive fortunes. Social problem-solving, in other words, depends on the stability of a very specific cognitive ecology, one entirely innocent of the possibility of AI.

For me, the most significant revelation from the Ashley Madison scandal was the ease with which men were fooled into thinking they were attracting female interest. And this wasn’t just an artifact of the venue: Ito’s MIT colleague Sherry Turkle, in addition to systematically describing the impact of technology on interpersonal relationships, often warns of the ease with which “Darwinian buttons” can be pushed. What makes simple heuristics so powerful is precisely what renders them so vulnerable (and it’s no accident that AI is struggling to overcome this issue now): they turn on cues physically correlated to the systems they track. Break those correlations, and those cues are connected to nothing at all, and we enter Crash Space, the kind of catastrophic cognitive ecological failure that warns away everyone but philosophers.

Virtual and Augmented Reality, or even Vegas magic acts, provide excellent visual analogues. Whether one looks at stereoscopic 3-D systems like Oculus Rift, or the much-ballyhooed ‘biomimetics’ of Magic Leap, or the illusions of David Copperfield, the idea is to cue visual environments that do not exist as effectively and as economically as possible. Goertzel and Levesque and others can keep pounding at the gates of general cognition (which may exist, who knows), but research like that of the late Clifford Nass is laying bare the landscape of cues comprising human social cognition, and given the relative resources required, it seems all but inevitable that the ‘taking to be’ approach, designing AIs focused not so much on being a genuine agent (whatever that is) as on cuing the cognition of one, will sweep the field. Why build Disney World when you can project it? Developers will focus on the illusion, which they will refine and refine until the show becomes (Turing?) indistinguishable from the real thing—from the standpoint of consumers.

The differences being, 1) that the illusion will be perspectivally robust (we will have no easy way of seeing through it); and 2) the illusion will be a sociocognitive one. As AI colonizes more and more facets of our lives, our sociocognitive intuitions will become increasingly unreliable. This prediction, I think, is every bit as reliable as the prediction that the world’s ecosystems will be increasingly disrupted as human activity colonizes more and more of the world. Human social cognition turns access to cues into behaviour solving otherwise intractable biological brains—this is a fact. Algorithms are set to flood this space, to begin cuing social cognition to solve biological brains in the absence of any biological brains. Neil Lawrence likens the consequences to the creation of ‘System Zero,’ an artificial substratum for the System 1 (automatic, unconscious) and System 2 (deliberate, conscious) organization of human cognition. He writes:

“System Zero will come to understand us so fully because we expose to it our innermost thoughts and whims. System Zero will exploit massive interconnection. System Zero will be data rich. And just like an elephant, System Zero will never forget.”

Even as we continue attempting to solve System Zero with the systems we evolved to solve one another—a task which is going to remain as difficult as it always has been, and will likely grow less attractive as fantasy surrogates become increasingly available. Talk about Systems over Subjects! The ecology of human meaning, the shared background allowing us to resolve conflict and to trust, will be progressively exploited and degraded—like every other ancestral ecology on this planet. When I wax grandiloquent (I am a crazy fantasy writer after all), I call this the semantic apocalypse.

I see no way out. Everyone thinks otherwise, but only because the way that human cognition neglects cognitive ecology generates the illusion of unlimited, unconstrained cognitive capacity. And this, I think, is precisely the illusion informing Ito and Howe’s theory of human nature…

Speaking of which, as I said, I found myself wondering what this theory might be as I read the book. I understood I wasn’t the target audience of the book, so I didn’t see the theory’s absence as a failing so much as an unfortunate omission for readers like me, always angling for the hard questions. And so it niggled and niggled, until finally, I reached the last paragraph of the last page and encountered this:

“Human beings are fundamentally adaptable. We created a society that was more focussed on our productivity than our adaptability. These principles will help you prepare to be flexible and able to learn the new roles and to discard them when they don’t work anymore. If society can survive the initial whiplash when we trade our running shoes for a supersonic jet, we may yet find that the view from the jet is just what we’ve been looking for.” 250

This first claim, uplifting as it sounds, is simply not true. Human beings, considered individually or collectively, are not capable of adapting to any circumstance. Intuitions systematically misfire all the time. I appreciate how believing as much balms the conscience of those in the innovation game, but it is simply not true. And how could it be, when it entails that humans somehow transcend ecology, which is a far different claim than saying humans, relative to other organisms, are capable of spanning a wide variety of ecologies. So long as human cognition is heuristic it depends on environmental invariances, like everything else biological. Humans are not capable of transcending system, which is precisely why we need to think the human in systematic terms, and to look at the impact of AI ecologically.

What makes Whiplash such a valuable book (aside from the entertainment factor) is that it is ecologically savvy. Ito and Howe’s dominant metaphor is that of adaptation and ecology. The old business habitat, they argue, has collapsed, leaving old business animals in the ecological lurch. The solution they offer is heuristic, a set of maxims meant to transform (at a sub-ideological level no less!) old business animals into newer, more adaptable ones. The way to solve the problem of innovation uncertainty is to contribute to that problem in the right way—be more innovative. But they fail to consider the ecological dimensions of this imperative, to see how feeding acceleration amounts to the inevitable destruction of cognitive ecologies, how the old meaning habitat is already collapsing, leaving old meaning animals in the ecological lurch, grasping for lies because those, at least, they can recognize.

They fail to see how their local survival guide likely doubles as a global suicide manual.




 

PS: The Big Picture

“In the past twenty-five years,” Ito and Howe write, “we have moved from a world dominated by simple systems to a world beset and baffled by complex systems” (246). This claim caught my attention because it is both true and untrue, depending how you look at it. We are pretty much the most complicated thing we know of in the universe, so it’s certainly not the case that we’ve ever dwelt in a world dominated by simple systems. What Ito and Howe are referring to, of course, is our tools. We are moving from a world dominated by simple tools to a world beset and baffled by complex ones. Since these tools facilitate tool-making, we find the great ratchet that lifted us out of the hominid fog clicking faster and faster and faster.

One of these ‘simple tools’ is what we call a ‘company’ or ‘business,’ an institution itself turning on the systematic application of simple tools, ones that intrinsically value authority over emergence, push over pull, maps over compasses, safety over risk, compliance over disobedience, theory over practice, ability over diversity, strength over resilience, and objects over systems. In the same way the simplicity of our physical implements limited the damage they could do to our physical ecologies, the simplicity of our cognitive tools limited the damage they could do to our cognitive ecology. It’s important to understand that the simplicity of these tools is what underwrites the stability of the underlying cognitive ecology. As the growing complexity and power of our physical tools intensified the damage done to our physical ecologies, the growing complexity and power of our cognitive tools is intensifying the damage done to our cognitive ecologies.

Now, two things. First, this analogy suggests that not all is hopeless, that the same way we can use the complexity and power of our physical tools to manage and prevent the destruction of our physical environment, we should be able to use the complexity and power of our cognitive tools to do the same. I concede the possibility, but I think the illusion of noocentrism (the cognitive version of geocentrism) is simply too profound. I think people will endlessly insist on the freedom to concede their autonomy. System Zero will succeed because it will pander ever so much better than a cranky old philosopher could ever hope to.

Second, notice how this analogy transforms the nature of the problem confronting that old animal, business, in the light of radical ecological change. Ancestral human cognitive ecology possessed a shallow present and a deep future. For all his ignorance, a yeoman chewing his calluses in the field five hundred years ago could predict that his son would possess a life very much resembling his own. All the obsolete items that Ito and Howe consider are artifacts of a shallow present. When the world is a black box, when you have no institutions like science bent on the systematic exploration of solution space, the solutions happened upon are generally lucky ones. You hold onto the tools you trust, because it’s all guesswork otherwise and the consequences are terminal. Authority, Push, Compliance, and so on are all heuristics in their own right, all ways of dealing with supercomplicated systems (bunches of humans), but selected for cognitive ecologies where solutions were both precious and abiding.

Oh, how things have changed. Ambient information sensitivity, the ability to draw on everything from internet search engines, to Big Data, to scientific knowledge more generally, means that businesses have what I referred to earlier as a deep present, a vast amount of information and capacity to utilize in problem solving. This allows them to solve systems as systems (the way science does) and abandon the limitations of not only object thinking, but (and this is the creepy part) subject thinking as well. It allows them to correct for faulty path-dependencies by distributing problem-solving among a diverse array of individuals. It allows them to rationalize other resources as well, to pull what they need when they need it rather than pushing warehoused resources.

Growing ambient information sensitivity means growing problem-solving economy—the problem is that this economy means accelerating cognitive ecological transformation. The cheaper optimization becomes, the more transient it becomes, simply because each and every new optimization transforms, in ways large or small but generally unpredictable, the ecology (the network of correlations) prior heuristic optimizations require to be effective. Call this the Optimization Spiral.

This is the process Ito and Howe are urging the business world to climb aboard, to become what might be called meta-ecological institutions, entities designed in the first instance, not to build cars or to mediate social relations or to find information on the web, but to evolve. As an institutionalized bundle of heuristics, a business’s ability to climb the Optimization Spiral, to survive accelerating ecological change, turns on its ability to relinquish the old while continually mimicking, tinkering, and birthing with the new. Thus the value of disobedience and resilience and practical learning: what Ito and Howe are advocating is more akin to the Precambrian Explosion or the rise of Angiosperms than simply surviving extinction. The meta-heuristics they offer, the new guiding mythologies, are meant to encapsulate the practical bases of evolvability itself… They’re teaching ferns how to grow flowers.

And stepping back to take the systems view they advocate, one cannot but feel an admixture of awe and terror, and wonder if they aren’t sketching the blueprint for an entirely unfathomable order of life, something simultaneously corporate and corporeal.

Real Systems

by rsbakker

THE ORDER WHICH IS THERE

Now I’ve never had any mentors; my path has been too idiosyncratic, for the better, since I think it’s the lack of institutional constraints that has allowed me to experiment the way I have. But if I were pressed to name any spiritual mentor, Daniel Dennett would be the first name to cross my lips—without the least hesitation. Nevertheless, I see the theoretical jewel of his project, the intentional stance, as the last gasp of what will one day, I think, count as one of humanity’s great confusions… and perhaps the final one to succumb to science.

A great many disagree, of course, and because I’ve been told so many times to go back to “Real Patterns” to discover the error of my ways, I’ve decided I would use it to make my critical case.

Defenders of Dennett (including Dennett himself) are so quick to cite “Real Patterns,” I think, because it represents his most sustained attempt to situate his position relative to his fellow philosophical travelers. At issue is the reality of ‘intentional states,’ and how the traditional insistence on some clear-cut binary answer to this question—real/unreal—radically underestimates the ontological complexity characterizing both everyday life and the sciences. What he proposes is “an intermediate doctrine” (29), a way of understanding intentional states as real patterns.

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. My aim on this occasion is not so much to prove that my intermediate doctrine about the reality of psychological states is right, but just that it is quite possibly right, because a parallel doctrine is demonstrably right about some simpler cases. 29

So what does he mean by ‘real patterns’? Dennett begins by considering a diagram with six rows of five black boxes each, characterized by varying degrees of noise, so extreme in some cases as to completely obscure the boxes. He then, following the grain of his characteristic genius, provides a battery of different ways these series might find themselves used.

This crass way of putting things—in terms of betting and getting rich—is simply a vivid way of drawing attention to a real, and far from crass, trade-off that is ubiquitous in nature, and hence in folk psychology. Would we prefer an extremely compact pattern description with a high noise ratio or a less compact pattern description with a lower noise ratio? Our decision may depend on how swiftly and reliably we can discern the simple pattern, how dangerous errors are, how much of our resources we can afford to allocate to detection and calculation. These “design decisions” are typically not left to us to make by individual and deliberate choices; they are incorporated into the design of our sense organs by genetic evolution, and into our culture by cultural evolution. The product of this design evolution process is what Wilfrid Sellars calls our manifest image, and it is composed of folk physics, folk psychology, and the other pattern-making perspectives we have on the buzzing blooming confusion that bombards us with data. The ontology generated by the manifest image has thus a deeply pragmatic source. 36

The moral is straightforward: the kinds of patterns that data sets yield are both perspectival and pragmatic. In each case, the pattern recognized is quite real, but bound upon some potentially idiosyncratic perspective possessing some potentially idiosyncratic needs.
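
Dennett’s trade-off is easy to make concrete. The toy below (my illustration, with a made-up string; nothing here is Dennett’s notation) compares a verbatim description, which predicts every bit, with a compact pattern description, which tolerates noise:

```python
# Describe a noisy string verbatim (long, error-free) or as a compact
# pattern that tolerates noise (short, occasionally wrong).
data = "1111011111101111"                    # a 'bar code' with two noisy bits

verbatim = data                              # 16 symbols, predicts every bit
compact = "16*'1'"                           # 6 symbols, mispredicts 2 of 16
errors = sum(ch != "1" for ch in data)

print(len(verbatim), len(compact), errors)   # 16 6 2
```

Both descriptions track a real pattern; which we prefer depends, just as the passage says, on how reliably the simple pattern can be detected and how dangerous the errors are.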

He then takes this moral to Conway’s Game of Life, a computer program where cells in a grid are switched on or off in successive turns depending on the number of adjacent cells switched on. The marvelous thing about this program lies in the kinds of dynamic complexities arising from this simple template and single rule, subsystems persisting from turn to turn, encountering other subsystems with predictable results. Despite the determinism of this system, patterns emerge that only the design stance seems to adequately capture, a level possessing “its own language, a transparent foreshortening of the tedious descriptions one could give at the physical level” (39).
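
For readers who have never toyed with it, the entire template fits in a few lines of code. A minimal sketch (mine, not Dennett’s) of the standard rules: a live cell survives with two or three live neighbours, a dead cell switches on with exactly three:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells switched on."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'glider', one of the persisting subsystems Dennett describes:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # the same shape, shifted one cell diagonally
```

It is against such persisting shapes, not against the flickering cells, that the design stance’s ‘language’ gets its purchase.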

For Dennett, the fact that one can successfully predict via the design stance clearly demonstrates that it’s picking out real patterns somehow. He asks us to imagine transforming the Game into a supersystem played out on a screen miles wide and using the patterns picked out to design a Turing Machine playing chess against itself. Here, Dennett argues, prediction via the deterministic microphysical picture is either intractable or impracticable, yet we need only take up a chess stance or a computational stance to make, from a naive perspective, stunning predictions as to what will happen next.

And this is of course as true of real life as it is of the Game of Life: “Predicting that someone will duck if you throw a brick at him is easy from the folk-psychological stance; it is and will always be intractable if you have to trace the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42). His supersized Game of Life, in other words, makes plain the power and the limitations of heuristic cognition.

This brings him to his stated aim of clarifying his position vis a vis his confreres and Fodor. As he points out, everyone agrees there’s some kind of underlying “order which is there,” as Anscombe puts it in Intention. The million dollar question, of course, is what this order amounts to:

Fodor and others have claimed that an interior language of thought is the best explanation of the hard edges visible in “propositional attitude psychology.” Churchland and I have offered an alternative explanation of these edges… The process that produces the data of folk psychology, we claim, is one in which the multidimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision, thanks to the discreteness of words. 44-45

So for traditional realists, like Fodor, the structure beliefs evince in reflection and discourse expresses the structure beliefs must possess in the head. For Dennett, on the other hand, the structure beliefs evince in reflection and discourse expresses, among other things, the structure of reflection and discourse. How could it be otherwise, he asks, given the ‘stupendous scale of compression’ (42) involved?

As Haugeland points out in “Pattern and Being,” this saddles Dennett’s account of patterns with a pretty significant ambiguity: if the patterns characteristic of intentional states express the structure of reflection and discourse, then the ‘order which is there’ must be here as well. Of course, this much is implicit in Dennett’s preamble: the salience of certain patterns depends on the perspective we possess on them. But even though this implicit ‘here-there holism’ becomes all but explicit when Dennett turns to Radical Translation and the distinction between his and Davidson’s views, his emphasis nevertheless remains on the order out there. As he writes:

Davidson and I both like Churchland’s alternative idea of propositional-attitude statements as indirect “measurements” of a reality diffused in the behavioral dispositions of the brain (and body). We think beliefs are quite real enough to call real just so long as belief talk measures these complex behavior-disposing organs as predictively as it does. 45-46

Rhetorically (even diagrammatically if one takes Dennett’s illustrations into account), the emphasis is on the order there, while here is merely implied as a kind of enabling condition. Call this the ‘epistemic-ontological ambiguity’ (EOA). On the one hand, it seems to make eminent sense to speak of patterns visible only from certain perspectives and to construe them as something there, independent of any perspective we might take on them. But on the other hand, it seems to make jolly good sense to speak of patterns visible only from certain perspectives and to construe them as around here, as something entirely dependent on the perspective we find ourselves taking. Because of this, it seems pretty fair to ask Dennett which kind of pattern he has in mind here. To speak of beliefs as dispositions diffused in the brain seems to pretty clearly imply the former. To speak of beliefs as low dimensional, communicative projections, on the other hand, seems to clearly imply the latter.

Why this ambiguity? Do the patterns underwriting belief obtain in individual believers, dispositionally diffused as he says, or do they obtain in the communicative conjunction of witnesses and believers? Dennett promised to give us ‘parallel examples’ warranting his ‘intermediate realism,’ but by simply asking the whereabouts of the patterns, whether we will find them primarily out there as opposed to around here, we quickly realize his examples merely recapitulate the issue they were supposed to resolve.

 

THE ORDER AROUND HERE

Welcome to crash space. If I’m right then you presently find yourself strolling through a cognitive illusion generated by the application of heuristic capacities outside their effective problem ecology.

Think of how curious the EOA is. The familiarity of it should be nothing short of gobsmacking: here, once again we find ourselves stymied by the same old dichotomies: here versus there, inside versus outside, knowing versus known. Here, once again we find ourselves trapped in the orbit of the great blindspot that still, after thousands of years, stumps the wise of the world.

What the hell could be going on?

Think of the challenge facing our ancestors attempting to cognize their environmental relationships for the purposes of communication and deliberate problem-solving. The industrial scale of our ongoing attempt to understand as much demonstrates the intractability of that relationship. Apart from our brute causal interactions, our ability to cognize our cognitive relationships is source insensitive through and through. When a brick is thrown at us, “the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42) all go without saying. In other words, the whole system enabling cognition of the brick throwing is neglected, and only information relevant to ancestral problem-solving—in this case, brick throwing—finds its way to conscious broadcast.

In ancestral cognitive ecologies, our high-dimensional (physical) continuity with nature mattered as much as it matters now, but it quite simply did not exist for them. They belonged to any number of natural circuits across any number of scales, and all they had to go on was the information that mattered (disposed them to repeat and optimize behaviours) given the resources they possessed. Just as Dennett argues, human cognition is heuristic through and through. We have no way of cognizing our position within any number of the superordinate systems science has revealed in nature, so we have to make do with hacks, subsystems allowing us to communicate and troubleshoot our relation to the environment while remaining almost entirely blind to it. About talk belongs to just such a subsystem, a kluge communicating and troubleshooting our relation to our environments absent cognition of our position in larger systems. As I like to say, we’re natural in such a way as to be incapable of cognizing ourselves as natural.

About talk facilitates cognition and communication of our worldly relation absent any access to the physical details of that relation. And as it turns out, we are that occluded relation’s most complicated component—we are the primary thing neglected in applications of about talk. As the thing most neglected, we are the thing most presumed, the invariant background guaranteeing the reliability of about talk (this is why homuncular arguments are so empty). This combination of cognitive insensitivity to and functional dependence upon the machinations of cognition (what I sometimes refer to as medial neglect) suggests that about talk would be ideally suited to communicating and troubleshooting functionally independent systems, processes generally insensitive to our attempts to cognize them. This is because the details of cognition make no difference to the details cognized: the automatic distinction about talk poses between cognizing system and the system cognized poses no impediment to understanding functionally independent systems. As a result, we should expect about talk to be relatively unproblematic when it comes to communicating and troubleshooting things ‘out there.’

Conversely, we should expect about talk to generate problems when it comes to communicating and troubleshooting functionally dependent systems, processes somehow sensitive to our attempts to cognize them. Consider ‘observer effects,’ the problem researchers themselves pose when their presence or their tools/techniques interfere with the process they are attempting to study. Given medial neglect, the researchers themselves always constitute a black box. In the case of systems functionally sensitive to the activity of cognition, as is often the case in psychology and particle physics, understanding the system requires we somehow obviate our impact on the system. As the interactive, behavioural components of cognition show, we are in fact quite good (though far from perfect) at inserting and subtracting our interventions in processes. But since we remain a black box, since our position in the superordinate systems formed by our investigations remains occluded, our inability to extricate ourselves, to gerrymander functional independence, say, undermines cognition.

Even if we necessarily neglect our positions in superordinate systems, we need some way of managing the resulting vulnerabilities, to appreciate that patterns may be artifacts of our position. This suggests one reason, at least, for the affinity of mechanical cognition and ‘reality.’ The more our black box functions impact the system to be cognized, the less cognizable that system becomes in source sensitive terms. We become an inescapable source of noise. Thus our intuitive appreciation of the need for ‘perspective,’ to ‘rise above the fray’: The degree to which a cognitive mode preserves (via gerrymandering if not outright passivity) the functional independence of a system is the degree to which that cognitive mode enables reliable source sensitive cognition is the degree to which about talk can be effectively applied.

The deeper our entanglements, on the other hand, the more we need to rely on source insensitive modes of cognition to cognize target systems. Even if our impact renders the isolation of source signals impossible, our entanglement remains nonetheless systematic, meaning that any number of cues correlated in any number of ways to the target system can be isolated (which is really all ‘radical translation’ amounts to). Given that metacognition is functionally entangled by definition, it becomes easy to see why the theoretical question of cognition causes about talk to crash in the spectacular ways it does: our ability to neglect the machinations of cognition (the ‘order which is here’) is a boundary condition for the effective application of ‘orders which are there’—or seeing things as real. Systems adapted to work around the intractability of our cognitive nature find themselves compulsively applied to the problem of our cognitive nature. We end up creating a bestiary of sourceless things, things that, thanks to the misapplication of the aboutness heuristic, have to belong to some ‘order out there,’ and yet cannot be sourced like anything else out there… as if they were unreal.

The question of reality cues the application of about talk, our source insensitive means of communicating and troubleshooting our cognitive relation to the world. For our ancient ancestors, who lacked the means to distinguish between source sensitive and source insensitive modes of cognition, asking, ‘Are beliefs real?’ would have sounded insane. HNT, in fact, provides a straightforward explanation for what might be called our ‘default dogmatism,’ our reflex for naive realism: not only do we lack any sensitivity to the mechanics of cognition, we lack any sensitivity to this insensitivity. This generates the persistent illusion of sufficiency, the assumption (regularly observed in different psychological phenomena) that the information provided is all the information there is.

Cognition of cognitive insufficiency always requires more resources, more information. Sufficiency is the default. This is what makes the novel application of some potentially ‘good trick,’ as Dennett would say, such tricky business. Consider philosophy. At some point, human culture acquired the trick of recruiting existing metacognitive capacities to explain the visible in terms of the invisible in unprecedented (theoretical) ways. Since those metacognitive capacities are radically heuristic, specialized consumers of select information, we can suppose retasking those capacities to solve novel problems—as philosophers do when they, for instance, ‘ponder the nature of knowledge’—would run afoul some pretty profound problems. Even if those specialized metacognitive consumers possessed the capacity to signal cognitive insufficiency, we can be certain the insufficiency flagged would be relative to some adaptive problem-ecology. Blind to the heuristic structure of cognition, the first philosophers took the sufficiency of their applications for granted, much as very many do now, despite the millennia of prior failure.

Philosophy inherited our cognitive innocence and transformed it, I would argue, into a morass of competing cognitive fantasies. But if it failed to grasp the heuristic nature of much cognition, it did allow, as if by delayed exposure, a wide variety of distinctions to blacken the photographic plate of philosophical reflection—those between is and ought and between fact and value among them. The question, ‘Are beliefs real?’ became more a bona fide challenge than a declaration of insanity. Given insensitivity to the source insensitive nature of belief talk, however, the nature of the problem entirely escaped them. Since the question of reality cues the application of about talk, source insensitive modes of cognition struck them as the only game in town. Merely posing the question springs the trap (for as Dennett says, selecting cues is “typically not left to us to make by individual and deliberate choices” (36)). And so they found themselves attempting to solve the hidden nature of cognition via the application of devices adapted to ignore hidden natures.

Dennett runs into the epistemic-ontological ambiguity because the question of the reality of intentional states cues the about heuristic out of school, cedes the debate to systems dedicated to gerrymandering solutions absent high-dimensional information regarding our cognitive predicament—our position within superordinate systems. Either beliefs are out there, real, or they’re in here, merely, an enabling figment of some kind. And as it turns out, intentional systems theory (IST) is entirely amenable to this misapplication, in that ‘taking the intentional stance’ involves cuing the about heuristic, thus neglecting our high-dimensional cognitive predicament. On Dennett’s view, recall, an intentional system is any system that can be predicted/explained/manipulated via the intentional stance. Though the hidden patterns can only be recognized from the proper perspective, they are there nonetheless, enough, Dennett thinks, to concede them reality as intentional systems.

Heuristic Neglect Theory allows us to see how this amounts to mistaking a CPU for a PC. On HNT, the trick is to never let the superordinate systems enabling and necessitating intentional cognition out of view. Recall the example of the gaze heuristic from my prior post, how fielders essentially insert—functionally entangle—themselves into the pop fly system to let the ball itself guide them in. The same applies to beliefs. When your tech repairs your computer, you have no access to her personal history, the way thousands of hours have knapped her trouble-shooting capacities, and even less access to her evolutionary history, the way continual exposure to problematic environments has sculpted her biological problem-solving capacities. You have no access, in other words, to the vast systems of quite natural relata enabling her repair. The source sensitive story is unavailable, so you call her ‘knowledgeable’ instead; you presume she possesses something—a fetish, in effect—possessing the sourceless efficacy explaining her almost miraculous ability to make your PC run: a mass of true beliefs (representations), regarding personal computer repair. You opt for a source insensitive means that correlates with her capacities well enough to neglect the high-dimensional facts—the natural and personal histories—underwriting her ability.

So then where does the ‘real pattern’ gainsaying the reality of belief lie? The realist would say in the tech herself. This is certainly what our (heuristic) intuitions tell us in the first instance. But as we saw above, squaring sourceless entities in a world where most everything has a source is no easy task. The instrumentalist would say in your practices. This certainly lets us explain away some of the peculiarities crashing our realist intuitions, but at the cost of other, equally perplexing problems (this is crash space, after all). As one might expect, substituting the use heuristic for the about heuristic merely passes the hot potato of source insensitivity. ‘Pragmatic functions’ are no less difficult to square with the high-dimensional than beliefs.

But it should be clear by now that the simple act of pairing beliefs with patterns amounts to jumping the same ancient shark. The question, ‘Are beliefs real?’ was a no-brainer for our preliterate ancestors simply because they lived in a seamless shallow information cognitive ecology. Outside their local physics, the sources of things eluded them altogether. ‘Of course beliefs are real!’ The question was a challenge for our philosophical ancestors because they lived in a fractured shallow information ecology. They could see enough between the cracks to appreciate the potential extent and troubling implications of mechanical cognition, its penchant to crash our shallow (ancestral) intuitions. ‘It has to be real!’

With Dennett, entire expanses of our shallow information ecology have been laid low and we get, ‘It’s as real as it needs to be.’ He understands the power of the about heuristic, how ‘order out there’ thinking effects any number of communicative solutions—thus his rebuttal of Rorty. He understands, likewise, the power of the use heuristic, how ‘order around here’ thinking effects any number of communicative solutions—thus his rebuttal of Fodor. And most importantly, he understands the error of assuming the universal applicability of either. And so he concludes:

Now, once again, is the view I am defending here a sort of instrumentalism or a sort of realism? I think that the view itself is clearer than either of the labels, so I shall leave that question to anyone who stills find [sic] illumination in them. 51

What he doesn’t understand is how it all fits together—and how could he, when IST strands him with an intentional theorization of intentional cognition, a homuncular or black box understanding of our contemporary cognitive predicament? This is why “Real Patterns” both begins and ends with EOA, why we are no closer to understanding why such ambiguity obtains at all. How are we supposed to understand how his position falls between the ‘ontological dichotomy’ of realism and instrumentalism when we have no account of this dichotomy in the first place? Why the peculiar ‘bi-stable’ structure? Why the incompatibility between them? How can the same subject matter evince both? Why does each seem to inferentially beg the other?

 

THE ORDER

The fact is, Dennett was entirely right to eschew outright realism or outright instrumentalism. This hunch of his, like so many others, was downright prescient. But the intentional stance only allows him to swap between perspectives. As a one-time adherent I know first-hand the theoretical versatility IST provides, but the problem is that explanation is what is required here.

HNT argues that simply interrogating the high-dimensional reality of belief, the degree to which it exists out there, covers over the very real system—the cognitive ecology—explaining the nature of belief talk. Once again, our ancestors needed some way of communicating their cognitive relations absent source-sensitive information regarding those relations. The homunculus is a black box precisely because it cannot source its own functions, merely track their consequences. The peculiar ‘here dim’ versus ‘there bright’ character of naive ontological or dogmatic cognition is a function of medial neglect, our gross insensitivity to the structure and dynamics of our cognitive capacities. Epistemic or instrumental cognition comes with learning from the untoward consequences of naive ontological cognition—the inevitable breakdowns. Emerging from our ancestral, shallow information ecologies, the world was an ‘order there’ world simply because humanity lacked the ability to discriminate the impact of ‘around here.’ The discrimination of cognitive complexity begets intuitions of cognitive activity, undermines our default ‘out there’ intuitions. But since ‘order there’ is the default and ‘around here’ the cognitive achievement, we find ourselves in the peculiar position of apparently presuming ‘order there’ when making ‘around here’ claims. Since ‘order there’ intuitions remain effective when applied in their adaptive problem-ecologies, we find speculation splitting along ‘realist’ versus ‘anti-realist’ lines. Because no one has any inkling of any of this, we find ourselves flipping back and forth between these poles, taking versions of the same obvious steps to tread the same ancient circles. Every application is occluded, and so ‘transparent,’ as well as an activity possessing consequences.

Thus EOA… as well as an endless parade of philosophical chimera.

Isn’t this the real mystery of “Real Patterns,” the question of how and why philosophers find themselves trapped on this rickety old teeter-totter? “It is amusing to note,” Dennett writes, “that my analogizing beliefs to centers of gravity has been attacked from both sides of the ontological dichotomy, by philosophers who think it is simply obvious that centers of gravity are useful fictions, and by philosophers who think it is simply obvious that centers of gravity are perfectly real” (27). Well, perhaps not so amusing: Short of solving this mystery, Dennett has no way of finding the magic middle he seeks in this article—the middle of what? IST merely provides him with the means to recapitulate EOA and gesture to the possibility of some middle, a way to conceive all these issues that doesn’t deliver us to more of the same. His instincts, I think, were on the money, but his theoretical resources could not take him where he wanted to go, which is why, from the standpoint of his critics, he just seems to want to have it both ways.

On HNT we can see, quite clearly, I think, the problem with the question, ‘Are beliefs real?’ absent an adequate account of the relevant cognitive ecology. The bitter pill lies in understanding that the application conditions of ‘real’ have real limits. Dennett provides examples where those application conditions pretty clearly seem to obtain, then suggests more than argues that these examples are ‘parallel’ in all the structurally relevant respects to the situation with belief. But to distinguish his brand from Fodor’s ‘industrial strength’ realism, he has no choice but to ‘go instrumental’ in some respect, thus exposing the ambiguity falling out of IST.

It’s safe to say belief talk is real. It seems safe to say that beliefs are ‘real enough’ for the purposes of practical problem-solving—that is, for shallow (or source insensitive) cognitive ecologies. But it also seems safe to say that beliefs are not real at all when it comes to solving high-dimensional cognitive ecologies. The degree to which scientific inquiry is committed to finding the deepest (as opposed to the most expedient) account should be the degree to which it views belief talk as a component of real systems and views ‘belief’ as a source insensitive posit, a way to communicate and troubleshoot both oneself and one’s fellows.

This is crash space, so I appreciate the kinds of counter-intuitiveness involved in this view I’m advancing. But since tramping intuitive tracks has hitherto only served to entrench our controversies and confusions, we have good reason to choose explanatory power over intuitive appeal. We should expect synthesis in the cognitive sciences will prove every bit as alienating to traditional presumption as it was in biology. There’s more than a little conceit involved in thinking we had any special inside track on our own nature. In fact, it would be a miracle if humanity had not found itself in some version of this very dilemma. Given only source insensitive means to troubleshoot cognition, to understand ourselves and each other, we were all but doomed to be stumped by the flood of source sensitive cognition unleashed by science. (In fact, given some degree of interstellar evolutionary convergence, I think one can wager that extraterrestrial intelligences will have suffered their own source insensitive versus source sensitive cognitive crash spaces. See my “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming).)

IST brings us to the deflationary limit of intentional philosophy. HNT offers a way to ratchet ourselves beyond, a form of critical eliminativism that can actually explain, as opposed to simply dispute, the traditional claims of intentionality. Dennett, of course, reserves his final criticism for eliminativism, perhaps because so many critics see it as the upshot of his interpretivism. He acknowledges the possibility that “neuroscience will eventually—perhaps even soon—discover a pattern that is so clearly superior to the noisy pattern of folk psychology that everyone will readily abandon the former for the latter” (50), but he thinks it unlikely:

For it is not enough for Churchland to suppose that in principle, neuroscientific levels of description will explain more of the variance, predict more of the “noise” that bedevils higher levels. This is, of course, bound to be true in the limit—if we descend all the way to the neurophysiological “bit map.” But as we have seen, the trade-off between ease of use and immunity from error for such a cumbersome system may make it profoundly unattractive. If the “pattern” is scarcely an improvement over the bit map, talk of eliminative materialism will fall on deaf ears—just as it does when radical eliminativists urge us to abandon our ontological commitments to tables and chairs. A truly general-purpose, robust system of pattern description more valuable than the intentional stance is not an impossibility, but anyone who wants to bet on it might care to talk to me about the odds they will take. 51

The elimination of theoretical intentional idiom requires, Dennett correctly points out, some other kind of idiom. Given the operationalization of intentional idioms across a wide variety of research contexts, they are not about to be abandoned anytime soon, and not at all if the eliminativist has nothing to offer in their stead. The challenge faced by the eliminativist, Dennett recognizes, is primarily abductive. If you want to race at psychological tracks, you either enter intentional horses or something that can run as fast or faster. He thinks this unlikely because he thinks no causally consilient (source sensitive) theory can hope to rival the combination of power and generality provided by the intentional stance. Why might this be? Here he alludes to ‘levels,’ suggesting that any causally consilient account would remain trapped at the microphysical level, and so remain hopelessly cumbersome. But elsewhere, as in his discussion of ‘creeping depersonalization’ in “Mechanism and Responsibility,” he readily acknowledges our ability to treat one another as machines.

And again, we see how the limited resources of IST have backed him into a philosophical corner—and a traditional one at that. On HNT, his claim amounts to saying that no source sensitive theory can hope to supplant the bundle of source insensitive modes comprising intentional cognition. On HNT, in other words, we already find ourselves on the ‘level’ of intentional explanation, already find ourselves with a theory possessing the combination of power and generality required to eliminate a particle of intentional theorization: namely, the intentional stance. A way to depersonalize cognitive science.

Because IST primarily provides a versatile way to deploy and manage intentionality in theoretical contexts rather than any understanding of its nature, the disanalogy between ‘center of gravity’ and ‘beliefs’ remains invisible. In each case you seem to have an entity that resists any clear relation to the order which is there, and yet finds itself regularly and usefully employed in legitimate scientific contexts. Our brains are basically short-cut machines, so it should come as no surprise that we find heuristics everywhere, in perception as much as cognition (insofar as they are distinct). It also should come as no surprise that they comprise a bestiary, as with most all things biological. Dennett is comparing heuristic apples and oranges here. Centers of gravity are easily anchored to the order which is there because they economize otherwise available information. They can be sourced. Such is not the case with beliefs, belonging as they do to a system gerrymandering for the want of information.
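
The disanalogy is easy to see in code. A center of gravity comes with an explicit recovery rule, a fixed function from the high-dimensional distribution to the low-dimensional posit (toy numbers, my illustration); nothing comparable takes you from brains to beliefs:

```python
# The low-dimensional posit is recoverable, by a fixed rule, from the
# high-dimensional mass distribution it economizes.
masses = [(2.0, (0.0, 0.0)),     # (mass in kg, position (x, y) in m)
          (1.0, (3.0, 0.0)),
          (1.0, (0.0, 4.0))]

total = sum(m for m, _ in masses)
cog = tuple(sum(m * r[i] for m, r in masses) / total for i in range(2))
print(cog)                       # (0.75, 1.0): one posit, fully sourced
```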

So what is the ultimate picture offered here? What could reality amount to outside our heuristic regimes? Hard to think, as it damn well should be. Our species’ history posed no evolutionary challenges requiring the ability to intuitively grasp the facts of our cognitive predicament. It gave us a lot of idiosyncratic tools to solve high impact practical problems, and as a result, Homo sapiens fell through the sieve in such a way as to be dumbfounded when it began experimenting in earnest with its interrogative capacities. We stumbled across a good number of tools along the way, to be certain, but we remain just as profoundly stumped about ourselves. On HNT, the ‘big picture view’ is crash space, in ways perhaps similar to the subatomic, a domain where our biologically parochial capacities actually interfere with our ability to understand. But it offers a way of understanding the structure and dynamics of intentional cognition in source sensitive terms, and in so doing, explains why crashing our ancestral cognitive modes was inevitable. Just consider the way ‘outside heuristic regimes’ suggests something ‘noumenal,’ some uber-reality lost at the instant of transcendental application. The degree to which this answer strikes you as natural or ‘obvious’ is the degree to which you have been conditioned to apply that very regime out of school. With HNT we can demand that those who want to stuff us into this or that intellectual Klein bottle define their application conditions and convince us this isn’t just more crash space mischief.

It’s trivial to say some information isn’t available, so why not leave well enough alone? Perhaps the time has come to abandon the old, granular dichotomies and speak in terms of dimensions of information available and cognitive capacities possessed. Imagine that.

Moving on.

Dennett’s Black Boxes (Or, Meaning Naturalized)

by rsbakker

“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations.”

—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”

So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis a vis one another.

Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are apparent only.

Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from the one supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
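
Though nothing hangs on the implementation, the fable’s wiring is simple enough to model. A toy rendering (the statements, the shared database, and the function names below are my stand-ins, not Dennett’s):

```python
# Box A transmits a statement it holds true (alpha) or false (beta);
# box B, stocked with the same facts, lights red for agreement, green
# for disagreement, and yellow for signals it cannot parse.
FACTS = {"snow is white": True, "grass is purple": False}

def box_a(button):
    """Transmit a statement A's database marks true (alpha) or false (beta)."""
    wanted = (button == "alpha")
    return next(s for s, v in FACTS.items() if v is wanted)

def box_b(signal):
    """Light a bulb according to B's own verdict on the incoming signal."""
    if signal not in FACTS:
        return "yellow"              # a tampered, unparseable transmission
    return "red" if FACTS[signal] else "green"

print(box_b(box_a("alpha")))         # red: a truth crossed the wire
print(box_b(box_a("beta")))          # green: a falsehood crossed the wire
print(box_b("researcher noise"))     # yellow: the researchers' substitution
```

Note that nothing in box_b’s mechanics announces that ‘red’ tracks truth; that gloss only arrives with the hackers.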

The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account nevertheless fails to explain the significance of what is going on.

Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.

Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings are behind the communication the whole nature of the research enterprise is radically transformed, much as it is when they show up to explain their ‘philosophical toy.’

This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.

The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders fix the ball in their visual field while running to keep the ball in place. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
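
A toy simulation (mine, with made-up numbers) shows the economy involved: the fielder below never computes a trajectory; she locks the ball at its initial elevation angle in her visual field and moves only so as to keep that angle fixed, and the ball guides her in:

```python
import math

dt, g = 0.05, 9.81
bx, by, vx, vy = 0.0, 25.0, 10.0, 0.0   # a high pop fly, taken at its apex
f = 20.0                                 # fielder's position on the ground (m)
theta = math.atan2(by, f - bx)           # gaze elevation at lock-on

while by > 0.0:
    bx, by = bx + vx * dt, by + vy * dt  # ballistic flight of the ball
    vy -= g * dt
    if by > 0.0:
        f = bx + by / math.tan(theta)    # step just enough to hold the angle

print(round(bx, 1), round(f, 1))         # fielder ends within reach of the ball
```

One cue, systematically correlated to the system requiring solution, does the work that modelling the system would otherwise demand.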

This is why Al and Bo’s revelation has the effect of mooting most all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.

The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.

Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.

“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.

What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis a vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.

Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:

“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357

The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensable to a certain crucial ‘high level’ kind of causal explanation. He continues:

“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357

Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.

With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:

“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358

Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.

Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers can only take us so far in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we’re articulating a theory that can only take us so far in certain circumstances. If we confuse that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to confuse heuristic limits for general ones—to run afoul the ‘only-game-in-town-effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).

Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).

Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.

Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see “On the Interpretation of Artificial Souls”). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit absent the global theoretical framework provided here.)

Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.

Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? Certainly seems that way, given how we cobble the two modes together in both science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.

The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.

This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarce had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).

The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’—information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.

In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.

And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.

Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.

But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.

Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.

What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.

Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The amber light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two lights and given the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.
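A toy rendition of that asymmetry (my own illustrative sketch, not Dennett’s actual apparatus): one ‘box’ emits encoded arithmetic claims, the other decodes and checks them, lighting red for true and green for false. Intercepted as a raw bitstream, the signal is effectively noise; handed the communicative cue, the encoding, the lights become trivially predictable.

```python
import random

random.seed(1)

def box_a():
    """Emit an encoded claim, true roughly half the time."""
    a, b = random.randint(0, 9), random.randint(0, 9)
    total = a + b if random.random() < 0.5 else a + b + random.randint(1, 3)
    claim = f"{a}+{b}={total}"
    return "".join(f"{byte:08b}" for byte in claim.encode("ascii"))

def box_b(bits):
    """Decode the claim and check it: red for true, green for false."""
    chars = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    lhs, rhs = chars.decode("ascii").split("=")
    x, y = lhs.split("+")
    return "red" if int(x) + int(y) == int(rhs) else "green"

for _ in range(5):
    bits = box_a()
    # To a source sensitive observer the bitstream is just physics;
    # given the encoding, the light is a one-line lookup.
    print(bits[:24] + "...", "->", box_b(bits))
```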

This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.

The degree to which meaning can be genuinely naturalized.

We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding how they work, where they work, and the ways they go wrong.
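To give a sense of what ‘isolating devices’ looks like in practice, here is a minimal sketch of one well-studied tool from Gigerenzer’s toolbox, the take-the-best heuristic (the cue names and validities below are illustrative, not empirical): compare two options one cue at a time, in descending order of cue validity, and decide on the first cue that discriminates, ignoring everything else.

```python
def take_the_best(option_a, option_b, cues):
    """Decide between two options using cues ordered by validity.

    cues: list of (cue_name, validity) sorted by validity, descending.
    option_a/option_b: dicts mapping cue_name -> 1, 0, or None (unknown).
    """
    for name, _validity in cues:
        a, b = option_a.get(name), option_b.get(name)
        if a == b or a is None or b is None:
            continue            # cue does not discriminate; try the next
        return "a" if a > b else "b"
    return "guess"              # no cue discriminates

# Gigerenzer's classic task: which of two cities is larger?
# Binary cues, ordered by (made-up) validity.
cues = [("has_major_airport", 0.9), ("is_capital", 0.8), ("has_university", 0.7)]
city_a = {"has_major_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 1, "is_capital": 1, "has_university": 1}

print(take_the_best(city_a, city_b, cues))  # 'b': decided on the second cue
```

The virtue of such frugality is precisely its ecological specificity: where cue validities track the environment, take-the-best can rival full regression at a fraction of the cost; where they don’t, it fails, and fails invisibly.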

The Zuckerberg Illusion

by rsbakker


So the special issue of Wired Magazine edited by Barack Obama has just come out, and I wanted to draw attention to Mark Zuckerberg’s response to the President’s challenge to “ensure that artificial intelligence helps rather than hurts us.” Somehow, someway, this issue has to move away from the ‘superintelligence’ debate and toward a collective conversation on the impact of AI on human cognitive ecology. Zuckerberg’s response betrays a tragic lack of understanding from the man who, arguably, has already transformed our social cognitive ecologies more radically than any other individual in the history of the human race. If anyone knows some way of delivering this message from steerage up to the bridge, forward the bloody thing, because the combination of this naivete with the growing ubiquity of AI is becoming, ahem, a little scary. The more baked-in the existing trends become, the harder the hard decisions will become.

Zuckerberg begins his response to Obama’s challenge sounding very much like a typical American industrialist: only the peculiarity of his product makes his claim remarkable.

“People have always used technology as a lever to improve lives and increase productivity. But at the beginning of every cycle of invention, there’s a temptation to focus on the risks that come with a new technology instead of the benefits it will bring.

Today we are seeing that happen with artificial intelligence.”

What he wants to do in this short piece is allay the fears that have arisen regarding AI. His strategy for doing so is to show how our anxieties are the same overblown anxieties that always occasion the introduction of some new technology. These too, he assures us, will pass in time. Ultimately, he writes:

“When people come up with doomsday scenarios about AI, it’s important to remember that these are hypothetical. There’s little basis outside science fiction to believe they will come true.”

Of course, one need only swap out ‘AI’ with ‘industrialization’ to appreciate that not all ‘doomsday scenarios’ are equal. By any comparison, the Anthropocene already counts as one of the great extinction events to befall the planet, an accomplished ‘doomsday’ for numerous different species, and an ongoing one for many others. The reason for this ongoing extinction has to do with the supercomplicated systems of interdependency comprising our environments. Everything is adapted to everything else. Like pouring sand into a gas tank, introducing unprecedented substances and behaviours (such as farming) into existing ecologies progressively perturbs these systems, until eventually they collapse, often taking down other systems depending on them.

Malthus, in the 18th century, was the first on record to predict the possibility of natural environmental collapse, but the environmental movement only really got underway as the consequences of industrialization became evident in the 19th century. The term pollution, which during the Middle Ages meant defilement, took on its present meaning as “unnatural substance in natural systems” at the turn of the 20th century.

Which raises the question: Why were our ancestors so long in seeing the peril presented by industrialization? Well, for one, the systems comprising ecologies are all, in some way or another, survivors of prior ecological collapses. Ecologies are themselves adaptive systems, exhibiting remarkable resilience in many cases—until they don’t. The supercomplicated networks of interdependence constituting environments only became obvious to our forebears when they began really breaking down. Once one understands the ecological dimension of natural environments, the potentially deleterious impact of ecologically unprecedented behaviours and materials becomes obvious. If the environmental accumulation of industrial by-products constitutes an accelerating trend, then far from a science fiction premise, the prospect of accelerating ecological degradation becomes a near certainty, and the management of ecological consequences an absolute necessity.

Which raises a different, less obvious question: Why would these networks of ecological interdependence only become visible to our ancestors after they began breaking down? Why should humans initially atomize their environments, and only develop complex, relational schemes after long, hard experience? The answer lies in the ecological nature of human cognition, the fact that we evolved to take as much ‘for granted’ as possible. The sheer complexity of the deep connectivity underwriting our surrounding environments renders them computationally intractable, and thus utterly invisible to us. (This is why the ecology question probably seemed like such an odd thing to ask: it quite literally goes without saying that we had to discover natural ecology.) So cognition exploits the systematic correlations between what information is available and the systems requiring solution to derive ecologically effective behaviours. The human penchant for atomizing and essentializing environments enables us to cognize ecology despite remaining blind to it.

What does any of this have to do with Zuckerberg’s optimistic argument for plowing more resources into the development of AI? Well, because I think it’s pretty clear he’s labouring under the very same illusion as the early industrialists, the illusion of acting in a grand, vacant arena, a place where unintended consequences magically dissipate instead of radiate.

The question, recall, is whether doomsday scenarios about AI warrant widespread alarm. It seems pretty clear, and I’m sure Zuckerberg would agree, that doomsday scenarios about industrialization do warrant widespread alarm. So what if what Zuckerberg and everyone else is calling ‘AI’ actually constitutes a form of cognitive industrialization? What will be the cognitive ecological impact of such an event?

We know that human cognition is thoroughly heuristic, so we know that human cognition is thoroughly ecological. The reason Sherry Turkle and Deirdre Barrett and others worry about the ease with which human social cognition can be hacked turns on the fact that human social cognition is ecological through and through, dependent on stable networks of interdependence. The fact is human sociocognition evolved to cope with other human intelligences, to solve on the basis of cues systematically correlated to other human brains, not to supercomputers mining vast data sets. Take our love of flattery. We evolved in ecologies where our love for flattery is balanced against the inevitability of criticism. Ancestrally, pursuing flattery amounts to overcoming—i.e., answering—criticism. We generally hate criticism, but given our cognitive ecology, we had no choice but ‘to take our medicine.’

And this is but one of countless examples.

The irony is that Zuckerberg is deeply invested in researching human cognitive ecology: computer scientists (like Hector Levesque) can rail against ‘bag of tricks’ approaches to cognition, but they will continue to be pursued because behaviour cuing behaviour is all that’s required (for humans or machines, I think). Now Zuckerberg, I’m sure, sees himself exclusively in the business of providing value for consumers, but he needs to understand how his dedication to enable and delight automatically doubles as a ruthless quest to demolish human cognitive ecology. Rewriting environments ‘to make the user experience more enjoyable’ is the foundation of all industrial enterprise, all ecological destruction, and the AI onslaught is nothing if not industrial.

Deploying systems designed to cue human social cognition in the absence of humans is pretty clearly a form of deception. Soon, every corporate website will be a friend… soulful, sympathetic, utterly devoted to our satisfaction, as well as inhuman, designed to exploit, and knowing us better than any human could hope to, including ourselves. And as these inhuman friends become cheaper and cheaper, we will be deluged by them, ‘junk intelligences,’ each of them so much wittier, so much wiser, than any mundane human can hope to appear.

“At a very basic level, I think AI is good and not something we should be afraid of,” Zuckerberg concludes. “We’re already seeing examples of how AI can unlock value and improve the world. If we can choose hope over fear—and if we advance the fundamental science behind AI—then this is only the beginning.”

Indeed.


Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker


If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses: people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed. The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.
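A toy model of that last displacement claim (my own illustration, with made-up parameters): give the wide-ingroup variant a modest growth edge from cooperation and watch population shares shift, no individual conversion required.

```python
# Toy cultural selection model (illustrative parameters, not empirical):
# two cultural variants differ only in how widely they draw ingroup
# boundaries, and wider cooperation confers a growth advantage.

def wide_share_after(generations, share=0.1, growth_wide=1.05, growth_narrow=1.03):
    """Population share of the wide-ingroup variant over time."""
    wide, narrow = share, 1.0 - share
    for _ in range(generations):
        wide *= growth_wide        # the cooperation dividend
        narrow *= growth_narrow
        total = wide + narrow
        wide, narrow = wide / total, narrow / total
    return wide

for g in (0, 50, 100, 200):
    print(f"after {g:3d} generations: {wide_share_after(g):.2f}")
# A roughly two-point growth edge suffices: the wide-ingroup variant
# climbs from a 10% minority toward a large majority, no 'human spark'
# required.
```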

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strikes me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understand ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that would render it empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organize those possessing authoritarian dispositions, render them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating no one dare interrogate what could be the greatest challenge to ever confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukacs) is seen as a primary enabler of relations of power, a fundamental component of the ‘social apriori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari, how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, of course, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.

So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He poses a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all-powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gathering ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me the best way of thinking about this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses of philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299
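Kahneman’s duration neglect, which the narrating self exemplifies, has a famously crisp form in the ‘peak-end rule’: remembered affect tracks roughly the average of an episode’s worst moment and its final moment, largely ignoring how long the episode lasts. A toy comparison (illustrative numbers, not Kahneman’s data):

```python
# Toy illustration of the peak-end rule: the experiencing self
# integrates discomfort moment by moment; the narrating self remembers
# roughly the average of the peak and the end, neglecting duration.

def experienced(discomfort):
    return sum(discomfort)                          # moment-by-moment total

def remembered(discomfort):
    return (max(discomfort) + discomfort[-1]) / 2   # peak-end average

short_trial = [8, 8]              # two minutes, ending at peak discomfort
long_trial = [8, 8, 4, 4, 4]      # same start, plus three milder minutes

print(experienced(short_trial), experienced(long_trial))   # 16 vs 28
print(remembered(short_trial), remembered(long_trial))     # 8.0 vs 6.0
# The longer trial contains strictly more total discomfort, yet the
# narrating self recalls it as the better episode, as subjects did in
# the cold-pressor experiments.
```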

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true, it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the points most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.

 

… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.

 

Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of a ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust, which requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever-increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic State. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…’” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large-scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, to impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility become a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

Derrida as Neurophenomenologist

by rsbakker


For the longest time I thought that unravelling the paradoxical nature of the now, understanding how it could be at once the same now and yet a different now entirely, was the key to resolving the problem of meaning and experience. The reason for this turned on my early philosophical love affair with Jacques Derrida, the famed French post-structuralist philosopher, who was very fond of writing passages such as this tidbit from “Differance”:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

One of the big problems faced by phenomenology has to do with time. The problem in a nutshell is that any phenomenon attended to is a present phenomenon, and as such dependent upon absent enormities—namely the past and the future. The phenomenologist suffers from what is sometimes referred to as a ‘keyhole problem,’ the question of whether the information available—‘experience’—warrants the kinds of claims phenomenologists are prone to make about the truth of experience. Does the so-called ‘phenomenological attitude’ possess the access phenomenology needs to ground its analyses? How could it, given so slight a keyhole as the present? Phenomenologists typically respond to the problem by invoking horizons, the idea that nonpresent contextual enormities nevertheless remain experientially accessible—present—as implicit features of the phenomenon at issue. You could argue that horizons scaffold the whole of reportable experience, insofar as so little, if anything, is available to us in its entirety at any given moment. We see and experience coffee cups, not perspectival slices of coffee cups. So in Husserl’s analysis of ‘time-consciousness,’ for instance, the past and future become intrinsic components of our experience of temporality as ‘retention’ and ‘protention.’ Even though absent, they nevertheless decisively structure phenomena. As such, they constitute important domains of phenomenological investigation in their own right.

From the standpoint of the keyhole problem, however, this answer simply doubles down on the initial question. Our experience of coffee cups is one thing, after all, and our experience of ourselves is quite another. How do we know we possess the information required to credibly theorize—make explicit—our implicit experience of the past as retention, say? After all, as Derrida says, retention is always present retention. There are, as he famously argues, two pasts, the one experienced, and the one outrunning the very possibility of experience (as its condition of possibility). Our experience of the present does not arise ‘from nowhere,’ nor does it arise in our present experience of the past, since that experience is also present. Thus what he calls the ‘trace,’ which might be understood as a ‘meta-horizon,’ or a ‘super-implicit,’ the absent enormity responsible for horizons that seem to shape content. The apparently sufficient, unitary structure of present experience contains a structurally occluded origin, a difference making difference, that can in no way appear within experience.

One way to put Derrida’s point is that there is always some occluded context, always some integral part of the background, driving phenomenology. From an Anglo-American, pragmatic viewpoint, his point is obvious, yet abstrusely and extravagantly made: Nothing is given, least of all meaning and experience. What Derrida is doing, however, is making this point within the phenomenological idiom, ‘reproducing’ it, as he says in the quote. The phenomenology itself reveals its discursive impossibility. His argument is ontological, not epistemic, and so requires speculative commitments regarding what is, rather than critical commitments regarding what can be known. Derrida is providing what might be called a ‘hyper-phenomenology,’ or even better, what David Roden terms dark phenomenology, showing how the apparently originary, self-sustaining character of experience is a product of its derivative nature. The keyhole of the phenomenological attitude only appears theoretically decisive, discursively sufficient, because experience possesses horizons without a far side, meta-horizons—limits that cannot appear as such, and so appear otherwise, as something unlimited. Apodictic.

But since Derrida, like the phenomenologist, has only the self-same keyhole, he does what humans always do in conditions of radical low-dimensionality: he confuses the extent of his ignorance for a new and special kind of principle. Even worse, his theory of meaning is a semantic one: as an intentionalist philosopher, he works with all the unexplained explainers, all the classic theoretical posits, handed down by the philosophical tradition. And like most intentionalists, he doesn’t think there’s any way to escape those posits save by going through them. The deflecting, deferring, displacing outside, for Derrida, cannot appear inside as something ‘outer.’ Representation continually seals us in, relegating evidence of ‘differance’ to indirect observations of the kinds of semantic deformations that only it seems to explain, to the actual work of theoretical interpretation.

Now I’m sure this sounds like hokum to most souls reading this post, something artifactual. It should. Despite all my years as a Derridean, I now think of it as a discursive blight, something far more often used to avoid asking hard questions of the tradition than to pose them. But there is a kernel of neurophenomenological truth in his position. As I’ve argued in greater detail elsewhere, Derrida and deconstruction can be seen as an attempt to theorize the significance of source neglect in philosophical reflection generally, and phenomenology more specifically.

So far as ‘horizons’ belong to experience, they presuppose the availability of information required to behave in a manner sensitive to the recent past. So far as experience is ecological, we can suppose the information rendered will be geared to the solution of ancestral problem ecologies. We can suppose, in other words, that horizons are ecological, that the information rendered will be adequate to the problem-solving needs of our evolutionary ancestors. Now consider the mass-industrial character of the cognitive sciences, the sheer amount of resources, toil, and ingenuity dedicated to solving our own nature. This should convey a sense of the technical challenges any CNS faces attempting to cognize its own nature, and the reason why our keyhole has to be radically heuristic, a fractionate bundle of glimpses, each peering off in different directions to different purposes. The myriad problems this fact poses can be distilled into a single question: How much of the information rendered should we presume warrants theoretical generalizations regarding the nature of meaning and experience? This is the question upon which the whole of traditional philosophy presently teeters.

What renders the situation so dire is the inevitability of keyhole neglect, systematic insensitivity to the radically heuristic nature of the systems employed by philosophical reflection. Think of darkness, which, like pastness, lays out the limits of experience in experience as a ‘horizon.’ To say we suffer keyhole neglect is to say our experience of cognition lacks horizons, that we are doomed to confuse what little we see for everything there is. In the absence of darkness (or any other experiential marker of loss or impediment), unrestricted visibility is the automatic assumption. Short sensitivity to information cuing insufficiency, sufficiency is the default. What Heidegger and the continental tradition call the ‘Metaphysics of Presence’ can be seen as an attempt to tackle the problems posed by sufficiency in intentional terms. And likewise, Derrida’s purported oblique curative to the apparent inevitability of running afoul of the Metaphysics of Presence can be seen as a way of understanding the ‘sufficiency effects’ plaguing philosophical reflection in intentional terms.

The human brain suffers medial neglect, the congenital inability to track its own high-dimensional (material) processes. This means the human brain is insensitive to its own irreflexive materiality as such, and so presumes no such irreflexive materiality underwrites its own operations—even though, as anyone who has spent a great deal of time in stroke recovery wards can tell you, everything turns upon them. What we call ‘philosophical reflection’ is simply an artifact of this ecological limitation, a brain attempting to solve its nature with tools adapted to solve absent any information regarding that nature. Differance, trace, spacing: these are the ways Derrida theorizes the inevitability of irreflexive contingency from the far side of default sufficiency. I read Derrida as tracking the material shadow of thought via semantic terms. By occluding all antecedents, source neglect dooms reflection to the illusion of sufficiency when no such sufficiency exists. In this sense, positions like Derrida’s theory of meaning can be seen as impressionistic interpretations of what is a real biomechanical feature of consciousness. Attend to the metacognitive impression and meaning abides, and representation seems inescapable. The neuromechanical is occluded, so sourceless differentiation is all we seem to have, the magic of a now that is forever changing, yet miraculously abides.

On the Interpretation of Artificial Souls

by rsbakker


In “Is Artificial Intelligence Permanently Inscrutable?” Aaron M. Bornstein surveys the field of artificial neural networks, claiming that “[a]s exciting as their performance gains have been… there’s a troubling fact about modern neural networks: Nobody knows quite how they work.” The article is fascinating in its own right, and Peter over at Conscious Entities provides an excellent overview, but I would like to use it to flex a little theoretical muscle, and show the way the neural network ‘Inscrutability Problem’ turns on the same basic dynamics underwriting the apparent ‘hard problem’ of intentionality. Once you have a workable, thoroughly naturalistic account of cognition, you can begin to see why computer science finds itself bedevilled with strange parallels of the problems one finds in the philosophy of mind.

This parallel is evident in what Bornstein identifies as the primary issue, interpretability. The problem with artificial neural networks is that they are both contingent and incredibly complex. Recurrent neural networks operate by producing outputs conditioned by a selective history of previous conditionings, one captured in the weighting of (typically) millions of artificial neurons arranged in multiple processing layers. Since discrepancies in output serve as the primary constraint, and since the process of deriving new outputs is driven by the contingencies of the system (to the point where even electromagnetic field effects can become significant), the complexity means that searching for the explanation—or canonical interpretation—of the system is akin to searching for a needle in a haystack.
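For readers who want the gist made mechanical, here is a minimal, purely illustrative sketch in Python (the task, sizes, and numbers are all my own choosing, not anything from Bornstein’s article): a tiny two-layer network learns XOR by gradient descent on a squared-error cost, and the only ‘explanation’ of its success that survives training is a table of contingent, seed-dependent weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns (third column is a constant 'bias' input) and XOR labels.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(scale=0.5, size=(3, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1)        # hidden layer activations
    out = sigmoid(h @ W2)      # network output
    # Gradient descent on a squared-error cost, backpropagated by hand.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
print(np.round(W1, 2))            # the 'explanation': a grid of contingent numbers
```

Scale those two dozen weights to millions, stack the layers dozens deep, and add the contingencies of training history, and the interpretative predicament Bornstein describes follows directly.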

And as Bornstein points out, this has forced researchers to borrow “techniques from biological research that peer inside networks after the fashion of neuroscientists peering into brains: probing individual components, cataloguing how their internals respond to small changes in inputs, and even removing pieces to see how others compensate.” Unfortunately, importing neuroscientific techniques has resulted in importing neuroscience-like interpretative controversies as well. In “Could a neuroscientist understand a microprocessor?” Eric Jonas and Konrad Kording show how taking the opposite approach, using neuroscientific data analysis methods to understand the computational functions behind games like Donkey Kong and Space Invaders, fails no matter how much data they have available. The authors even go so far as to reference artificial neural network inscrutability as the problem, stating that “our difficulty at understanding deep learning may suggest that the brain is hard to understand if it uses anything like gradient descent on a cost function” (11).
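The flavour of these borrowed techniques is easy to convey with a toy sketch of my own devising (the ‘trained’ network below is just a random stand-in, not anything from the studies cited): lesion one hidden unit at a time and catalogue how far the outputs drift, the software analogue of the neuroscientist’s ablation study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a trained network and a batch of probe inputs.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))
X = rng.normal(size=(200, 4))

def forward(X, W1, W2, lesion=None):
    h = np.tanh(X @ W1)
    if lesion is not None:
        h[:, lesion] = 0.0        # 'remove' one hidden unit
    return h @ W2

baseline = forward(X, W1, W2)
for unit in range(W1.shape[1]):
    shift = np.mean(np.abs(forward(X, W1, W2, lesion=unit) - baseline))
    print(f"unit {unit:2d}: mean output shift {shift:.3f}")
```

The catch, as Jonas and Kording demonstrate, is that a catalogue of such shifts is not an explanation; it tells you which pieces matter without telling you what the whole is doing.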

Neural networks, artificial or natural, could very well be essential black boxes, systems that will always resist synoptic verbal explanation. Functional inscrutability in neuroscience is a pressing problem for obvious reasons. The capacity to explain how a given artificial neural network solves a given problem, meanwhile, remains crucial simply because “if you don’t know how it works, you don’t know how it will fail.” One of the widely acknowledged shortcomings of artificial neural networks is “that the machines are so tightly tuned to the data they are fed,” data that always falls woefully short of the variability and complexity of the real world. As Bornstein points out, “trained machines are exquisitely well suited to their environment—and ill-adapted to any other.” As AI creeps into more and more real-world ecological niches, this ‘brittleness,’ as Bornstein terms it, becomes more of a real-world concern. Interpretability means lives, in AI potentially no less than in neuroscience.

All this provokes Bornstein to pose the philosophical question: What is interpretability?

He references Marvin Minsky’s “suitcase words,” the legendary computer scientist’s analogy for many of the terms—such as “consciousness” or “emotion”—we use when we talk about our sentience and sapience. These words, he proposes, reflect the workings of many different underlying processes, which are locked inside the “suitcase.” As long as we keep investigating these words as stand-ins for more fundamental concepts, our insight will be limited by our language. In the study of intelligence, could interpretability itself be such a suitcase word?

Bornstein finds himself delivered to one of the fundamental issues in the philosophy of mind: the question of how to understand intentional idioms—Minsky’s ‘suitcase words.’ The only way to move forward on the issue of interpretability, it seems, is to solve nothing less than the cognitive (as opposed to the phenomenal) half of the hard problem. This is my bailiwick. The problem, here, is a theoretical one: the absence of any clear understanding of ‘interpretability.’ What is interpretation? Why do breakdowns in our ability to explain the operation of our AI tools happen, and why do they take the forms that they do? I think I can paint a spare yet comprehensive picture that answers these questions and places them in the context of a much more ancient form of interpreting neural networks. In fact, I think it can pop open a good number of Minsky’s suitcases and air out their empty insides.

Three Pound Brain regulars, I’m sure, have noticed a number of striking parallels between Bornstein’s characterization of the Inscrutability Problem and the picture of ‘post-intentional cognition’ I’ve been developing over the years. The apparently inscrutable algorithms derived via neural networks are nothing if not heuristic: cognitive systems that solve via cues correlated to target systems. Since they rely on cues (rather than all the information potentially available), their reliability entirely depends on their ecology, which is to say, how those cues correlate. If those cues do not correlate, then disaster strikes (as when the truck trailer that killed Joshua Brown in his Tesla Model S cued nothing but more white sky).
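A toy sketch makes this brittleness concrete (the setup and numbers are mine, purely hypothetical): a classifier trained in an ecology where a background cue happens to track the target learns to lean on that cue, then collapses once the cue appears without the target, much as white trailer and bright sky did.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Training ecology: the label co-varies with a weak 'real' signal and,
# spuriously, with a near-perfect background cue (think: bright sky).
labels = rng.integers(0, 2, n)
signal = labels + rng.normal(0.0, 1.0, n)     # weakly informative
cue = labels + rng.normal(0.0, 0.1, n)        # accidentally decisive
X_train = np.column_stack([np.ones(n), signal, cue])

# Logistic regression fit by gradient descent.
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w)))
    w -= 0.1 * X_train.T @ (p - labels) / n

# Test ecology: the cue now appears regardless of the label.
test_labels = rng.integers(0, 2, n)
X_test = np.column_stack([np.ones(n),
                          test_labels + rng.normal(0.0, 1.0, n),
                          1.0 + rng.normal(0.0, 0.1, n)])
preds = (X_test @ w) > 0.0
print("weights (bias, signal, cue):", np.round(w, 2))
print("accuracy once the cue decouples:", np.mean(preds == test_labels))
```

The classifier is not wrong about its training ecology; it is, as Bornstein puts it, exquisitely well suited to it, and that is precisely the problem.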

The primary problem posed by inscrutability, in other words, is the problem of misapplication. The worry that arises again and again isn’t simply that these systems are inscrutable, but that they are ecological, requiring contexts often possessing quirky features given quirks in the ‘environments’—data sets—used to train them. Inscrutability is a problem because it entails blindness to potential misapplications, plain and simple. Artificial neural network algorithms, you could say, possess adaptive problem-ecologies the same as all heuristic cognition. They solve, not by exhaustively taking into account the high dimensional totality of the information available, but rather by isolating cues—structures in the data set—which the trainer can only hope will generalize to the world.

Artificial neural networks are shallow information consumers, systems that systematically neglect the high-dimensional mechanical intricacies of their environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies to solve them. They are ‘brittle,’ therefore, so far as those correlations fail to obtain.

But humans are also shallow information consumers, albeit far more sophisticated ones. Short the prostheses of science, we are also systems prone to neglect the high-dimensional mechanical intricacies of our environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies. And we are also brittle to the extent those correlations fail to obtain. The shallow information nets we throw across our environments appear to be seamless, but this is just an illusion, as magicians so effortlessly remind us.

This is as much the case for our linguistic attempts to make sense of ourselves and our devices as it is for other cognitive modes. Minsky’s ‘suitcase words’ are such because they themselves are the product of the same cue-correlative dependency. These are the granular posits we use to communicate cue-based cognition of mechanical black box systems such as ourselves, let alone others. They are also the granular posits we use to communicate cue-based cognition of pretty much any complicated system. To be a shallow information consumer is to live in a black box world.

The rub, of course, is that this is itself a black box fact, something tucked away in the oblivion of systematic neglect, duping us into assuming most everything is clear as glass. There’s nothing about correlative cognition, no distinct metacognitive feature, that identifies it as such. We have no way of knowing whether we’re misapplying our own onboard heuristics in advance (thus the value of the heuristics and biases research program), let alone our prosthetic ones! In fact, we’re only now coming to grips with the fractionate and heuristic nature of human cognition as it is.


Inscrutability is a problem, recall, because artificial neural networks are ‘brittle,’ bound upon fixed correlations between their cues and the systems they were tasked with solving, correlations that may or may not, given the complexity of the world, be the case. The amazing fact here is that artificial neural networks are inscrutable, the province of interpretation at best, because we ourselves are brittle, and for precisely the same basic reason: we are bound upon fixed correlations between our cues and the systems we’re tasked with solving. The contingent complexities of artificial neural networks place them, presently at least, outside our capacity to solve—at least in a manner we can readily communicate.

The Inscrutability Problem, I contend, represents a prosthetic externalization of the very same problem of ‘brittleness’ we pose to ourselves, the almost unbelievable fact that we can explain the beginning of the Universe but not cognition—be it artificial or natural. Where the scientists and engineers are baffled by their creations, the philosophers and psychologists are baffled by themselves, forever misapplying correlative modes of cognition to the problem of correlative cognition, forever confusing mere cues for extraordinary, inexplicable orders of reality, forever lost in jungles of perpetually underdetermined interpretation. The Inscrutability Problem is the so-called ‘hard problem’ of intentionality, only in a context that is ‘glassy’ enough to moot the suggestion of ‘ontological irreducibility.’ The boundary faced by neuroscientists and AI engineers alike is mere complexity, not some eerie edge-of-nature-as-we-know-it. And thanks to science, this boundary is always moving. If it seems inexplicable or miraculous, it’s because you lack information: this seems a pretty safe bet as far as razors go.

‘Irreducibility’ is about to come crashing down. I think the more we study problem-ecologies and heuristic solution strategies the more we will be able to categorize the mechanics distinguishing different species of each, and our bestiary of different correlative cognitions will gradually, if laboriously, grow. I also think that artificial neural networks will play a crucial role in that process, eventually providing ways to model things like intentional cognition. If nature has taught us anything over the past five centuries it is that the systematicities, the patterns, are there—we need only find the theoretical and technical eyes required to behold them. And perhaps, when all is said and done, we can ask our models to explain themselves.