Breakneck: Review and Critical Commentary of Whiplash: How to Survive our Faster Future by Joi Ito and Jeff Howe

by rsbakker


The thesis I would like to explore here is that Whiplash by Joi Ito and Jeff Howe is at once a local survival guide and a global suicide manual. Their goal “is no less ambitious than to provide a user’s manual to the twenty-first century” (246), a “system of mythologies” (108) embodying the accumulated wisdom of the storied MIT Media Lab. Since this runs parallel to my own project, I applaud their attempt. Like them, I think understanding the consequences of the ongoing technological revolution demands “an entirely new mode of thinking—a cognitive evolution on the scale of a quadruped learning to stand on its hind feet” (247). I just think we need to recall the number of extinctions that particular evolutionary feat required.

Whiplash was a genuine delight for me to read, and not simply because I’m a sucker for technoscientific anecdotes. At so many points I identified with the collection of misfits and outsiders that populate their tales. So, as an individual who fairly embodies the values promulgated in this book, I offer my own amendments to Ito and Howe’s heuristic source code, what I think is a more elegant and scientifically consilient way to understand not only our present dilemma, but the kinds of heuristics we will need to survive it…

Insofar as that is possible.

 

Emergence over Authority

General Idea: Pace of change assures normative obsolescence, which in turn requires openness to ‘emergence.’

“Emergent systems presume that every individual within that system possesses unique intelligence that would benefit the group.” 47

“Unlike authoritarian systems, which enable only incremental change, emergent systems foster the kind of nonlinear innovation that can react quickly to the kinds of rapid changes that characterize the network age.” 48

Problems: Insensitivity to the complexities of the accelerating social and technical landscape. The moral here should be a question: Does this heuristic still apply?

The quote above also points to the larger problem, which becomes clear by simply rephrasing it to read, ‘emergent systems foster the kind of nonlinear transformation that can react quickly to the kind of nonlinear transformations that characterize the network age.’ The problem, in other words, is also the solution. Call this the Putting Out Fire with Gasoline Problem. I wish Ito and Howe had spent more time considering it, since it really is the heart of their strategy: How do we cope with accelerating innovation? We become as quick and innovative as we can.

 

Pull over Push

General Idea: Command and control over warehoused resources lacks the sensitivity to solve many modern problems, which are far better resolved by allowing the problems themselves to attract the solvers.

“In the upside-down, bizarre universe created by the Internet, the very assets on your balance sheet—from printing presses to lines of code—are now liabilities from the perspective of agility. Instead, we should try to use resources that can be utilized just in time, for just that time necessary, then relinquished.” 69

“As the cost of innovation continues to fall, entire communities that have been sidelined by those in power will be able to organize themselves and become active participants in society and government. The culture of emergent innovation will allow everyone to feel a sense of both ownership and responsibility to each other and to the rest of the world, which will empower them to create more lasting change than the authorities who write policy and law.” 71

Problems: In one sense, I think this chapter speaks to the narrow focus of the book, the degree to which it views the world through IT glasses. Trump exemplifies the power of Pull. ISIS exemplifies the power of Pull. ‘Empowerment’ is usually charged with positive connotations, until one applies it to criminals, authoritarian governments, and so on. It’s important to realize that ‘pull’ runs any which way, rather than directly toward better.

 

Compasses over Maps

General Idea: Sensitivity to ongoing ‘facts on the ground’ generally trumps reliance on high-altitude appraisals of yesterday’s landscape.

“Of all the nine principles in the book, compasses over maps has the greatest potential for misunderstanding. It’s actually very straightforward: a map implies a detailed knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.” 89

Problems: I actually agree that this principle is the most apt to be misunderstood because I’m inclined to think Ito and Howe themselves might be misunderstanding it! Once again, we need to see the issue in terms of cognitive ecology: Our ancestors, you could say, suffered a shallow present and enjoyed a deep future. Because the mechanics of their world eluded them, they had no way of re-engineering them, and so they could trust the machinery to trundle along the way it always had. We find ourselves in the opposite predicament: As we master more and more of the mechanics of our world, we discover an ever-expanding array of ways to re-engineer them, meaning we can no longer rely on the established machinery the way our ancestors—and here’s the important bit—evolved to. We are shallow present, deep future creatures living in a deep present, shallow future world.

This, I think, is what Ito and Howe are driving at: just as the old rules (authorities) no longer apply, the old representations (maps) no longer apply either, forcing us to gerrymander (orienteer) our path.

 

Risk over Safety

General Idea: The cost of experimentation has plummeted to such an extent that being wrong no longer has the catastrophic market consequences it once had.

“The new rule, then, is to embrace risk. There may be nowhere else in this book that exemplifies how far our collective brains have fallen behind our technology.” 116

“Seventy million years ago it was great to be a dinosaur. You were a complete package; big, thick-skinned, sharp-toothed, cold-blooded, long-lived. And it was great for a long, long time. Then, suddenly… it wasn’t so great. Because of your size, you needed an awful lot of calories. And you needed an awful lot of room. So you died. You know who outlived you? The frog.” 120

Problems: Essentially the argument is that risky ventures in the old economy are now safe, and that safe ventures are now risky, which means the argument is actually a ‘safety over risk’ one. I find this particular maxim so interesting because I think it throws into relief their lack of any theory of the problem they take themselves to be solving or ameliorating. Really the moral here is that experimentation pays.




 

Disobedience over Compliance

General Idea: Traditional forms of development stifle the very creativity institutions require to adapt to the accelerating pace of technological change.

“Since the 1970’s, social scientists have recognized the positive impact of “positive deviants,” people whose unorthodox behavior improves their lives and has the potential to improve their communities if it’s adopted more widely.” 141

“The people who will be the most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.” 141

Problems: Disobedience is not critique, and Ito and Howe are careful to point this out, but they fail to mention what role, if any, criticality plays in their list of principles. Another problem has to do with the obvious exception bias at work in their account. Sure, being positive deviants has served Ito and Howe and the generally successful people they count as their ingroup, but what about the rest of us? This is why I cringe every time I hear Oscar acceptance speeches urging young wannabe thespians to ‘never give up on their dream,’ because winners—who are winners by virtue of being the exception—see themselves as proof positive that it can be done if you just try-try-try… This stuff is what powers the great dream smashing factory called Hollywood—as well as Silicon Valley. All things being equal, I think being a ‘positive deviant’ is bound to generate far more grief than success.

And this, I think, underscores the fundamental problem with the book, which is the question of application. I like to think of myself as a ‘positive deviant,’ but I’m aware that I am often identified as a ‘contrarian flake’ in the various academic silos I piss in now and again. By opening research ingroups to the wider world, the web immediately requires members to vet communications in a manner they never had to before. The world, as it turns out, is filled with contrarian flakes, so the problem becomes one of sorting positive deviants (like myself (maybe)), extra-institutional individuals with positive contributions to make, from all those contrarian flakes (like myself (maybe)).

Likewise, given that every communal enterprise possesses wilful, impassioned, but unimaginative employees, how does a manager sort the ‘positive deviant’ out?

When does disobedience over compliance apply? This is where the rubber hits the road, I think. The whole point of the (generally fascinating) anecdotes is to address this very issue, but aside from some gut estimation of analogical sufficiency between cases, we really have nothing to go on.

 

Practice over Theory

General Idea: Traditional forms of education and production emphasize planning before, and learning outside of, the relevant context of application, even though humans are simply not wired for this and those contexts are transforming ever more quickly.

“Putting practice over theory means recognizing that in a faster future, in which change has become a new constant, there is often a higher cost to waiting and planning than there is to doing and improvising.” 159

“The Media Lab is focussed on interest-driven, passion-driven learning through doing. It is also trying to understand and deploy this form of creative learning into a society that will increasingly need more creative learners and fewer human beings who can solve problems better tackled by robots and computers.” 170

Problems: Humans are the gerrymandering species par excellence, leveraging technical skills into more and more forms of environmental mastery. In this respect it’s hard to argue against Ito and Howe’s point, given the caveats they are careful to provide.

The problem lies in the supercomplex environmental consequences of that environmental mastery: Whiplash is advertised as a how-to manual for environmentally mastering the consequences of environmental mastery, so obviously, environmental mastery, technical innovation, ‘progress’—whatever you want to call it—has become a life and death matter, something to be ‘survived.’

The thing people really need to realize in these kinds of discussions is just how far we have sailed into uncharted waters, and just how fast the wind is about to grow.

 

Diversity over Ability

General Idea: Crowdsourcing, basically: the term Jeff Howe coined for the way large numbers of people from a wide variety of backgrounds can generate solutions that elude experts.

“We’re inclined to believe the smartest, best trained people in a given discipline—the experts—are the best qualified to solve a problem in their specialty. And indeed, they often are. When they fail, as they will from time to time, our unquestioning faith in the principle of ‘ability’ leads us to imagine that we need to find a better solver: other experts with similarly high levels of training. But it is in the nature of high ability to reproduce itself—the new team of experts, it turns out, trained at the same amazing schools, institutes, and companies as the previous experts. Similarly brilliant, our two sets of experts can be relied on to apply the same methods to the problem, and share as well the same biases, blind spots, and unconscious tendencies.” 183

Problems: Again I find myself troubled not so much by the moral as by the articulation. If you switch the register from ‘ability’ to competence and consider the way ingroup adjudications of competence systematically perceive outgroup contributions to be incompetent, then you have a better model to work with here, I think. Each of us carries a supercomputer in our heads, and all cognition exhibits path-dependency and is therefore vulnerable to blind alleys, so the power of distributed problem solving should come as no surprise. The problem here, rather, is one of seeing through our ingroup blinders, and coming to understand how the way we instinctively identify competence forecloses on distributed cognitive resources (which can take innumerable forms).

Institutionalizing diversity seems like a good first step. But what about overcoming ingroup biases more generally? And what about the blind-alley problem (which could be called the ‘double-blind alley problem,’ given the way reviewing the steps taken tends to confirm the necessity of the path taken)? Is there a way to suss out the more pernicious consequences of cognitive path-dependency?

 

Resilience over Strength

General Idea: The reed versus the tree.

Problems: It’s hard to bitch about a chapter beginning with a supercool Thulsa Doom quote.

Strike that—impossible.

 

Systems over Objects

General Idea: Unravelling contemporary problems means unravelling complex systems, which necessitates adopting the systems view.

“These new problems, whether we’re talking about curing Alzheimer’s or learning to predict volatile weather systems, seem to be fundamentally different, in that they seem to require the discovery of all the building blocks in a complex system.” 220

“Systems over objects recognizes that responsible innovation requires more than speed and efficiency. It also requires a constant focus on the overall impact of new technologies, and an understanding of the connections between people, their communities, and their environments.” 224

Problems: Since so much of Three Pound Brain is dedicated to understanding human experience and cognition in naturally continuous terms, I tend to think that ‘Systems over Subjects’ offers a more penetrating approach. The idea that things and events cannot be understood or appreciated in isolation is already firmly rooted in our institutional DNA, I think. The challenge, here, lies in squaring this way of thinking with everyday cognition, with our default ways of making sense of each other and ourselves. We are hardwired to see simple essences and sourceless causes everywhere we look. This means the cognitive ecology Ito and Howe are both describing and advocating is in some sense antithetical—and therefore alienating—to our ancestral ways of making sense of ourselves.




 

Conclusion

When I decided to post a review of this book, I opened an MSWord doc the way I usually do and began jotting down jumbled thoughts and impressions, including the reminder to “Bring up the problem of theorizing politics absent any account of human nature.” I had just finished reading the introduction by that point, so I read the bulk of Whiplash with this niggling thought in the back of my mind. Ito and Howe take care to avoid explicit political references, but as I’m sure they will admit, their project is political through and through. Politics has always involved science fiction; after all, how do you improve a future you can’t predict? Knowing human nature, the need to eat, to secure prestige, to mate, to procreate, and so on, is the only thing that allows us to predict human futures at all. Dystopias beg Utopias beg knowing what makes us tick.

In a time of radical, exponential social and environmental transformation, the primary question regarding human nature has to involve adaptability, our ability to cope with social and environmental transformation. The more we learn about human cognition, however, the more we discover that the human capacity to solve new problems is modular as opposed to monolithic, complex as opposed to simple. This in turn means that transforming different elements in our environments (the way technology does) can have surprising results.

So for example, given the ancestral stability of group sizes, it makes sense to suppose we would assess the risk of victimization against a fixed baseline whenever we encountered information regarding violence. Our ability to intuitively assess threats, in other words, depends upon a specific cognitive ecology, one where the information available is commensurate with life in small communities of farmers and/or hunter-gatherers. This suggests the provision of ‘deep’ (ancestrally unavailable) threat information, such as that provided by the web or the evening news, would play havoc with our threat intuitions—as indeed seems to be the case.
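The fixed-baseline misfire can be made concrete with a toy calculation. This is my own illustration, not anything from Whiplash or the research it draws on; the model, numbers, and function names are assumptions chosen only to show the shape of the argument: a heuristic calibrated to treat every report of violence as local stays accurate in a village of 150, but wildly inflates perceived risk once the evening news supplies reports drawn from a population of millions.

```python
# Toy model (my own illustration, not from the book): a threat heuristic
# that treats every report of violence heard as if it were local, with a
# baseline fixed at an ancestral community size of ~150.

def perceived_local_risk(reports_heard, assumed_community=150):
    """Intuited per-neighbour risk: every report counts as nearby."""
    return reports_heard / assumed_community

# Ancestral ecology: you only ever hear about violence in your own village,
# so the heuristic is well calibrated.
village_pop = 150
village_incidents = 1                      # one violent incident this year
actual_risk = village_incidents / village_pop
assert perceived_local_risk(village_incidents) == actual_risk

# Modern ecology: the news supplies reports sampled from millions of people
# with the SAME per-capita rate of violence, but the fixed baseline of the
# heuristic never updates.
reports_on_the_news = 200                  # a year's worth of violent stories
modern_estimate = perceived_local_risk(reports_on_the_news)

print(f"actual risk: {actual_risk:.4f}  intuited risk: {modern_estimate:.4f}")
assert abs(modern_estimate / actual_risk - 200) < 1e-9   # a 200-fold overestimate
```

The only point of the sketch is the ratio at the end: hold the per-capita rate of violence fixed, deepen the information supply, and intuited risk inflates by orders of magnitude without anything in the world having become more dangerous.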

Human cognition is heuristic, through and through, which is to say dependent on environmental invariances, the ancestral stability of different relevant backgrounds. The relation between group size and threat information is but one of countless default assumptions informing our daily lives. The more technology transforms our cognitive ecologies, the more we should expect our intuitions to misfire, to prompt ineffective problem-solving behaviour like voting for ‘tough-on-crime’ political candidates. The fact is technology makes things easy that were never ‘meant’ to be easy. Consider how humans depended on all the people they knew before the industrial concentration of production, and so were forced to compromise, to see themselves as requiring friends and neighbours. You could source your clothes, your food, even your stories and religion to some familiar face. You grew up in an atmosphere of ambient, ingroup gratitude that continually counterbalanced your selfish impulses. After the industrial concentration of production, the material dependencies enforcing cooperation evaporated, allowing humans to indulge egocentric intuitions, the sweet-tooth of themselves, and ‘individualism’ was born, and with it all the varieties of social isolation comprising the ‘modern malaise.’

This cognitive ecological lens is the reason why I’ve been warning that the web was likely to aggravate processes of group identification and counter-identification, why I’ve argued that the tactics of 20th century progressivism had actually become more pernicious than efficacious, and suggested that forms of political atavism, even the rise of demagoguery, would become bigger and bigger problems. Where most of the world saw the Arab Spring as a forceful example of the web’s capacity to emancipate, I saw it as an example of ‘flash civil unrest,’ the ability of populations to spontaneously organize and overthrow existing institutional orders period, and only incidentally ‘for the better.’

If you entertained extremist impulses before the internet, you had no choice but to air your views with your friends and neighbours, where, all things being equal, the preponderance of views would be more moderate. The network constraints imposed by geography, I surmised, had the effect of ameliorating extremist tendencies. Absent the difficulty of organizing about our darker instincts, rationalizing and advertising them, I think we have good reason to fear. Humans are tribal through and through, as prone to acts of outgroup violence as ingroup self-sacrifice. On the cognitive ecological picture, it just so happens that technological progress and moral/political progress have marched hand in hand thus far. The bulk of our prosocial, democratic institutions were developed—at horrendous cost, no less—to maximize the ‘better angels’ of our natures and to minimize the worst, to engineer the kind of cognitive ecologies we required to flourish in the new social and technical environments—such as the industrial concentration of material dependency—falling out of the Renaissance and Enlightenment.

I readily acknowledge that better accounts can be found for the social phenomena considered above: what I contend is that all of those accounts will involve some nuanced understanding of the heuristic nature of human cognition and the kinds of ecological invariance they take for granted. My further contention is that any adequate understanding of that heuristic nature raises the likelihood, perhaps even the inevitability, that human social cognition will effectively break down altogether. The problem lies in the radically heuristic nature of the cognitive modes we use to understand each other and ourselves. Since the complexity of our biocomputational nature renders it intractable, we had to develop ways of predicting/explaining/manipulating behaviour that have nothing to do with the brains behind that behaviour, and everything to do with its impact on our reproductive fortunes. Social problem-solving, in other words, depends on the stability of a very specific cognitive ecology, one entirely innocent to the possibility of AI.

For me, the most significant revelation from the Ashley Madison scandal was the ease with which men were fooled into thinking they were attracting female interest. And this wasn’t just an artifact of the venue: Ito’s MIT colleague Sherry Turkle, in addition to systematically describing the impact of technology on interpersonal relationships, often warns of the ease with which “Darwinian buttons” can be pushed. What makes simple heuristics so powerful is precisely what renders them so vulnerable (and it’s no accident that AI is struggling to overcome this issue now): they turn on cues physically correlated to the systems they track. Break those correlations, and those cues are connected to nothing at all, and we enter Crash Space, the kind of catastrophic cognitive ecological failure that warns away everyone but philosophers.

Virtual and Augmented Reality, or even Vegas magic acts, provide excellent visual analogues. Whether one looks at stereoscopic 3-D systems like Oculus Rift, or the much-ballyhooed ‘biomimetics’ of Magic Leap, or the illusions of David Copperfield, the idea is to cue visual environments that do not exist as effectively and as economically as possible. Goertzel and Levesque and others can keep pounding at the gates of general cognition (which may exist, who knows), but research like that of the late Clifford Nass is laying bare the landscape of cues comprising human social cognition, and given the relative resources required, it seems all but inevitable that the ‘taking to be’ approach, designing AIs focused not so much on being a genuine agent (whatever that is) as cuing the cognition of one, will sweep the field. Why build Disney World when you can project it? Developers will focus on the illusion, which they will refine and refine until the show becomes (Turing?) indistinguishable from the real thing—from the standpoint of consumers.

The differences being, 1) that the illusion will be perspectivally robust (we will have no easy way of seeing through it); and 2) the illusion will be a sociocognitive one. As AI colonizes more and more facets of our lives, our sociocognitive intuitions will become increasingly unreliable. This prediction, I think, is every bit as reliable as the prediction that the world’s ecosystems will be increasingly disrupted as human activity colonizes more and more of the world. Human social cognition turns access to cues into behaviour solving otherwise intractable biological brains—this is a fact. Algorithms are set to flood this space, to begin cuing social cognition to solve biological brains in the absence of any biological brains. Neil Lawrence likens the consequences to the creation of ‘System Zero,’ an artificial substratum for the System 1 (automatic, unconscious) and System 2 (deliberate, conscious) organization of human cognition. He writes:

“System Zero will come to understand us so fully because we expose to it our innermost thoughts and whims. System Zero will exploit massive interconnection. System Zero will be data rich. And just like an elephant, System Zero will never forget.”

System Zero, in other words, will come to solve us, even as we continue attempting to solve it with systems we evolved to solve one another—a task which is going to remain as difficult as it always has, and will likely grow less attractive as fantasy surrogates become increasingly available. Talk about Systems over Subjects! The ecology of human meaning, the shared background allowing us to resolve conflict and to trust, will be progressively exploited and degraded—like every other ancestral ecology on this planet. When I wax grandiloquent (I am a crazy fantasy writer after all), I call this the semantic apocalypse.

I see no way out. Everyone thinks otherwise, but only because the way that human cognition neglects cognitive ecology generates the illusion of unlimited, unconstrained cognitive capacity. And this, I think, is precisely the illusion informing Ito and Howe’s theory of human nature…

Speaking of which, as I said, I found myself wondering what this theory might be as I read the book. I understood I wasn’t the target audience of the book, so I didn’t see its absence as a failing so much as unfortunate for readers like me, always angling for the hard questions. And so it niggled and niggled, until finally, I reached the last paragraph of the last page and encountered this:

“Human beings are fundamentally adaptable. We created a society that was more focussed on our productivity than our adaptability. These principles will help you prepare to be flexible and able to learn the new roles and to discard them when they don’t work anymore. If society can survive the initial whiplash when we trade our running shoes for a supersonic jet, we may yet find that the view from the jet is just what we’ve been looking for.” 250

This first claim, uplifting as it sounds, is simply not true. Human beings, considered individually or collectively, are not capable of adapting to any circumstance. Intuitions systematically misfire all the time. I appreciate how believing as much balms the conscience of those in the innovation game, but it is simply not true. And how could it be, when it entails that humans somehow transcend ecology, which is a far different claim than saying humans, relative to other organisms, are capable of spanning a wide variety of ecologies. So long as human cognition is heuristic it depends on environmental invariances, like everything else biological. Humans are not capable of transcending system, which is precisely why we need to think the human in systematic terms, and to look at the impact of AI ecologically.

What makes Whiplash such a valuable book (aside from the entertainment factor) is that it is ecologically savvy. Ito and Howe’s dominant metaphor is that of adaptation and ecology. The old business habitat, they argue, has collapsed, leaving old business animals in the ecological lurch. The solution they offer is heuristic, a set of maxims meant to transform (at a sub-ideological level no less!) old business animals into newer, more adaptable ones. The way to solve the problem of innovation uncertainty is to contribute to that problem in the right way—be more innovative. But they fail to consider the ecological dimensions of this imperative, to see how feeding acceleration amounts to the inevitable destruction of cognitive ecologies, how the old meaning habitat is already collapsing, leaving old meaning animals in the ecological lurch, grasping for lies because those, at least, they can recognize.

They fail to see how their local survival guide likely doubles as a global suicide manual.




 

PS: The Big Picture

“In the past twenty-five years,” Ito and Howe write, “we have moved from a world dominated by simple systems to a world beset and baffled by complex systems” (246). This claim caught my attention because it is both true and untrue, depending on how you look at it. We are pretty much the most complicated thing we know of in the universe, so it’s certainly not the case that we’ve ever dwelt in a world dominated by simple systems. What Ito and Howe are referring to, of course, is our tools. We are moving from a world dominated by simple tools to a world beset and baffled by complex ones. Since these tools facilitate tool-making, we find the great ratchet that lifted us out of the hominid fog clicking faster and faster and faster.

One of these ‘simple tools’ is what we call a ‘company’ or ‘business,’ an institution itself turning on the systematic application of simple tools, ones that intrinsically value authority over emergence, push over pull, maps over compasses, safety over risk, compliance over disobedience, theory over practice, ability over diversity, strength over resilience, and objects over systems. In the same way the simplicity of our physical implements limited the damage they could do to our physical ecologies, the simplicity of our cognitive tools limited the damage they could do to our cognitive ecology. It’s important to understand that the simplicity of these tools is what underwrites the stability of the underlying cognitive ecology. As the growing complexity and power of our physical tools intensified the damage done to our physical ecologies, the growing complexity and power of our cognitive tools is intensifying the damage done to our cognitive ecologies.

Now, two things. First, this analogy suggests that not all is hopeless, that the same way we can use the complexity and power of our physical tools to manage and prevent the destruction of our physical environment, we should be able to use the complexity and power of our cognitive tools to do the same. I concede the possibility, but I think the illusion of noocentrism (the cognitive version of geocentrism) is simply too profound. I think people will endlessly insist on the freedom to concede their autonomy. System Zero will succeed because it will pander ever so much better than a cranky old philosopher could ever hope to.

Second, notice how this analogy transforms the nature of the problem confronting that old animal, business, in the light of radical ecological change. Ancestral human cognitive ecology possessed a shallow present and a deep future. For all his ignorance, a yeoman chewing his calluses in the field five hundred years ago could predict that his son would possess a life very much resembling his own. All the obsolete items that Ito and Howe consider are artifacts of a shallow present. When the world is a black box, when you have no institutions like science bent on the systematic exploration of solution space, the solutions happened upon are generally lucky ones. You hold onto the tools you trust, because it’s all guesswork otherwise and the consequences are terminal. Authority, Push, Compliance, and so on are all heuristics in their own right, all ways of dealing with supercomplicated systems (bunches of humans), but selected for cognitive ecologies where solutions were both precious and abiding.

Oh, how things have changed. Ambient information sensitivity, the ability to draw on everything from internet search engines, to Big Data, to scientific knowledge more generally, means that businesses have what I referred to earlier as a deep present, a vast amount of information and capacity to utilize in problem solving. This allows them to solve systems as systems (the way science does) and abandon the limitations of not only object thinking, but (and this is the creepy part) subject thinking as well. It allows them to correct for faulty path-dependencies by distributing problem-solving among a diverse array of individuals. It allows them to rationalize other resources as well, to pull what they need when they need it rather than pushing warehoused resources.

Growing ambient information sensitivity means growing problem-solving economy—the problem is that this economy means accelerating cognitive ecological transformation. The cheaper optimization becomes, the more transient it becomes, simply because each and every new optimization transforms, in ways large or small but generally unpredictable, the ecology (the network of correlations) prior heuristic optimizations require to be effective. Call this the Optimization Spiral.

This is the process Ito and Howe are urging the business world to climb aboard, to become what might be called meta-ecological institutions, entities designed in the first instance, not to build cars or to mediate social relations or to find information on the web, but to evolve. As an institutionalized bundle of heuristics, a business’s ability to climb the Optimization Spiral, to survive accelerating ecological change, turns on its ability to relinquish the old while continually mimicking, tinkering, and birthing with the new. Thus the value of disobedience and resilience and practical learning: what Ito and Howe are advocating is more akin to the Precambrian Explosion or the rise of Angiosperms than simply surviving extinction. The meta-heuristics they offer, the new guiding mythologies, are meant to encapsulate the practical bases of evolvability itself… They’re teaching ferns how to grow flowers.

And stepping back to take the systems view they advocate, one cannot but feel an admixture of awe and terror, and wonder if they aren’t sketching the blueprint for an entirely unfathomable order of life, something simultaneously corporate and corporeal.