To Ping or Not to Ping: Physics, Phenomenology, and Observer Effects
JAMES XAVIER: Sam, what’s the range of human vision?
SAM BRANT: Distance?
JAMES XAVIER: No, wavelength.
SAM BRANT: Between 4000 angstrom units and 7800 angstrom units.* You know that.
JAMES XAVIER: Less than one-tenth of the actual wave spectrum. What could we really see if we had access to the other ninety percent? Sam, we are virtually blind, all of us. You tell me my eyes are perfect. Well, they’re not. I’m blind to all but a tenth of the universe.
SAM BRANT: My dear friend, only the gods see everything.
JAMES XAVIER: My dear doctor, I’m closing in on the gods.
What happens when we begin to see too much? Roger Corman’s X: The Man with the X-Ray Eyes poses this very question. At first the dividends seem nothing short of fantastic. Dr. Xavier can see through pockets, sheets of paper, even the clothes of young men and women dancing. What’s more, he discovers he can look through the bodies of the ill and literally see what needs to be done to save them. The problem is that the blindness of others defines their expectations: Dr. Xavier finds himself a deep-information consumer in a shallow-information cognitive ecology. So, assisting in surgery, he knows the senior surgeon is about to kill a young girl because he can see what’s truly ailing her. He has to usurp his superior’s authority—it is the only sane thing to do—and yet his subsequent acts inevitably identify him as mad.
The madness, we discover, comes later, when the information becomes so deep as to overwhelm his cognitive biology. The more transparent the world becomes to him, the more opaque he becomes to others, himself included. He begins by nixing his personal and impersonal social ecologies, finds respite for a time as first a carnival mystic and then a quasi-religious faith healer, but even these liminal social habitats come crashing down around him. In the end, no human community can contain him, not even the all-embracing community of God. A classic ‘forbidden knowledge’ narrative, the movie ends with the Biblical admonition, “If thine eye offend thee…” and Dr. Xavier plucking out his own eyes—and with remarkable B-movie facility, I might add!
The idea, ultimately, isn’t so much that ignorance is bliss as that it’s adaptive. The great bulk of human cognition is heuristic, turning not so much on what’s going on as on cues systematically related to what’s going on. This dependence on cues is what renders human cognition ecological, a system functionally dependent upon environmental invariants. Change the capacity adapted to those cues, or change the systems tracked by those cues, and we find ourselves in crash space. X: The Man with the X-Ray Eyes provides us with catastrophic, and therefore dramatic, examples of both.
Humans, like all other species on this planet, possess cognitive ecologies. And as I hope to show below, the consequences of this fact can be every bit as subtle and misleading in philosophy as they are disastrous and illuminating in cinema.
We belong to the very nature we’re attempting to understand, and this has consequences for our capacity to understand. At every point in its scientific development, humanity has possessed a sensitivity horizon, a cognitive range basically, delimiting what can and cannot be detected, let alone solved. Ancestrally, for instance, we were sensitive only to visible light and so had no way of tracking the greater spectrum. Once restricted to the world of ‘middle-sized dry goods,’ our sensitivity horizons now delve deep into the macroscopic and microscopic reaches of our environment.
The difference between Ernest Rutherford’s gold foil experiment and the Large Hadron Collider provides a dramatic expression of the difficulties entailed by extending this horizon, of how the technical challenges tend to compound at ever more distal scales. The successor to the LHC, the International Linear Collider, is presently in development and expected to cost 10 billion dollars, twice as much as the behemoth outside Geneva. Meanwhile, the James Webb Space Telescope, the successor to the Hubble and the Spitzer, has been projected to cost 8 billion dollars. Increasingly, cutting-edge science is an industrial enterprise. We talk about searching for objects, signals, and so forth, but what we’re actually doing is engineering ever more profound sensitivities, ways to mechanically relate to macroscopic and microscopic scales.
When it comes to microscopic sensitivity horizons, the mechanical nature of this relation renders the problem of so-called ‘observer effects’ all but inevitable. The only way to track systematicities is to physically interact with them in some way. The more diminutive those systematicities become, the more sensitive they become, the more disruptive our physical interactions become. Intractable observational interference is something fundamental physics was pretty much doomed to encounter.
Now every account of observer effects I’ve encountered turns on the commonsensical observation that intervening on processes changes them, thus complicating our ability to study those processes as they are. Observer effects, therefore, knock target systems from the very pins we’re attempting to understand. Mechanical interaction with a system scrambles the mechanics of that system—what could be more obvious? Observer effects are simply a consequence of belonging to the same nature we want to know. The problem with this formulation, however, is that it fails to consider the hybrid system that results. Given that cognition is natural, we can say that all cognition turns on the convergence of two physical systems, the one belonging to the cognizer, the other belonging to the target. This allows us to distinguish between kinds of cognition in terms of the kinds of hybrid systems that result. And this, I hope to show, allows us not only to make more sense of intentionality—‘aboutness’—but also to understand why particle physics convinces so many that consciousness is somehow responsible for reality.
Unless you believe knowledge is magical or supernatural, claiming that our cognitive sensitivity to environmental systematicities has mechanical limits is a no-brainer. Blind Brain Theory amounts to little more than applying this fact to deliberative metacognition, or reflection, and showing how the mystery of human meaning can be unravelled in entirely naturalistic terms. On Blind Brain Theory, the first-person is best understood as an artifact of various metacognitive insensitivities. The human brain is utterly insensitive to the mechanics of its physical environmental relations (it has to be, for a wide variety of reasons); it has no alternative but to cognize those relations in a radically heuristic manner, to ignore all the mediating machinery. As BBT has it, what philosophers call ‘intentionality’ superficially tracks this specialized cognitive tool—fetishizes it, in fact.
Does the brain possess the capacity to cognize its own environmental relations absent cognition of the actual physical relations obtaining? Yes. This is a simple empirical fact. So what is this capacity? Does it constitute some intrinsically inexplicable pocket of reality, express some fundamental rupture in Being, or is it simply heuristic? Since inexplicable pockets and fundamental ruptures entail a wide variety of perpetually speculative commitments, heuristics have to be the only empirically plausible alternative.
Intentional cognition is heuristic cognition. As such, it possesses a corresponding problem ecology, which is to say, a limited scope of effective application. Heuristic cognition always requires that specific background conditions obtain, some set of environmental invariants. Given our insensitivity to these limits, our ‘autoinsensitivity’ (what I call medial neglect elsewhere), it makes sense that we would run afoul of misapplications. Blind Brain Theory provides a way of mapping these limits, of understanding how and where things like the intentionality heuristic might lead us astray.
Anyone who’s watched or read The Hunt for Red October knows about the fundamental distinction drawn between active and passive modes of detection in the military science of warning systems. With sonar, for instance, one can ‘ping’ to locate a potential enemy, transmitting an acoustic pulse designed to facilitate echoic location. The advantage of this active approach is that it reliably locates enemies, but it does so at the cost of alerting your enemy to your presence. It reliably detects, but it changes the behaviour of what it detects—a bona fide observer effect. You know where your enemy is, sure, but you’ve made them more difficult to predict. Passive sonar, on the other hand, simply listens for the sounds your enemy is prone to make. Though less reliable at detecting them, it has the advantage of leaving the target undisturbed, thus rendering your foe more predictable, and so more vulnerable.
Human cognition cleaves along very similar lines. In what might be called passive cognition, the cognitive apparatus (our brain and other enabling media) has a negligible or otherwise irrelevant impact on the systematicities tracked. Seeing a natural process, for instance, generally has no impact on that process, since the photons used would have been reflected whether or not they were subsequently intercepted by your retinas. With interactive cognition, on the other hand, the cognitive apparatus has a substantial impact on the systematicities tracked. Touching a natural process, for example, generally interferes with that process. Where the former allows us to cognize functions independent of our investigation, the latter does not. This means that interactive cognition always entails ignorance in a way that passive cognition does not. Restricted to the consequences of our comportments, we have no way of tracking the systematicities responsible, which means we have no way of completely understanding the system. In interactive cognition, we are constitutive of such systems, so blindness to ourselves effectively means blindness to those systems, which is why we generally learn the consequences of our interference, and little more. Of course passive cognition suffers the identical degree of autoinsensitivity; it just doesn’t matter, given how the passivity of the process preserves the functional independence of the systematicities involved. Things do what they would have done whether you had observed them or not.
We should expect, then, that applications of the intentionality heuristic—‘aboutness’—will generally facilitate cognition when our targets exhibit genuine functional independence, and generally undermine cognition when they do not. Understanding combustion engines requires no understanding of the cognitive apparatus required to understand combustion engines. The radically privative schema of knower and known, subject and object, works simply because the knowing need not be known. We need possess no comportment to our comportments in instances of small engine repair, which is a good thing, given the astronomical neural complexities involved. Thinking in terms of ‘about’ works just fine.
The more interactive cognition becomes, however, the more problematic assumptive applications of the intentionality heuristic are likely to become. Consider phenomenology, where the presumption is that the theorist can cognize experience itself, and not simply the objects of experience. It seems safe to say that experience does not enjoy the functional independence of, say, combustion engines. Phenomenologists generally rely on the metaphorics of vision in their investigations, but insofar as both experience and cognition turn on one and the same neural system, the suspicion has to be that things are far more tactile than their visual metaphors lead them to believe. The idea of cognizing experience absent any understanding of cognition is almost comically farfetched, if you think about it, and yet this is exactly what phenomenologists purport to do. One might wonder what two things could be more entangled, more functionally interdependent, than conscious experience and conscious cognition. So then why would anyone entertain phenomenology, let alone make it their vocation?
The answer is neglect. Since phenomenologists suffer the same profound autoinsensitivity as the rest of the human species, they have no way of distinguishing between those experiential artifacts genuinely observed and those manufactured—between what they ‘see’ and what they ‘handle.’ Since they have no inkling whatsoever of their autoinsensitivity, they are prone to assume, as humans generally do when suffering neglect, that what they see is all there is, despite the intrinsically theoretically underdetermined nature of their field. As we have seen, the intentionality heuristic presumes functional independence, that we need not know much of anything about our cognitive capacities to solve a given system. Apply this presumption to instances of interactive cognition, as phenomenologists so obviously do, and you will find yourself in crash space, plain and simple.
Observer effects, you could say, flag the points where cognitive passivity becomes interactive—where we must ping our targets to track them. Given autoinsensitivity, our brains necessarily neglect the enabling (or medial) mechanical dimension of their own constitution. They have no way, therefore, of tracking anything apart from the consequences of their cognitive determinations. This simply follows from the mechanical nature of consciousness, of course—all cognition turns on deriving predictions, first and foremost, but also manipulations and explanations, from mechanical consequences. The fact that we can only cognize the consequences of cognition—source neglect—convinces reflection that we somehow stand outside nature, that consciousness is some kind of floating source as opposed to what it is, another natural system embedded in a welter of natural systems. Autoinsensitivity is systematically mistaken for autosufficiency, the profound intuition that conscious experience somehow must come first. It becomes easy to suppose that the collapse of wave-functions is accomplished by the intervention of consciousness (or some component thereof) rather than the interposition of another system. We neglect the system actually responsible for decoherence and congratulate the heuristic cartoon that evolution has foisted upon us instead. The magically floating, suspiciously low-dimensional ‘I’ becomes responsible, rather than the materially embedded organism we know ourselves to be.
Like Deepak Chopra, Donald Hoffman, and numerous others who insist their brands of low-dimensional hokum are scientifically grounded, we claim that science entails our most preposterous conceit: that we are somehow the authors of reality, rather than just another thermodynamic waystation.
*The typical human eye is actually sensitive to 3900 to 7000 angstroms.
 Baer, Howard, Vernon D. Barger, and Jenny List. “The Collider That Could Save Physics,” Scientific American, June 2016, 8.
 Sometimes it can be gerrymandered to generate understanding in novel contexts, sometimes not. In those cases where it can be so adapted, it still relies on some kind of invariance between the cues accessed and the systems solved.
 Absent, that is, sophisticated theoretical and/or experimental prostheses. A great deal needs to be said here regarding the various ‘hacks’ we’ve devised to suss out natural processes via a wild variety of ingenious interventions. (Hacking’s wonderful Representing and Intervening is replete with examples). But none of these methods involve overcoming medial neglect, which is to say all of them leverage cognition absent autocognition.
 Blind Brain Theory can actually be seen as a generalization of what Daniel Kahneman calls WYSIATI (‘What-You-See-Is-All-There-Is’) effects in his research.
 This is entirely consonant with an exciting line of research (one of multiple lines converging on Blind Brain Theory) involving ‘inherence heuristics.’ Andrei Cimpian and Erika Salomon write:
we propose that people often make sense of [environmental] regularities via a simple rule of thumb–the inherence heuristic. This fast, intuitive heuristic leads people to explain many observed patterns in terms of the inherent features of the things that instantiate these patterns. For example, one might infer that girls wear pink because pink is a delicate, inherently feminine color, or that orange juice is consumed for breakfast because its inherent qualities make it suitable for that time of day. As is the case with the output of any heuristic, such inferences can be–and often are–mistaken. Many of the patterns that currently structure our world are the products of complex chains of historical causes rather than being simply a function of the inherent features of the entities involved. The human mind, however, may be prone to ignore this possibility. If the present proposal is correct, people often understand the regularities in their environments as inevitable reflections of the true nature of the world rather than as end points of event chains whose outcomes could have been different.
See Andrei Cimpian and Erika Salomon, “The inherence heuristic: An intuitive means of making sense of the world and a potential precursor to psychological essentialism,” Behavioral and Brain Sciences 37 (2014), 461–462.