Truth, Evicted: Notes Toward Naturalizing the ‘View from Nowhere’
Aphorism of the Day: In the graphic novel of life, Truth denotes those panels bigger than the page.
In his latest book, Constructing the World, David Chalmers begins by invoking the famous ‘Laplacean intellect’ for whom the future and the past “would be present before its eyes” because it could access all the facts pertaining to structure and function in a deterministic universe. In order to pluck ‘Laplace’s demon’ from all the damning criticism it has received regarding quantum indeterminacy and subjective facts, he provides it with the informatic access it needs to overcome these problems. What Chalmers is after is something he calls ‘scrutability,’ the notion that, in principle, there exists a compact class of truths from which all truths can be determined given a sufficiently powerful intellect.
In the actual world, we may suppose, one truth is that there are no Laplacean demons. But no Laplacean demon could know that there are no Laplacean demons. To avoid this problem, we could require the demon to know all the truths about its modified world rather than the actual world. But now the demon has to know about itself and a number of paradoxes threaten. There are paradoxes of complexity: to know the whole universe, the demon’s mind needs to be as complex as the whole universe, even though it is just one part of the universe. There are paradoxes of prediction: the demon will be able to predict its own actions and then try to act contrary to the prediction. And there are paradoxes of knowability: if there is any truth q that the demon never comes to know, perhaps because it never entertains q, then it seems the demon could never know the true proposition that q is a truth that it does not know.
To avoid these paradoxes, we can think of the demon as lying outside the world it is trying to know. (xv)
Constructing the World rests upon the notion that ‘scrutability’–the thesis that one can derive all truths from the proper set of partial truths, given the proper inferential resources–can provide hitherto unappreciated ways and means to approach a number of issues in philosophy. My concern, however, pretty much begins and ends with this single quote, this tale of truth evicted. Why? Because of the beautiful way it illustrates how the view from nowhere has to be, quite literally, from nowhere.
This is my hunch. I think that what we call ‘truth’ is simply a heuristic, a way for a neural system possessing certain structurally enforced constraints on information access and cognitive processing to conserve resources by exploiting those limitations. Truth, I want to suggest, is a product of informatic neglect, a result of the insuperable difficulty human brains have cognizing themselves as human brains, which is to say, as another causal channel in their causal environments.
Now all of this is wildly speculative, but I submit that the parallels are not simply interesting but striking enough to warrant investigation–no matter what one thinks of Blind Brain Theory. Who knows? It could lead to the naturalization of truth.
Neural systems are primarily environmental intervention machines, which is to say, the bulk of their resources are dedicated to inserting themselves in effective causal relationships with their environment. This requires them to be predictive machines, to somehow isolate causal regularities out of the booming, buzzing confusion of their environments. And this puts them in a pretty little pickle. Why? Because sensory effects are all they have to go on. They literally have to work backward from regularities in sensory input to regularities in their environment.
So let’s draw a distinction between two causal axes, the first, which we will call medial, pertaining to sensory and neural relations of cause and effect, the second, which we will call lateral, pertaining to environmental relations of cause and effect.
Depicted this way, the problem can be characterized as one of extracting lateral regularities (predictive power) from medial streams of information. With this distinction it’s easy to see that any system that overcomes this problem will suffer what might be called ‘global medial neglect.’ The point of the system is to literally allow the lateral regularities in its environment to drive its behavioural outputs in efficacious ways. The idea, in other words, is to ‘bind’ medial causal regularities to lateral causal regularities in a way that allows the system to predict and selectively ‘bind’ lateral causal regularities in its environment via behaviour. Neural systems are designed to be ‘environmentally orthogonal,’ to be the kind of parasitic information economies they need to be to effectively parasitize their environment. ‘Cognition,’ on this model, simply means ‘behaviourally effective medial enslavement.’ Environmental knowledge is environmental power.
So, to behaviourally command environmental systems, a neural system has to be perceptually commanded by those systems. The ‘orthogonal relationship’ between the neural and the environmental systems refers to the way the machinery of the former is dedicated to the predictive recapitulation of the latter via its ambient sensory effects (such as the reflection of light).
So far so good. We have medial neglect: the way the orthogonal relation between the neural and the environmental necessitates the blindness of the neural to the neural. Since the resources of the neural system are dedicated to modelling the environmental system, the neural system becomes the one part of its environment that it cannot model–at least not easily.
We also have what might be called encapsulation: the way information unavailable to the neural system cannot impact its processing. This means the neural system, at all turns and at all times, is bound by what might be called an ‘only game in town’ effect. At any given time, the information it has to go on is all the information it has to go on.
Now let’s define a ‘perspective’ as the sum of lateral information available to an environmentally situated neural system. This definition allows us to conceptualize perspectives in privative terms. Though a product of a neural system, a perspective would have no access to the neural intricacies of that system. A perspective, in other words, would have no real access to its own causal foundations. And this is just to say that a perspective on an environment would lack basic information pertaining to its natural relation to that environment.
Now a perspective, of course, is simply a view from somewhere. Given the definition above, however, we can see that it is a view from somewhere that has, for very basic structural reasons, difficulty with the ‘from somewhere.’ In particular, medial neglect means that a neural system will be unable to situate its own functions in its environmental models. This means, curiously, that a perspective, from the perspective of itself, will appear both to belong and not to belong to its environments.
Another way to put this is to say that a perspective, from the perspective of itself, will appear to be both a view from somewhere and a view from nowhere. The amazing thing about this account is that it appears to be from nowhere for the very same reason paradox pries the Laplacean demon out of somewhere into nowhere. Mechanism cannot admit a computational system that does not suffer medial neglect. The Laplacean demon, to know itself, would have to know its knowing, which means it has another knowing to know, which means it has another knowing to know, and so on, and so on. The only way to escape this regress is to posit some kind of unknown knowing. This is why the Laplacean demon, in order to know the whole universe, has to stand outside the universe–or nowhere. It cannot itself be known.
Let’s unpack this with an example.
A ‘view from somewhere,’ you could say, is a situated information channel, or in other words, an information channel possessing any number of access and processing constraints. So an ancient Sumerian astrologer with Alzheimer’s, for instance, isn’t going to know much about the moon. A ‘view from nowhere,’ conversely, is a channel possessing no access or processing constraints. Laplace’s demon can tell you everything you could possibly want to know about the moon.
Here’s the thing. It’s not that we’re believing machines, it’s that we’re machines. Even though the ancient Sumerian astrologer with Alzheimer’s doesn’t know anything about the moon, don’t try telling him that. Information is mechanical, which means that so long as it’s effective, it’s canonical. Given medial neglect, our Sumerian friend will have difficulty situating his cognitive perspective in its time and place. Certainly he will admit that he lives at a certain time and place, that he is situated, but his cognitive perspective will seem to stand outside that time and place, especially since what he knows about the moon is entirely independent of where or when he happens to find himself.
He will confuse himself, in other words, for a Laplacean demon.
There’s the stuff you have to peer around, the flags that spur you to sample various positions and relations relative to said stuff. And there’s the stuff you’ve seen enough of, the stuff without flags. The problem is, without the flags, the ‘exploratory’ systems shut down, and the information becomes detached from contextual cues. ‘Somewhere’ fades away, leaving the information–a host of canonized heuristics–hanging in the nowhere of neglect. This is our baseline, all the information we have when initiating action. Thanks to medial neglect, this baseline almost entirely eludes our ability to make explicit.
This is where a cornucopia of natural explanatory possibilities suggests itself.
1) This seems to explain, not so much why we each have our own ‘truth’ (the brute difference between our brains explains that), but why we have such difficulty recognizing that our disagreements merely pertain to those brute differences. This explains, in other words, why universalism is the assumptive default, and the ability to relativize our beliefs is a hard won cognitive achievement.
2) This seems to explain why implicit assumptions invariably trump explicit beliefs in the absence of conscious deliberation–why we’re more prone to act as we assume rather than act as we say.
3) Related to (1), this seems to explain why we suffer bias blindspots like those characteristic of asymmetric insight, why the cognitive limitations of the other tend to be glaring to the extent ours are invisible.
4) This seems to explain why ‘truth’ goes without saying, and why falsehood always takes the form of truth with a flag.
5) Perhaps this is the reason for the intuitive attraction of deflationary theories of truth: it poses in logical form the natural fact that information is sufficient unless flagged otherwise.
6) This seems to explain why contextualizing claims has the effect of relativizing them, which is to say, depleting them of ‘truth.’ Operators like ‘standpoint,’ ‘vantage,’ and ‘point-of-view’ intuitively impose any number of informatic access and processing constraints on what was, previously, a virtual ‘view from nowhere.’
7) Perhaps this is the reason propositional attitudes short-circuit compositionality.