Notes Toward a Post-Normative Philosophy


Some preliminary definitions:

The conscious brain (C-neural systems): systems which process information that we are aware of at any given moment.

The accessible brain (A-neural systems): systems which process information that can be directly accessed by C-neural systems.

The inaccessible brain (I-neural systems): systems which process information that cannot be directly accessed by C-neural systems, though it can be gleaned in other ways, inferred via perception, etc.

The ‘greater brain’: the A-neural and I-neural systems not being accessed by C-neural systems at any given time, which is to say, almost all the information in the brain save an infinitesimal sliver.
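A minimal sketch, in Python, of the access relations these definitions describe–every name and datum below is an invented placeholder, not a claim about neural implementation:

```python
# Toy model of the access taxonomy defined above. All names and
# contents are illustrative inventions.

class NeuralSystem:
    def __init__(self, name, info):
        self.name = name
        self.info = info  # the information this system processes

# I-neural: information C-neural systems can never directly access.
inaccessible = NeuralSystem("I", {"motor_priors", "early_vision"})

# A-neural: directly accessible to C-neural systems, but not always accessed.
accessible = NeuralSystem("A", {"episodic_memory", "lexicon"})

# C-neural: whatever is consciously processed at this moment.
conscious = NeuralSystem("C", {"current_sentence"})

# The 'greater brain': everything in A and I not currently accessed by C,
# i.e. almost all the information in the brain save a sliver.
greater_brain = (accessible.info | inaccessible.info) - conscious.info
print(greater_brain)
```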


The question is simple: Are semantics as we presently understand them an ‘evolved discovery’ of the human brain, or are they a gerrymandered compromise?

There are at least two ways to be related to some natural law: one can cognize it, or one can exemplify it. The premise here is that abstract ‘inference structures’ such as those found in mathematics and logic are simply what happens when concrete interaction structures are exemplified from a certain parochial standpoint. Mathematics, for example, would simply be the ground floor, an information science grasped from the inside–where thought itself is the experimental apparatus.

Consider the following fable: Since genetic life is always ‘etiologically thrown’–cut off from the actual etiological chains that connect its information to its environments (the Principle of Amnesis)–it had to rely on the evolutionary selection of ‘iterables,’ devices that can be effectively deployed in a wide variety of basic environmental contexts, and then of iterability, the capacity to innovate devices that can be effectively deployed in a wide variety of environmental contexts.

Environmental variability will select devices according to range, whereas environmental continuity will select devices according to penetration (the classic evolutionary tradeoff between generalizing and specializing).

The first (effect feedback) iterables were simply bodies, morphologies (M-devices) with basic behavioural possibilities built into them. The development of basic nervous systems brought about the behavioural exploitation of environments via the deployment of neurobehavioural circuits (Nb-devices). The development of Nb-devices allowed for the social coordination of behaviours via the deployment of neurocommunicative circuits (Nc-devices). Human language is the most sophisticated Nc-device known.

Nc-devices transmit information from one brain to another. It is important to resist the (natural) urge to conceive of this information as information ‘about.’ Likewise, it is also important to resist the (natural) urge to conceive of this information as ‘rule-governed.’ We’re simply talking about a machine (at the present moment). It is also important to note that Nc-devices need only be functional or ‘virtual,’ sharing as many neural resources as possible, rather than being discrete, autonomous, or self-contained. The brain is not a warehouse of representations, but rather an ‘iteration engine,’ a ‘device device.’

Given evolutionary exigency, Nc-devices were perhaps destined to become combinatorial, to utilize the fewest elements to effectuate (transneural) engagement with the greatest number of environments. Given the evolutionary importance of range (the degree of applicability of iterables), Nc-devices were also destined to become plastic, which is to say, the brain would become increasingly efficient at generating modified devices and culling them.
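A back-of-the-envelope illustration of the range advantage of combinatorial systems–the figures are invented:

```python
# Range of a combinatorial repertoire vs. a holistic one.
# Invented figures: 40 primitive elements, combinations of length
# at most 10.
k, n = 40, 10

# Combinatorial: devices are sequences built from shared elements.
combinatorial_range = sum(k**i for i in range(1, n + 1))

# Holistic: one discrete, self-contained device per context.
holistic_range = k

print(f"{combinatorial_range:.3e} distinct devices vs. {holistic_range}")
```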

Forced through sieves of natural and neural selection, Nc-devices maintain exquisitely flexible yet rigorously structured actual and potential relationships with their neural and non-neural environments. What we call ‘semantics,’ on this cartoon, represents an ‘informatic cross-section’ of what the brain is actually doing. ‘Meaning’ (understood denotationally or otherwise), ‘correctness,’ ‘competence,’ ‘inference,’ and so on, are simply artifacts of our coarse, truncated, and tangled ‘perspective’ on our own neural computations.

BBT provides a means to understand some of the specifics of this distorted view.

According to Church’s thesis, any function that can be calculated can be computed on a Turing machine. This thesis is normally taken to demonstrate the principled inability to segregate the a priori and the a posteriori, since it is at once empirical and foundational to the contemporary understanding of first-order logic. But it also demonstrates that interaction exhausts inference, which is to say, that semantics cannot go it alone. This suggests what might be called a Semantic Asymmetry Argument: that interaction is a necessary condition of inference, but not vice versa.
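For concreteness, a minimal Turing machine sketch in Python–the transition table is an invented example, but it shows how the ‘inference’ such a machine performs is exhausted by concrete interactions of state, symbol, and movement:

```python
# Minimal Turing machine: a finite transition table plus a tape.
# The machine below is an invented example: it appends a 1 to a
# unary numeral, i.e. computes n + 1.

def run(tape, transitions, state="start"):
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")                # '_' is blank
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

transitions = {
    ("start", "1"): ("start", "1", "R"),  # scan right across the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}

print(run("111", transitions))  # '1111': 3 + 1 in unary
```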

Why might this be so?

It all comes down to data compression, not simply the elimination of redundancies, but the glossing of complexities. The system always has to respond to what actually happens, but in ways that balance cost against effectiveness. This is what makes the ‘intentional sciences’ so difficult: linguistics, for instance, purports to study human verbal communication. The study of communication, however, is the study of information transfer. But linguistics has no access whatsoever to the actual information exchanged between two brains during actual, face-to-face speech. It remains stranded at the level of syntax and semantics–which is to say, linguistic awareness.
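A toy contrast between the two kinds of compression, using Python’s standard zlib–the ‘signal’ is invented: lossless coding eliminates redundancy and keeps everything recoverable; ‘glossing’ discards complexity outright in exchange for a far cheaper code:

```python
import random
import zlib

random.seed(0)

# An invented 'signal': a steady value plus fine-grained jitter.
signal = [100 + random.randint(-3, 3) for _ in range(1000)]

# Lossless: eliminate redundancy; every detail remains recoverable.
lossless = zlib.compress(bytes(signal))

# Glossing: round away the jitter entirely. The system still responds
# to what actually happens, just at a far coarser (and cheaper) grain.
glossed = bytes(round(x / 10) * 10 for x in signal)
lossy = zlib.compress(glossed)

print(len(lossless), len(lossy))  # the glossed code is far cheaper
```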

Nc-devices are exceedingly complicated because they draw on the whole–each is a mutation of others. Why generate a new device for ‘elm’ when all you need is a different activation of ‘tree’? Yet all that is required to trigger this complexity is the simplest of associations. Sentences are Nc-devices. Arguments are Nc-devices. Stories are Nc-devices. The receiver filters all these Nc-device combinations according to non-normative ‘competency,’ ‘reliability,’ ‘propriety,’ ‘utility,’ and more.
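A toy of the ‘elm’-from-‘tree’ reuse just described–all features and weights are invented for illustration:

```python
# Toy device reuse: 'elm' is not a new, self-contained device but a
# cheap re-activation of shared 'tree' resources.

tree = {"trunk": 1.0, "branches": 1.0, "leaves": 0.9, "deciduous": 0.5}

def modulate(base, **tweaks):
    """Derive a 'new' device as a re-weighting of an existing one."""
    return {**base, **tweaks}

elm = modulate(tree, deciduous=1.0, vase_shaped=0.8)
pine = modulate(tree, deciduous=0.0, needles=1.0, leaves=0.0)

print(elm)  # mostly 'tree', plus a handful of cheap tweaks
```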

The One-Yardstick and Defector Problems: ‘natural logic’ as an Nc-device compatibility protocol, allowing brains to defer to brains, and also allowing brains to detect cheating brains. ‘Truth’ is the default because, as the basis of action, the correlated Nc-devices have to be the ‘first responders,’ and so cannot have any ‘filter’ tagged to them.

This explains the intimate relationship between ‘belief’ and ‘action’: since survival often depends on response time, certain Nc-devices must inform Nb-devices with as little intermediate processing as possible.

This explains why Truth seems to possess a ‘view from nowhere’ structure: Relativization requires excess information, that certain Nc-devices be ‘tagged’ within a system of other Nc-devices. ‘Propositional attitudes’ are just such a ‘tagging device.’ ‘True’ Nc-devices possess no such ‘tags,’ no excess information. In this sense, they carry the conscious brain’s information horizons ‘on their back,’ so to speak.
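A sketch of the tagging idea–the structure is invented, but it shows why untagged devices can serve as ‘first responders’ while tagged devices get filtered:

```python
# Toy tagging of Nc-devices. A relativized device carries excess
# information about its source or attitude; a 'true' device carries
# none, and so can feed action directly.

from dataclasses import dataclass
from typing import Optional

@dataclass
class NcDevice:
    content: str
    tag: Optional[str] = None  # e.g. 'John believes', 'it seems that'

def respond(device):
    if device.tag is None:          # untagged: the 'first responder'
        return f"ACT: {device.content}"
    return f"FILTER ({device.tag}): {device.content}"

print(respond(NcDevice("the bridge is out")))
print(respond(NcDevice("the bridge is out", tag="John believes")))
```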

The two main dimensions of neural-environmental causal interaction might be called the illatic and the nonillatic. This is analogous to the difference between passive and active sonar: brains can interact with causal systems passively, like ‘a fly on the wall,’ or actively, like an engineer. The brain typically uses illatic and nonillatic interaction in close concert: observing, doing, observing, and so on.

In each case the brain suffers encapsulation (the nonintentional correlate of ‘perspective’): it only has access to fragmentary information. This has a decisive impact on device formation. The greater brain, you might say, is always ‘causally thrown,’ or etiologically blind. In this respect you could say the conscious brain is doubly causally thrown–twice blind. The transparency of perception provides an excellent example, where we are both laterally and medially blind to our causal horizons.

‘Lateral’ and ‘medial’ simply refer to the way the conscious brain tags the causal provenance of the information it processes. The lateral is the environmental, the information fed forward from its ancient and extremely powerful perceptual processors, whereas the medial is the neural, the dim informatic motley that is scavenged from systems other than the perceptual. We can assume that the ‘brightness’ of the lateral/environmental and the ‘dimness’ of the medial/neural indicate the relative evolutionary importance of recursive neural processing (C-neural system access) with respect to each. (One can imagine ‘medial brights and lateral dims,’ a species possessing only low-resolution environmental awareness, but high-resolution neural awareness. You could imagine rewiring environmental processors to do neural work, leading to a kind of ‘extreme synaesthesia,’ things like philosophers who literally dream ideas.)

This explains the crucial cognitive importance of notation: it allows the lateralization of medial processes. Quite literally, it allows us to think at right angles to our thinking. So the implicit natural laws of computation (which only seem ‘a priori’ because we embody them) that evolution had stumbled upon could be submitted to cognition–rendered explicit–and progressively refined in a manner not so different from flint spearheads. We evolved to be Nb-device innovators, to work environmental information in order to work environments.

‘Making explicit’ is simply a kind of neurorecursive device formation, a ‘making available’ to the conscious brain (which feeds it back into the accessible greater brain).

To say that the brain is a ‘prediction machine’ is to say that it is an abstraction machine. It’s a kind of sieve, progressively extracting pattern and structure, compressing more and more information into device-devices that best approximate what might be called the Optimal Intervention Ratio: the dynamic applicability of information to the most environments for the least metabolic cost. The brain is what Chaitin might call a ‘theory machine.’ It is open, both in terms of inputs and outputs, yet it is encapsulated, which is to say, informatically localized.
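One invented way to operationalize this ratio, in the spirit of Chaitin’s algorithmic information theory: score candidate ‘theories’ by description length plus prediction error, a minimum-description-length heuristic. The data, the candidates, and the cost weights below are all illustrative assumptions:

```python
# A minimum-description-length toy in the spirit of the 'theory
# machine': prefer the device that covers the data at the least
# total cost (size of the theory plus size of its errors).

data = [2, 4, 6, 8, 10, 12]

theories = {
    "lookup table": (len(data), lambda i: data[i]),  # big, exact
    "n -> 2(n+1)":  (1, lambda i: 2 * (i + 1)),      # tiny, exact
    "always 7":     (1, lambda i: 7),                # tiny, wrong
}

def cost(size, predict):
    error = sum(abs(predict(i) - x) for i, x in enumerate(data))
    return size + error  # crude proxy for metabolic cost

best = min(theories, key=lambda name: cost(*theories[name]))
print(best)  # 'n -> 2(n+1)': the most range for the least cost
```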

What we call ‘consciousness’ could very well be a kind of ‘recursive compression interface,’ one likely developed in the course of evolving language (which requires the compression of vast amounts of information into a linear behavioural code, be it visual (signing, writing) or auditory (speech)) then stretched in other directions as knock-on advantages accrued.

If this is what consciousness is, then we should expect to find evidence of compression everywhere we turn–and so we do, as the Blind Brain Theory makes clear. It postulates that most of the mysteries of consciousness–and even philosophy more generally–are artifacts of fundamental ‘Compression Heuristics,’ ways evolution has forced the greater brain, and the conscious brain more specifically, to make various informatic tradeoffs, the specifics of which only psychology and neuroscience can determine.

If this is right, it utterly transforms philosophy into the activity of interpreting the conceptual out of the conceptual and the phenomenological out of the phenomenological.