Paradox as Cognitive Illusion
Aphorism of the Day: A blog knows no greater enemy than Call of Duty. A blogger, no greater friend.
Paradoxes. I’ve been fascinated by them since my contradictory youth.
A paradox is typically defined as a conjunction of two true, yet logically incompatible statements – which those of you with a smattering of ancient Greek will recognize in the etymology of the word (para, ‘beside’ or ‘contrary to,’ doxa, ‘opinion’). So in a sense, it would be more accurate to say that I’m fascinated by paradoxicality, that sense of ethereal torsion you get whenever you’re baffled by self-reference, as in the classic verbalization of Russell’s Set Paradox,
The barber in town shaves all those who don’t shave themselves. Does the barber shave himself?
Or the granddaddy of them all, the Liar’s Paradox,
This sentence is false.
Pondering these while doing my philosophy PhD at Vanderbilt led me to posit something I called ‘performance-reference asymmetry,’ the strange way referring to the performance of the self-same performance seemed to cramp sense, whether the resulting formulation was paradoxical or not. As in, for instance,
This sentence is true.
This led me to the notion that paradoxes were, properly speaking, a subset of the kinds of problems generated by self-reference more generally. Now logicians and linguists like to argue away paradoxes by appeal to some interpretation of the ‘proper use’ of the terms you find in statements like the above. ‘This sentence is true,’ plainly abuses the indexical function of ‘this,’ as well as the veridical function of ‘true,’ creating a little verbal golem that, you could argue, merely lurches in the semblance of semantic life. But I’ve never been interested in the legalities of self-reference or paradox so much as the implications. The important fact, it seems to me, is that self-reference (and therefore paradox) is a defining characteristic of human life. Whatever else might distinguish us from our mammalian kin, we are the beasts that endlessly refer to the performance of our referring…
Which is to say, continually violate what seems to be a powerful bound of intelligibility.
Now I know that oh-so-many see this as an occasion for self-referential back-slapping, an example of ‘human transcendence’ or whatever. For many, the term ‘aporia’ (which means ‘impasse,’ literally ‘without passage,’ in ancient Greek) is a greased pipeline delivering all kinds of super-rational goodies. I’m more interested in the impasse part. What is it about self-reference that is so damn difficult? Why should referring to the performance of our referring exhibit such peculiar effects?
Now if we were machines, we simply wouldn’t have this problem. It seems to be a brute fact of nature that an information processing mechanism cannot model its modelling as it models. Why? Simply because its resources are engaged. It can model its modelling (at the expense of fidelity) after it has modelled something else. But only after, never as.
Thus, thanks to the irreflexivity of nature, the closest a machine can come to a paradox is a loop. Well, actually, not even that, at least to the extent that ‘loops’ presuppose some kind of circularity. An information processing mechanism can only model the performance of its modelling subsequent to its modelling, which is just to say the circle is never closed, thanks to the crowbar of temporality. So rather, what we have looks more like a spiral than a loop.
Machines can only ‘refer’ to their past states simply because they need their present states to do the ‘referring.’
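The point can be sketched in a few lines of code. This is a toy illustration of my own, not a claim about any actual cognitive architecture: a trivial state machine whose only ‘self-reference’ is a record of the state it has just left behind.

```python
class Machine:
    """A toy information processor. Its 'self-model' can only ever
    capture a past state, because the present state is busy doing
    the capturing."""

    def __init__(self):
        self.state = 0          # the current processing state
        self.self_model = None  # a model of a *past* state, never the present

    def step(self, input_value):
        # Snapshot the state we are about to leave behind...
        previous = self.state
        # ...then compute the new state. By the time the self-model
        # exists, the state it describes is already gone.
        self.state = self.state + input_value
        self.self_model = previous
        return self.state

m = Machine()
m.step(3)
m.step(4)
# The self-model always lags the state doing the 'referring':
assert m.self_model == 3   # the past state that was modelled
assert m.state == 7        # the present state that did the modelling
```

However many times you iterate, the self-model never catches up to the state that produces it – the circle never closes, and you get the spiral rather than the loop.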
Can you see the creepy parallel building? Here we have all these ancient difficulties referring to the performance of our referring, and information processing machines, meanwhile, are out-and-out incapable of modelling the performance of their modelling as they model. Could these be related? Perhaps our difficulty stems from the fact that we are actually trying to do something that is, when all is said and done, mechanically impossible.
But as I said above, one of the things that distinguishes us humans from animals is our extravagant capacity for self-reference. The implicit assumption was that this is also what distinguishes us from machines.
But recall what I said above: information processing machines can only model their modelling – at the expense of fidelity – after they have modelled something else. Any post hoc models an information processing machine generates of its modelling will necessarily be both granular and incomplete, granular because the mechanical complexity required to model its modelling necessarily outruns the complexity of the model, and incomplete because ‘omniscient access’ to information pertaining to its structures and functions is impossible.
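Another toy sketch of my own, again purely illustrative: a machine whose introspective report is both granular (a coarse summary of a richer state) and incomplete (silent about the very machinery that generates the report).

```python
class IntrospectiveMachine:
    """A toy processor whose post hoc self-model is necessarily
    lossy: coarser than the state it models, and blind to the
    introspective machinery itself."""

    def __init__(self):
        self.memory = []      # the full internal state: every input seen
        self.self_model = {}  # a post hoc, compressed model of that state

    def process(self, x):
        self.memory.append(x)

    def introspect(self):
        # Granular: the report is far coarser than the state it models.
        # Incomplete: it says nothing about the introspect() machinery
        # that produced it.
        self.self_model = {
            "items_seen": len(self.memory),
            "last": self.memory[-1] if self.memory else None,
        }
        return self.self_model

m = IntrospectiveMachine()
for x in (2, 5, 9):
    m.process(x)
report = m.introspect()
assert report == {"items_seen": 3, "last": 9}  # a summary, not the state
```

The report is accurate as far as it goes, but no amount of refinement lets it recover the full memory, let alone the process that summarized it – the model is always smaller than the modeller.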
Now, of course, the life sciences tell us that the mental turns on the biomechanical – that we are machines, in effect. The reason we need the life sciences to tell us this is that the mental appears to be anything but biomechanical – which is to say, anything but irreflexive. The mental, in other words, would seem to be radically granular and incomplete. This raises the troubling but provocative possibility that our ‘difficulty with self-reference’ is simply the most our stymied cognitive systems can make of the mechanical impossibility of modelling our modelling simultaneous to our modelling.
Like any other mechanism, the brain can only model its past states, and only in a radically granular and incomplete manner, no less. Because it can only cognize itself after the fact, it can never cognize itself as it is, and so cannot cognize the interval between. In other words, even though it can model time (and so easily cognize the mechanicity of other brains), it cannot model the time of modelling, and so has to remain utterly oblivious to its own irreflexivity.
It perceives a false reflexivity, and so is afflicted by a welter of cognitive illusions, enough to make consciousness a near magical thing.
Structurally enforced myopia, simple informatic neglect, crushes flat the mechanical spiral that decompresses paradoxical self-reference. Put differently, what I called ‘paradox in the living sense’ above arises because a brain shaped like this: