Life as Alien Transmission

by rsbakker

Aphorism of the Day: The purest thing anyone can say about anything is that consciousness is noisy.

.

In order to explain anything, you need to have some general sense of what it is you’re trying to explain. When it comes to consciousness, we don’t even have that. In 1983, Joseph Levine famously coined the phrase ‘explanatory gap’ to describe the problem facing consciousness theorists and researchers. But metaphorically speaking, the problem resembles an explanatory cliff more than a mere gap. Instead of an explanandum, we have noise. So whatever explanans anyone cooks up, Tononi’s IITC, for instance, is simply left hanging. Given the florid diversity of incompatible views, the only consensus will almost certainly be that the wrong thing is being explained. The Blind Brain Theory offers a diagnosis of why this is the case, as well as a means of stripping away all the ‘secondary perplexities’ that plague our attempts to nail down consciousness as an explanandum. It clears away Error Consciousness, or the consciousness you think you have, given the severe informatic constraints placed on reflection.

So what, on the Blind Brain view, makes consciousness so frickin difficult?

Douglas Adams famously posed the farcical possibility that earth and humanity were a kind of computer designed to answer the question of the meaning of life. I would like to pose an alternate, equally farcical possibility: what if human consciousness were a code, a message sent by some advanced alien species, the Ring, for purposes known only to them? How might their advanced alien enemies, the Horn, go about deciphering it?

The immediate problem they would face is one of information availability. In normal instances of cryptanalysis, the coded message or ciphertext is available, as is general information regarding the coding algorithm. What is missing is the key, which is required to recover the original message, or plaintext, from the ciphertext. In this case, however, the alien cryptanalysts would only have our reports of our conscious experiences to go on. Their situation would be hopeless, akin to attempting to unravel the German Enigma code via reports of its existence. Arguably, becoming human would be the only way for them to access the ciphertext.
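
For readers rusty on the jargon, here is a toy sketch in Python of how the pieces normally fit together (the message, key, and ‘cipher’ are invented for illustration, nothing like a real cryptosystem): the cryptanalyst ordinarily holds the ciphertext and hunts for the key, whereas the Horn begin with neither, only reports.

```python
# A toy XOR "cipher", purely to keep the terminology straight.
# The plaintext, key, and report below are invented for illustration;
# nothing here resembles a real cryptosystem.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts or decrypts by XOR-ing each byte against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"the message the Ring encoded"      # what decryption is supposed to recover
key = b"secret"                                  # what the cryptanalyst normally hunts for

ciphertext = xor_cipher(plaintext, key)          # what the cryptanalyst normally holds
assert xor_cipher(ciphertext, key) == plaintext  # ciphertext + key -> plaintext

# The Horn's starting point: no ciphertext, no key, only a report about the
# message, which preserves none of its structure.
report = "they say there is a coded message"
```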

But say this is technically feasible. So the alien enemy cryptanalysts transform themselves into humans, access the ciphertext in the form of conscious experience, only to discover another apparently insuperable hurdle: the issue of computational resources. To be human is to possess certain on-board cognitive capacities, which, as it turns out, are woefully inadequate. The alien cryptanalysts experiment, augment their human capacities this way and that, but they soon discover that transforming human cognition has the effect of transforming human experience, and so distorting the original ciphertext.

Only now do the Horn realize the cunning ingenuity of their foe. Cryptanalysis requires access both to the ciphertext and to the computational resources required to decode it. As advanced aliens, they possessed access to the latter, but not the former. And now, as humans, they possess access to the former, but at the cost of the latter.

The only way to get at the code, it seems, is to forgo the capacity to decode it. The Ring, the Horn cryptanalysts report, have discovered an apparently unbreakable code, a ciphertext that can only be accessed at the cost of the resources required to successfully attack it. An ‘entangled observer code,’ they call it, shaking their polyps in outrage and admiration, one requiring the cryptanalyst become a constitutive part of its information economy, effectively sequestering them from the tools and information required to decode it.

The only option, they conclude, is to destroy the message.

The point of this ‘cosmic cryptography’ scenario is not so much to recapitulate the introspective leg of McGinn’s ‘cognitive closure’ thesis as to frame the ‘entangled’ relation between information availability and cognitive resources that will preoccupy the remainder of this paper. What can we say about the ‘first-person’ information available for conscious experience? What can we say about the cognitive resources available for interpreting that information?

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them. In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, building these virtual machines inevitably involves asking what information is available to which component system or process.

I call this process of information tracking the ‘Follow the Information Game’ (FIG). In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent, downstream or reentrant computational functions.
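
To make the FIG idea concrete, here is a cartoon of one such ‘virtual explanatory machine,’ sketched in Python. The components and the bits of information they trade are invented for illustration; the point is only that playing FIG means asking what information is available to which posited component, and noticing what never makes it downstream.

```python
# A cartoon 'virtual explanatory machine': three posited components, each
# discharging a function on whatever information reaches it. Playing FIG
# amounts to tracing which information is available to which component.
# The component names and the 'information' they pass along are invented.

def sensory_transduction(stimulus: str) -> dict:
    """First component: makes several kinds of information available downstream."""
    return {"edges": f"edges({stimulus})", "motion": f"motion({stimulus})"}

def object_recognition(features: dict) -> dict:
    """Second component: consumes only some of what transduction provides."""
    return {"object": f"object-from({features['edges']})"}  # 'motion' never makes it past here

def report_generation(percept: dict) -> str:
    """Third component: works only with what object recognition passes on."""
    return f"I see {percept['object']}"

available = sensory_transduction("retinal input")
available = object_recognition(available)
print(report_generation(available))  # the downstream report is blind to what was dropped
```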

The signature difference between criminal and cognitive investigations, however, is that criminal investigators typically have no stake or role in the crimes they investigate. When it comes to cognitive investigations, the situation is rather like a bad movie: the detective is always in some sense under investigation. The cognitive capacities modelled are often the very cognitive capacities doing the modelling. Now if these capacities consisted of ‘optimization mechanisms,’ devices that weight and add as much information as possible to produce optimal solutions, then the availability of information would be the only problem. But as recent work in ecological rationality has demonstrated, problem-specific heuristics seem to be evolution’s weapon of choice when it comes to cognition. If our cognitive capacities involve specialized heuristics, then the cognitive detective faces the thorny issue of cognitive applicability. Are the cognitive capacities engaged in a given cognitive investigation the appropriate ones? Or, to borrow the terminology used in ecological rationality, do they match the problem or problems we are attempting to solve?
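
For readers who haven’t run across the ecological rationality literature, the contrast can be roughly sketched as follows (the options, cues, and weights are invented for illustration): the first rule weights and adds everything available, while the second, in the spirit of heuristics such as take-the-best, decides on a single discriminating cue and ignores the rest.

```python
# A crude contrast between an 'optimization mechanism' that weights and adds
# every available cue and a frugal, problem-specific heuristic in the spirit
# of take-the-best. The options, cues, and weights are invented for illustration.

CUES = ["capital", "has_team", "on_river"]        # ordered by assumed validity
OPTIONS = {
    "A": {"capital": 1, "has_team": 0, "on_river": 1},
    "B": {"capital": 0, "has_team": 1, "on_river": 1},
}
WEIGHTS = {"capital": 0.8, "has_team": 0.6, "on_river": 0.3}

def weighted_additive(option: str) -> float:
    """Integrate everything: weight each cue and sum."""
    return sum(WEIGHTS[c] * OPTIONS[option][c] for c in CUES)

def take_the_best(a: str, b: str):
    """Stop at the first cue that discriminates; ignore the rest."""
    for cue in CUES:
        if OPTIONS[a][cue] != OPTIONS[b][cue]:
            return a if OPTIONS[a][cue] else b
    return None  # no cue discriminates

print(weighted_additive("A"), weighted_additive("B"))  # integrates every cue for each option
print(take_the_best("A", "B"))                         # decides on the first discriminating cue alone
```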

The question of entanglement is essentially this question of cognitive applicability and informatic availability. There can be little doubt that our success playing FIG depends, in some measure, on isolating and minimizing our entanglements. And yet, I would argue that the general attitude is one of resignation. The vast majority of theorists and researchers acknowledge that constraints on their cognitive and informatic resources regularly interfere with their investigations. They accept that they suffer from hidden ignorances, any number of native biases, and that their observations are inevitably theory-laden. Entanglements, the general presumption seems to be, are occupational hazards belonging to any investigative endeavour.

What is there to do but muddle our way forward?

But as the story of the Horn and their attempt to decipher the Ring’s ‘entangled observer code’ makes clear, the issue of entanglement seems to be somewhat more than a run-of-the-mill operational risk when consciousness is under investigation. The notional comparison of the what-is-it-likeness, or the apparently irreducible first-person nature of conscious experience, with an advanced alien ciphertext doesn’t seem all that implausible given the apparent difficulty of the Hard Problem. The idea of an encryption that constitutively constrains the computational resources required to attack it, a code that the cryptanalyst must become simply to access the ciphertext, does bear an eerie resemblance to the situation confronting consciousness theorists and researchers, certainly enough to warrant further consideration.