A Brick o’ Qualia: Tononi, Phi, and the Neural Armchair

Aphorism of the Day: The absence of light is either the presence of dark–or death. For every decision made, death is the option not taken.

Aphorism of the Day II: Things we see through: eyes, windows, words, images, thoughts, lies, lingerie, and excuses.

.

So Giulio Tononi’s new book Phi: A Voyage from the Brain to the Soul has been out for a few weeks now, and I’ve had this ‘review’ coalescing in my brain’s gut (the reason for the scarequotes should become evident in due course). In the meantime, as fate would have it, I’ve stumbled across several reviews of the book, including one that is genuinely philosophically savvy, as well as several other online considerations of his theory of consciousness. And of course, everyone seems to have an opinion quite the opposite of my own.

First, I should say that this book is written for the layreader: it is, in fact, the most original, beautiful general interest book on consciousness I’ve read since Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid – a book I can’t help but think provided Tononi with more than a little inspiration – as well as a commercial argument to get his publishers on board. Because on board they most certainly were: Phi is literally one of the most gorgeous books I have ever purchased, so much so that ‘book’ doesn’t seem to do it justice. Volume would be a better word! The whole thing is printed on what looks like #100 gloss text paper. Posh stuff.

Anyway, if you’re one of my fiction readers who squints at all this consciousness stuff, this is the book for you.

What makes this book extraordinary is the way it ‘argues’ across numerous noncognitive registers. Tononi, with the cooperation of his publisher, put a great deal of effort into crafting the qualia of the book, to create, in a sense, a kind of phenomenal ‘argument.’ It’s literally bursting with imagery, a pageant of photographic plates that continually frame the text. He writes with a kind of pseudo-Renaissance diction, hyperbolic, dense with cultural references, and downright poetic at times. He uses a narrative and dialogic structure, taking Galileo as his theoretical protagonist. With various guides, the father of science passes through a series of episodes with thinly disguised historical interlocutors, some of them guides, others mere passersby. This is obviously meant to emulate Dante’s Inferno, but sometimes, unfortunately, struck me as more reminiscent of “A Christmas Carol.” Following each of these episodes, he provides ‘Notes,’ which sometimes clarify and other times contradict the content of the preceding narrative and dialogue, generating a number of postmodern effects in genuinely unprecedented ways. Phi, in other words, is entirely capable of grounding thoroughly literary readings.

The result is that his actual account, the Information Integration Theory of Consciousness (IITC), is deeply nested within a series of ‘quality intensive’ expressive modes. The book, in other words, is meant to be a kind of tuning fork, something that hums with the very consciousness that it purports to explain. A brick o’ qualia…

An exemplar of Phi itself, the encircled ‘I’ of information.

So at this expressive level, at least, there is no doubting the genius of the book. Of course there are many things I could quibble about (including sexism, believe it or not!) but they strike me as too idiosyncratic to belong in a review meant to describe and evaluate the book for others.

What I’ve found so surprising these past weeks is the apparent general antipathy to IITC in consciousness research circles, when personally, I class it in the same category as its main scientific competitors, like Bernard Baars’ Global Workspace theory of consciousness. And unlike pretty much everyone I’ve read, I think Tononi’s account of qualia (the term philosophers use for the purely phenomenal characteristics of consciousness, the redness of red, and so on) can actually do some real explanatory work.

Most seem to agree with Peter Hankins’ assessment of IITC on Conscious Entities, which boils down to ‘but red ain’t information’! Tononi, I admit, does have the bad habit of conflating his primary explanans with his explanandum (and thus flirting with panpsychism), but I don’t think he’s arguing that red is information so much as he’s arguing that information integration can explain red as much as it needs to be explained.

Information integration builds on Gerald Edelman’s guiding insight that whatever consciousness is, it has something to do with differentiated unity. ‘Phi’ refers to the quantity of information (in its Shannon-Weaver incarnation) a system possesses over and above the information possessed by its component parts. One photodiode can be either on or off. Add another, and all you have are two photodiodes that are on or off. Since they are disconnected, they generate no information over and above on/off. Integrate them, which is to say, plug them into a third system, and suddenly the information explodes: on/on, on/off, off/on, off/off. Integrate another, and you have: on/on/on, on/on/off, on/off/off, off/off/off, off/off/on, off/on/on, off/on/off, on/off/on. Integrate another and… you get the picture.
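For the programming-inclined, here’s a minimal Python sketch of the combinatorial point. To be clear, this is my own toy, not Tononi’s phi calculus: it only counts the joint repertoire of an integrated system, it doesn’t measure integration over and above the parts.

```python
from itertools import product
from math import log2

# Each photodiode on its own distinguishes only two states: one bit, and nothing
# over and above that. Integrated into a single system, the joint repertoire of
# distinguishable states grows as 2**n, so the bits specified by the whole
# explode accordingly.
for n in range(1, 5):
    joint_repertoire = list(product(["on", "off"], repeat=n))
    print(f"{n} integrated diode(s): {len(joint_repertoire)} joint states, "
          f"{log2(len(joint_repertoire)):.0f} bits")
# 1 integrated diode(s): 2 joint states, 1 bits
# 2 integrated diode(s): 4 joint states, 2 bits
# 3 integrated diode(s): 8 joint states, 3 bits
# 4 integrated diode(s): 16 joint states, 4 bits
```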

Tononi argues that consciousness is a product of the combinatorial explosion of possible states that accompanies the kind of neuronal integration that seems to be going on in the thalamocortical system of the human brain. And he claims that this can explain what is going on with qualia, the one thing in consciousness research that seems to be heavier than Thor’s hammer.

Theoretically speaking, this puts him in a pretty pickle, because when it comes to qualia, two warring camps dominate the field: those who think qualia are super special, and those who think qualia are not what we make of them, conceptually incoherent, or impossible to explain without begging the question. Crudely put, the problem Tononi faces with the first tribe is that as soon as he picks the hammer up, they claim that it wasn’t Thor’s hammer after all, and the problem he faces with the second tribe is that they don’t believe in Thor.

The only safe thing you can say about qualia is that they are controversial.

Tononi thinks the explanation will look something like:

The many mechanisms of a complex, in various combinations, specify repertoires of states they can distinguish within the complex, above and beyond what their parts can do: each repertoire is integrated information–each an irreducible concept. Together they form a shape in qualia space. This is the quality of experience, and Q is its symbol. (217)

The reason I think this notion has promise lies in the way it explains the apparent inexplicability of things like red. And this, to me, seems as good a place to begin as any. Gary Drescher, for instance, argues that qualia should be understood by analogy to gensyms in Lisp programming. Gensyms are elements that are inscrutable to the program outside of their distinction from other elements. Lisp can recognize only that a gensym is a gensym, and none of its properties.

Similarly, we have no introspective access to whatever internal properties make the red gensym recognizably distinct from the green; our Cartesian camcorders are not wired up to monitor or record those details. Thus we cannot tell what makes the red sensation redlike, even though we know the sensation when we experience it. (Good and Real, 81-2)

Now I think this analogy fails in a number of other respects, but what gensyms do is allow us to see the apparent inexplicability of qualia as an important clue, as a positive feature possessing functional consequences. Qualia qua qualia are informatically impoverished, ‘introspectively opaque,’ so much so you might almost think they belonged to a system that was not designed to cognize them as qualia – which, as it turns out, is precisely the case. (Generally speaking, theoretical reflection on experience is not something that will get you laid). So in a sense, the first response to the ‘problem of qualia’ should be, Go figure. Given the exorbitant metabolic cost of neural processing, we should expect qualia to be largely inscrutable to introspection.
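Since Drescher’s analogy is to Lisp, here is a rough Python stand-in (my own sketch, not Drescher’s code, and only meant to display the kind of inscrutability he points to): a ‘gensym’ is a token the program can recognize and distinguish, but whose inner make-up is simply unavailable to it.

```python
class Gensym:
    """A token distinguishable only by identity, with nothing to introspect.

    A rough Python analogue of a Lisp gensym: the program can tell that RED
    is RED and is not GREEN, but nothing about what makes them distinct is
    available to it.
    """
    __slots__ = ()  # no attribute dictionary, hence no inspectable properties

    def __repr__(self):
        return f"<gensym {id(self):#x}>"


RED, GREEN = Gensym(), Gensym()

assert RED is RED        # recognizable whenever it recurs
assert RED is not GREEN  # reliably distinct from every other gensym
# ...and that exhausts what the program can say about RED: it carries no
# properties that would explain how it differs from GREEN, only that it does.
```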

For Tononi, Q-space allows you to understand this inscrutability. Red is a certain dedicated informatic configuration (‘concept’) that is periodically plugged into the larger, far more complex succession of configurations that occupy the whole.

Now for all its complexity, it’s important to recall that our brains are overmatched by the complexity of our environments. Managing the kinds of systematic relationships with our environments that our brain does requires a good deal of complexity reduction, heuristic mechanisms robust enough to apply to as many circumstances as possible. So a palette of environmental invariants is selected according to the whims of reproductive success, which then forms the combinatorial basis for ‘aggregate heuristic mechanisms’ (or ‘representations’) capable of systematically interacting with more variant, but recurrent, features of the environment.

So red helped our primate ancestors identify apples. As thalamocortical complexity increased, it makes sense that our cognitive capacities would adapt to troubleshoot things like apples instead of things like red, simply because the stakes of things like light reflected at 650nm are low compared to things like apples. Qualia, you could say, are existentially stable. Redness doesn’t ambush or poison or bloom or hang from perilous branches. It makes sense that the availability of information and corresponding cognitive resources would covary with the ‘existential volatility’ of a given informatic configuration (prerepresentational or representational).

What Tononi gets is that red engages the global configuration in a fixed way, one that does not allow it nearly so many ‘degrees of dynamic reconfiguration’ relative to red as it enjoys relative to apples. Okay, so this last bit isn’t so much Tononi as the way IITC plugs into the Blind Brain Theory (BBT). But his insight provides a great starting point.

So what explains the ‘redness’ of red, the raw, ineffable feel of pain? This is where qualiaphiles will likely want to jump ship. From Tononi’s Q-space perspective, a given space (heuristic configuration) simply is what it is – ‘irreducible,’ as he puts it. Thanks to evolution, we inherited a wild variety of differentiating shapes, or qualia, by happenstance. If you want to understand what makes red red, let me refer you to the anthropic principle. It’s part of basic cable. These are simply the channels available when cable first got up and running.

Returning to BBT, the thing to appreciate here is what I call encapsulation. Even though the brain is an open system, conscious experience only expresses information that is globally broadcast or integrated. If it is the case that System 2 deliberation (reflection) is largely restricted to globally broadcast or integrated information, then our reasoning is limited to what we can consciously experience. Our senses, of course, provide a continuous stream of environmental information which finds itself expressed in transformations of aggregate heuristic configurations, representations. With apples we can vary our informatic perspective and sample hitherto unavailable information to leverage the various forms of dynamic reconfiguration that we call cognition.

Not so with red. Basic heuristic configurations (combinatorial prerepresentations or qualia) are updated, certainly. Green apples turn red. Blood dries to brown. But unlike apples, we can never get up and look at the backside of red, never access the information required to effect the various degrees of dynamic reconfiguration required for cognition.

It’s a question of informatic ‘perspective.’ With qualia we are trapped in our neural armchair. The information available to System 2 deliberation (reflection) is simply too scant (and likely too mismatched to the heuristic demands of environmental cognition) to do anything but rhapsodize or opine. Red is too greased and cognition too frostbitten to do the juggling that knowledge requires. (Where science is in the business of economizing excesses of information, phenomenology, you could say, is in the business of larding its shortage).

But this doesn’t mean that qualia can’t be naturalistically explained. I just offered an outline of a possible explanation above. It just means that qualia are fundamentals of our cognitive system in a manner perhaps similar to the way the laws of physics are fundamentals of the universe. (And it doesn’t mean that an attenuated ‘posthuman’ brain couldn’t be a radical game changer, providing our global configuration with the cognitive resources required to get out of our neural armchair and ‘scientifically’ experiment with qualia). The qualification ‘our cognitive system’ above is an important one. What qualia share with the laws of physics has to do with encapsulation, which is to say, constraints on information availability. What qualia and the laws of physics share is a certain informatic inscrutability, an epistemological profile rather than an ontological priority. The same way we can’t get out of our neural armchair to see the backside of red, we can’t step outside the universe to see the backside of the Standard Model.*

But the fact is the kind of nonsemantic informatic approach I’m taking here marks a radical departure from the semantic approaches that monopolize the tradition. Peter, in his Conscious Entities critique of IITC linked above, references Frank Jackson’s famous thought experiment of Mary, the colour-deprived neuroscientist. The argument asks us to assume that Mary has learned every physical fact about red there is to know while sequestered in a black and white environment. The question is whether she learns a new fact, namely what red looks like, when she encounters and so experiences red for the very first time. If the answer is yes, as intuition wants to suggest, then it seems that qualia constitute a special kind of nonphysical fact, and that physicalism is accordingly untrue.

As Peter writes,

And this proves that really seeing red involves something over and above the simple business of wavelengths and electrical impulses. Doesn’t it? No, of course not. Mary acquired no new knowledge when she saw the rose – she had simply had a new experience. Focussing too exclusively on the role of the senses as information gatherers can lead us into the error of supposing that to experience a particular sight or sound is merely to gain some information. If that were so, reading the label on a bottle of wine would be as enjoyable as drinking it. Of course experiencing something allows us to generate information about it, but we also experience the reality, which in itself has nothing to do with information.

The reason he passes on IITC is that he thinks qualia obviously involves something over and above ‘mere information,’ what he calls the ‘reality’ of the experience. This is a version of a common complaint you find levelled against Tononi and IITC, the notion that information and experience are obviously two different things – otherwise, as Peter says, “reading the label on a bottle of wine would be as enjoyable as drinking it.” Something else has to be going on.

This is an example of a demand I have only ever seen in qualia debates: the notion that the explanans must somehow be the explanandum. Critics always focus on how strange this demand looks when mapped onto other instances of natural explanation. Should chemical notations explaining grape fermentation get us drunk? Should we reject them because they don’t? But the interesting question, I think, is why this move seems so natural in this particular domain of inquiry. Why, when we have no problem whatsoever with the explanatory power of information regarding physical phenomena, do we suddenly balk when it’s applied to the phenomenal?

In fact, it’s quite understandable given the explanation I’ve given above. Rather than arising as an artifact of the radical (and quite unexplained) disjunct between mechanistic and phenomenal conceptualities as most seem to assume, the problem lies instead with the neural armchair. The thing to realize (and this is the insight that BBT generalizes) is that qualia are as much defined by their informatic simplicity as they are by the information they provide. Once again, qualia are baseline heuristics (prerepresentations): like gensyms, they are defined by the information they lack. Qualia are those elements of conscious experience that lack a backside. Since the province of explanation is to provide information, to show the backside, as it were, there is a strange sense in which we should expect our explanations to jar with our phenomenal intuitions.

Rethinking the Mary argument in nonsemantic informatic terms actually illustrates this situation in rather dramatic fashion. So Mary has, available for global broadcasting or integration (conscious processing), representations (knowledge of the brain as object) leveraged via prerepresentational systems lacking any colour. Suddenly her visual systems process information secondary to light with a wavelength of 650nm. Her correlated neurophysiology lights up. In informatic terms, we have two different sets of channels–one ‘access’ and one ‘phenomenal’–performing a variety of overlapping and interlocking functions matching her organism to its environments. For the very first time in her brain’s history, red is plugged into this system and globally broadcast or integrated, becoming available for conscious experience. She sees ‘red’ for the very first time.

Certainly this constitutes a striking change in her cognitive repertoire, and so, one would think, knowledge of the brain as subject.

From a nonsemantic informatic perspective, the metaphysical implications (the question of whether physicalism is true) are merely symptomatic of what is really interesting. The Mary argument raises an artificial barrier between what are otherwise integral features of cognition, and so pits a fixed prerepresentational channel against a roaming, representational one. Through it, Jackson manages to produce a kind of ‘conceptual asymbolia,’ a way to calve phenomenality from thought in thought, and so throw previously implicit assumptions/intuitions into relief.

The Mary Argument demonstrates something curious about the way information that makes it to global broadcasting or integration (conscious awareness) is ‘divvied up’ (while engaging System 2 deliberation (reflection), at any rate). The primary intuition it seems to turn on, the notion that ‘complete physical knowledge’ is possible absent prerepresentational components such as red, suggests a powerful representational bias, to the point of constituting a kind of informatic neglect. We have already considered how red is dumbmute, like a gensym. We have also considered the way deliberative cognition possesses a curious insensitivity to information outside its representational ambit. In rank intentional terms, you could say we are built to look through. The informatic role of qualia is left mysterious, unintegrated, unbroadcast–almost entirely so. We might as well be chained in Plato’s cave where they are concerned, born into them, unable to vary our perspective relative to them.

The Mary argument, in other words, doesn’t so much reveal the limitations of physicalism as it undermines the semantic assumptions that underwrite it. Of course ‘seeing red’ provides Mary with a hitherto unavailable source of information. Of course this information, if globally broadcast or integrated, will be taken up by her cognitive systems, dynamically reconfiguring ‘K-space,’ the shape of knowledge in her brain. The only real question is one of why we should have so much difficulty squaring these platitudinal observations with our existing understanding of knowledge.

The easy answer is that these semantic assumptions are themselves prerepresentational heuristics, kluges, if you will, selected for their robustness, and matched (in the ecological rationality sense) to our physical-environmental cognitive systems. But this is a different, far more monstrous story.

Ultimately, the thing to see is that Tononi’s Phi is a kind of living version of the Mary Argument. He gives us a brick o’ qualia, a book that fairly throbs with phenomenality, so seating us firmly in our neural armchair. And through the meandering of rhapsody and opinion, he gives our worldly cognitive systems something to fasten onto, information nonsemantically defined, allowing us, at long last, to set aside the old dualisms, and so range from nature to the soul and back again, however many times it takes.

Notes:

* I personally don’t think qualia are the mystery everyone makes them out to be, but this doesn’t mean I think the hard problem is solved – far from it. The question of why we should have these informatically dumbmute qualia at all remains as much a burning mystery as ever.

Less Than ‘Zero Qualia’: Or Why Getting Rid of Qualia Allows us to Recover Experience (A Reply to Keith Frankish)

Aphorism of the Day: Here, it turns out, is so bloody small that even experience finds itself evicted and housed over there.

.

From Philosophy TV:

Richard Brown: And you know there is a–I don’t want to say growing movement–but there is a disturbing undercurrent [laughs] of philosophers who are out and saying that they are in fact zombies. So I don’t know if you are aware of this or not but…

Keith Frankish: I’m… [laughs] Not phenomenally.

Richard Brown: Okay… [laughs]

Keith Frankish: [laughs] Yes, I might align myself with this ‘disturbing undercurrent.’

.

I think philosophy of mind–as an institution–is caught in a great dilemma: either it accepts the parochial, heuristic nature of intentional cognition, or it condemns itself to never understanding human consciousness. This was the basis of my interpretation of Frank Jackson’s Mary argument as a ‘heuristic scope of application detector,’ a way to make the limits of human environmental cognition known. Why does it seem possible for Mary to know everything about red without ever having experienced red? Why does the additional information provided by experiencing red not obviously count as ‘knowledge’? In other words, why the conflict of intuitions?

The problem, in a nutshell, has to do with informatic neglect (see my previous post for more detail). Heuristic cognition leverages computational efficiencies by ignoring information. Intentional cognition, in particular, systematically neglects all the neurofunctional information pertaining to our environmental tracking. In a sense, this is all that ‘transparency’ is: blindness to the mechanisms responsible for environmental cognition. Given the functional independence of our environments, neglecting this information pays real computational dividends. Given reliable tracking systems, information regarding those systems is not necessary to cognize the systems tracked, but only so long as those systems tracked are not ‘functionally entangled’ with the systems tracking. You can puzzle through a small engine repair because the systems doing the tracking in no way interfere with the system tracked. What you might call the medial causal relations that enable you to repair small engines in no way impinge on the lateral causal relations that make engines break down or run.

This is why intentional cognition is almost environmentally universal, simply because the environmental systems tracked are almost universally functionally independent of our cognition. I say ‘almost,’ of course, because on the microscopic level this functional independence breaks down as the lateral systems tracked become sensitive to ‘interference’ from medial systems tracking: if photons leave small engines untouched, they have dramatic effects on subatomic particles. This is also why intentional cognition can only get consciousness wrong. When we attempt to cognize conscious experience, we have an instance of a cognitive system that systematically neglects medial causal relationships attempting to track a functionally entangled system as if it were independent. The lateral and the medial are one and the same in these instances of attempted cognition, which quite simply means that neither can be cognized or ‘intuited.’

And this, on the Blind Brain Theory (BBT), is the primary hook from which the ‘mind/body’ problem hangs. What we ‘cognize’ when we draw conscious experience into deliberative cognition is quite literally analogous to Anton’s Syndrome: we think we see everything there is to be seen, and yet we really don’t see anything at all. Consciousness, as it appears to us, is a kind of ‘forced perspective’ illusion. Given that we are brainbound, or functionally entangled, and given the environmental orientation of our cognitive systems, we have no way to ‘intuit’ consciousness absent gross distortions. As such, consciousness as it appears is literally inexplicable, period, let alone in natural terms. It can only be explained away, leaving a remainder, consciousness as it is, as the only thing science need concern itself with.

In this post, I want to consider a recent ‘radical position’ in the philosophy of mind, that belonging to Keith Frankish, and show 1) the facility with which his argument can be recapitulated, even explained, in BBT terms; and 2) how it is nowhere near radical enough.

In his “Quining Diet Qualia,” Frankish notes that defences of what he terms ‘classic qualia,’ understood as “introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective” (1-2) have largely vanished from the literature, primarily because ‘intrinsic properties’ resist explanation in either functional or representational terms. Instead, theorists have opted for a ‘watered-down conception’ of qualia in terms of “phenomenal character, subjective feel, raw feel, or ‘what-is-it-likeness’” (2), what Frankish calls ‘diet qualia.’ The idea is that talking about qualia in these terms makes them palatable to both dualists and physicalists, or ‘theory-neutral,’ as Frankish puts it, since everyone assumes that qualia, in this restricted sense at least, are real.

But Frankish doubts that qualia make sense in even this minimal sense. To illustrate his suspicion, he introduces the concept of ‘zero qualia,’ which he defines as those “properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective” (4). His strategy will be to use zero qualia to show that diet qualia don’t differ from classic qualia in any meaningful sense.

Now, one of the things that caught my eye in this paper was the striking resemblance between zero qualia and my phenophage thought experiment from several weeks back:

Imagine a viscous, gelatinous alien species that crawls into human ear canals as they sleep, then over the course of the night infiltrates the conscious subsystems of the brain. Called phenophages, these creatures literally feed on the ‘what-likeness’ of conscious experience. They twine about the global broadcasting architecture of the thalamocortical system, shunting and devouring what would have been conscious phenomenal inputs. In order to escape detection, they disconnect any system that could alert its host to the absence of phenomenal experience. More insidiously still, they feed-forward any information the missing phenomenal experience would have provided the cognitive systems of its host, so that humans hosting phenophages comport themselves as if they possessed phenomenal experience in all ways. They drive through rush hour traffic, complain about the sun in their eyes, compliment their spouses’ choice of clothing, ponder the difference between perfumes, extol the gustatory virtues of their favourite restaurant, and so on. (TPB 21/09/2012)

By defining zero qualia in terms of their cognitive effects, Frankish has essentially generated a phenophagic concept of qualia–which is to say, qualia that aren’t qualitative at all. I-know-I-know, but before you let that squint get the better of you, consider the way this conceptualization recontextualizes the supposedly minimal commitment belonging to diet qualia. By detaching the supposed cognitive effects of phenomenality from phenomenality, zero qualia raise the question of just what this supposedly neutral ‘phenomenal character’ is. As Frankish puts it, “What could a phenomenal character be, if not a classical quale? How could a phenomenal residue remain when intrinsicality, ineffability, and subjectivity have been stripped away?” (4). Zero qualia, in other words, have the effect of showing that diet qualia, despite the label, are packed with classic calories:

The worry can be put another way. There are competing pressures on the concept of diet qualia. On the one hand, it needs to be weak enough to distinguish it from that of classic qualia, so that functional or representational theories of consciousness are not ruled out a priori. On the other hand, it needs to be strong enough to distinguish it from the concept of zero qualia, so that belief in diet qualia counts as realism about phenomenal consciousness. My suggestion is that there is no coherent concept that fits this bill. In short, I understand what classic qualia are, and I understand what zero qualia are, but I don’t understand what diet qualia are; I suspect the concept has no distinctive content. (4-5)

Frankish then continues to show why he thinks various attempts to save the concept are doomed to failure. The dilemma is structured so that either the proponent of diet qualia takes the further step of defining ‘phenomenal character,’ a conceptual banana peel that sends them skidding back into the arms of classic qualia, or they explain why dispositions aren’t what they really meant all along.

Now on the BBT account, qualia need to be rethought within a consciousness and cognition structured and fissured by informatic neglect. The heuristic nature of intentional cognition means that medial neurofunctionality is always neglected. And as I said above, this means deliberative reflection on conscious experience constitutes a clear cut ‘scope violation,’ an instance of using a heuristic to solve a problem it never evolved to tackle. Introspective intentional cognition, on this account, is akin to climbing trees with flippers.

Of course it doesn’t seem this way–quite the opposite in fact–and for reasons that BBT predicts. Like medial neurofunctionality, the limits of intentional cognition are also lost to neglect. Short of learning those limits (the scope of applicability of intentional cognition), universality is bound to be the default assumption. So our intentional cognitive systems make sense of what they can, oblivious of their incapacity. The ease with which they conjure worlds out of pixels and paint, for instance, demonstrates their power and automaticity. BBT suggests that something analogous happens when intentional cognition is fed metacognitive information: the information is organized in a manner amenable to intentional, environmental cognition.

As asserted above, the point of the intentional heuristic is to isolate and troubleshoot lateral environmental relations (normative or causal) against a horizon of variable information access. Thus it ‘lateralizes,’ you could say, the first-person, turns it into a little environment. The problem is that this ‘phenomenal environment’ literally possesses no horizon of variable access (cognition is functionally entangled, or ‘brainbound,’ with reference to experience) and, thanks to the interference of the medial neurofunctionality neglected, no lateral causal relationships. Like Plato’s cave-dwellers, intentional cognition is quite simply stuck with information it cannot cognize. ‘Phenomenal character’ becomes a round peg in a world of cognitive squares: as it has to be on the BBT account.

By making the move to ‘cognitive dispositions,’ zero qualia bank on our scientific knowledge of the otherwise neglected axis of medial neurofunctionality. The challenge, for the diet qualia advocate, is to explain how phenomenal character anchors this medial neurofunctionality (understood as cognitive dispositions), to explain, in other words, what role ‘phenomenal character’ plays–if any. But of course, thanks to the heuristic short-circuit described above, this is precisely what the diet qualia advocate cannot do. The question then becomes, of course, one of what ‘diet’ amounts to. Either one moves inside the black box and embraces classic qualia or one moves outside it and settles for zero qualia.

But of course, neither of these options is tenable either. Dispositional accounts, though epistemologically circumspect, have a tendency to be empirically inert: the job of science is to explain dispositions, which is to say, use theory to crack open black boxes. Epistemological modesty isn’t always a virtue. And besides, there remains the fact that we actually do have these experiences!

Frankish’s real point, of course, is that philosophy of mind has made no progress whatsoever in the move to diet qualia, that phenomenality remains as impervious as ever to functional or representational explanation and understanding. But he remains as mystified as everyone else about the origins and dynamics of the problem. I would append, ‘only more honestly so,’ were it not for claims like, “I think everyone agrees that zero qualia exist,” in the interview referenced above. I certainly don’t, and for reasons that I think should be quite clear.

For one, consider how his ‘cognitive dispositions’ only run one way, which is to say, from the black box of phenomenality, when the medial neurofunctionality occluded by metacognitive deliberation almost certainly runs back and forth, or in other words, is exceedingly tangled. And this underscores the artificiality of zero qualia, the way they can only do their intuitive work by submitting to what is a thoroughly distorted understanding of conscious experience in the first place. The very notion that phenomenal character can be ‘boxed,’ cleanly parsed from its cognitive consequences, is an obvious artifact of neurofunctional informatic neglect, the way intentional cognition automatically organizes information for troubleshooting.

On the BBT account, the problem lies in the assumption that intentional cognition is universal when it is clearly heuristic, which is to say, an information neglecting problem-solving device adapted to specific problem-solving contexts. The ‘qualia’ that everyone has been busily arguing about and pondering in consciousness research and the philosophy of mind are simply the artifacts of a clear (once you know what to look for) heuristic scope violation. There are no such things, be they classic, diet, or zero.

Now given that the universality of intentional cognition is the default assumption of nearly every soul reading this, I’m certain that what I’m about to say will sound thoroughly preposterous, but I assure you it possesses its own, counterintuitive yet compelling logic (once you grasp the gestalt, that is!). I want to suggest that it makes no more sense to speak of qualia ‘existing’ than it does to speak of individual letters ‘meaning.’ Qualia are subexistential in the same way that phonemes are ‘subsemantic.’

But they must be something! your intuitions cry–and so they must, given that intentional cognition is blind to its heuristic limits, to the very possibility that it might be parochial. It has no other choice but to treat the first-person as a variant of the third, to organize it for the kinds of environmental troubleshooting it is adapted to do. After all, it works everywhere else: Why not here? Well, as we have seen, because qualia are neurofunctionally integral to the effective functioning of intentional cognition, they are a medial phenomenon, and as such are utterly inaccessible to intentional cognition, given the structure of informatic neglect that characterizes it.

But this doesn’t mean we can’t understand them, that McGinn and the Mysterians are correct. McGinn, you could say, glimpsed the way phenomenality might exceed the reach of intentional cognition while still assuming that the latter was humanly universal, that we couldn’t gerrymander ways to see around our intuitions, as we have, for example, with general relativity or quantum mechanics.

Consciousness presents us with precisely the same dilemma: cling to heuristic intuitions that simply do not apply, or forge ahead and make what sense of these things we can. If the concept ‘existence’ belongs to some heuristic apparatus, then the notion that qualia are subexistential is merely counterintuitive. Otherwise, relieved of the need to force them into a heuristic never designed to accommodate them, we can make very clear sense of them as phenomemes, the combinatorial building blocks of ‘existence,’ the way phonemes are the combinatorial building blocks of ‘meaning.’ They do not ‘exist’ the way apples, say, exist in intentional cognition, simply because they belong to a different format. ‘What is redness?’ makes no sense if we ask it in the same intuitive way we ask, ‘What are apples?’ The key, again, is to avoid tripping over our heuristics. Though redness eludes the gross, categorical granularity of intentional cognition, we can nevertheless talk apples and rednesses together in terms of nonsemantic information–which is just to say, in terms belonging to what the life sciences take us to be: evolved, environmentally-embedded, information processing systems.

Because of course, the flip side of all this confusion regarding qualia is the question of how a mere machine can presume to ‘know truth,’ as opposed to happening to stand in certain informatic relationships with its environments, some effective, others not. When it comes to conundrums involving intentionality, qualia are by no means lonely.

Life as Alien Transmission

Aphorism of the Day: The purest thing anyone can say about anything is that consciousness is noisy.

.

In order to explain anything, you need to have some general sense of what it is you’re trying to explain. When it comes to consciousness, we don’t even have that. In 1983, Joseph Levine famously coined the phrase ‘explanatory gap’ to describe the problem facing consciousness theorists and researchers. But metaphorically speaking, the problem resembles an explanatory cliff more than a mere gap. Instead of an explanandum, we have noise. So whatever explanans anyone cooks up, like Tononi’s IITC, for instance, is simply left hanging. Given the florid diversity of incompatible views, the consensus will almost certainly be that the wrong thing is being explained. The Blind Brain Theory offers a diagnosis of why this is the case, as well as a means of stripping away all the ‘secondary perplexities’ that plague our attempts to nail down consciousness as an explanandum. It clears away Error Consciousness, or the consciousness you think you have, given the severe informatic constraints placed on reflection.

So what, on the Blind Brain view, makes consciousness so frickin difficult?

Douglas Adams famously posed the farcical possibility that earth and humanity were a kind of computer designed to answer the question of the meaning of life. I would like to pose an alternate, equally farcical possibility: what if human consciousness were a code, a message sent by some advanced alien species, the Ring, for purposes known only to them? How might their advanced alien enemies, the Horn, go about deciphering it?

The immediate problem they would face is one of information availability. In normal instances of cryptanalysis, the coded message or ciphertext is available, as is general information regarding the coding algorithm. What is missing is the key, which is required to recover the plaintext, the message that was encoded, from the ciphertext. In this case, however, the alien cryptanalysts would only have our reports of our conscious experiences to go on. Their situation would be hopeless, akin to attempting to unravel the German Enigma code via reports of its existence. Arguably, becoming human would be the only way for them to access the ciphertext.

But say this is technically feasible. So the alien enemy cryptanalysts transform themselves into humans and access the ciphertext in the form of conscious experience, only to discover another apparently insuperable hurdle: the issue of computational resources. To be human is to possess certain on-board cognitive capacities, which, as it turns out, are woefully inadequate. The alien cryptanalysts experiment, augment their human capacities this way and that, but they soon discover that transforming human cognition has the effect of transforming human experience, and so distorting the original ciphertext.

Only now do the Horn realize the cunning ingenuity of their foe. Cryptanalysis requires access both to the ciphertext and to the computational resources required to decode it. As advanced aliens, they possessed access to the latter, but not the former. And now, as humans, they possess access to the former, but at the cost of the latter.

The only way to get at the code, it seems, is to forgo the capacity to decode it. The Ring, the Horn cryptanalysts report, have discovered an apparently unbreakable code, a ciphertext that can only be accessed at the cost of the resources required to successfully attack it. An ‘entangled observer code,’ they call it, shaking their polyps in outrage and admiration, one requiring the cryptanalyst become a constitutive part of its information economy, effectively sequestering them from the tools and information required to decode it.

The only option, they conclude, is to destroy the message.

The point of this ‘cosmic cryptography’ scenario is not so much to recapitulate the introspective leg of McGinn’s ‘cognitive closure’ thesis as to frame the ‘entangled’ relation between information availability and cognitive resources that will preoccupy the remainder of this paper. What can we say about the ‘first-person’ information available for conscious experience? What can we say about the cognitive resources available for interpreting that information?

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them. In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.

I call this process of information tracking the ‘Follow the Information Game’ (FIG). In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent, downstream or reentrant computational functions.

The signature difference between criminal and cognitive investigations, however, is that criminal investigators typically have no stake or role in the crimes they investigate. When it comes to cognitive investigations, the situation is rather like a bad movie: the detective is always in some sense under investigation. The cognitive capacities modelled are often the very cognitive capacities modelling. Now if these capacities consisted of ‘optimization mechanisms,’ devices that weight and add as much information as possible to produce optimal solutions, only the availability of information would be the problem. But as recent work in ecological rationality has demonstrated, problem-specific heuristics seem to be evolution’s weapon of choice when it comes to cognition. If our cognitive capacities involve specialized heuristics, then the cognitive detective faces the thorny issue of cognitive applicability. Are the cognitive capacities engaged in a given cognitive investigation the appropriate ones? Or, to borrow the terminology used in ecological rationality, do they match the problem or problems we are attempting to solve?

The question of entanglement is essentially this question of cognitive applicability and informatic availability. There can be little doubt that our success playing FIG depends, in some measure, on isolating and minimizing our entanglements. And yet, I would argue that the general attitude is one of resignation. The vast majority of theorists and researchers acknowledge that constraints on their cognitive and informatic resources regularly interfere with their investigations. They accept that they suffer from hidden ignorances, any number of native biases, and that their observations are inevitably theory-laden. Entanglements, the general presumption seems to be, are occupational hazards belonging to any investigative endeavour.

What is there to do but muddle our way forward?

But as the story of the Horn and their attempt to decipher the Ring’s ‘entangled observer code’ makes clear, the issue of entanglement seems to be somewhat more than a run-of-the-mill operational risk when consciousness is under investigation. The notional comparison of the what-is-it-likeness, or the apparently irreducible first-person nature of conscious experience, with an advanced alien ciphertext doesn’t seem all that implausible given the apparent difficulty of the Hard Problem. The idea of an encryption that constitutively constrains the computational resources required to attack it, a code that the cryptanalyst must become to simply access the ciphertext, does bear an eerie resemblance to the situation confronting consciousness theorists and researchers–certainly enough to warrant further consideration.

Philosophical Glossary

[this is woefully incomplete, but I thought I would post it as is to see what kind of critical feedback I can garner]

PHILOSOPHICAL GLOSSARY: Where I try to clarify for others what remains murky to myself

 

Afference – Term referring to non-inferential, non-associative ‘truth-preserving’ transitions between claims, the ‘sense’ that binds philosophical implicatures according to content as well as form. A specific form of what Sellars (1953) terms ‘material inference,’ only dealing with concepts possessing particularly far-ranging implications.

Afferentialism - The ‘in-between’ philosophy. Afferentialism is formally in-between insofar as it is neither inferential in any deductive or inductive sense, nor associative in any poetic or narrative sense (see, afference). Afferentialism is substantively in-between to the extent that its subject matter is virtual as opposed to real or ideal (see Blind Brain Theory). The primary goal of afferentialism is heuristic, to provide quasi-cognitive ways of understanding human experience in the wake of cognitive neuroscience, either because cognitive neuroscience has nothing to say, or because what it does say is apocrustic.

BBT - see, Blind Brain Theory (of Conscious Structuration)

Blind Brain Theory (of Conscious Structuration) - Proposal that the central, most perplexing features of consciousness are the result of thalamocortical ‘information horizons,’ in effect, the ways the conscious portions of the brain are blind to the complexities of their immediate neural environment. BBT hypothesizes that various phenomenal structural peculiarities such as presence, self-identity, and intentionality, are simply a consequence of informatic asymmetry, the fact that the thalamocortical system can only access a small fraction of the greater brain’s overall processing load.

Bottleneck Thesis - That we are what we are in such a way that we cannot know what we are. BBT poses the empirical possibility that human cognition as experienced is fundamentally non-cognitive. Only a fraction of the neural apparatus of cognition is available to the TCS. This means, 1) given encapsulation, the TCS will confuse that fraction for the whole; 2) given the systematic relation between that fraction and the whole, the fraction will appear ‘to function’ as well as the whole; and 3) we will be ‘trapped’ between our experience of cognition and the neurophysiological facts of cognition. Thus, even though something like ‘epistemic normativity’ seems to be the very condition of cognition, we can say there is no such thing. Given the evolutionary youth of consciousness and various structural constraints on information integration, we can even argue that it is empirically probable there is no such thing, that we should expect the Bottleneck to obtain, not only for ourselves, but for any biologically evolved intelligence.

Cartoons – Edifying pejorative used to refer to any speculative (non-scientific) theoretical implicature, given Phenomenal Incompleteness and Theoretical Incompetency.

CGII - see, Consciousness Generating Information Integration.

Consciousness Generating Information Integration - Giulio Tononi’s Information Integration Theory of Consciousness (IITC) postulates that “consciousness corresponds to the capacity of a system to integrate information” (2004). Consciousness is the product of a certain density of information integration, quantified as a “phi value,” which Tononi originally characterized as a capacity, then revised as a measure of dynamics, the information generated when a system transitions between states (2008). The BBT takes the IITC as a possible explanatory model for its claims, since there is reason to believe that dynamic information integration is not sufficient for consciousness. IITC remains, however, a leading candidate in therapeutically oriented research attempting to ascertain the level of consciousness that can be attributed to various subjects.

According to Tononi, computer simulations indicate that information integration is maximized in systems where a plurality of dedicated information processing systems find themselves thoroughly interconnected, a structure that corresponds to the mammalian brain. This also corresponds to a possible interpretation of phenomenal awareness, where the content of experience seems to involve the integration of different modalities. So for instance, visual information becomes vision, a ‘window on the world,’ only when integrated with information from a variety of other sources, such as memory, attention, expectation, timing, and so on. [Channel or modal integration, where a variety of information sources are accessed and harmonized, provides a means of explaining the ‘modality blindness problem.’]
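For what it’s worth, here is a toy Python illustration of ‘information generated when a system transitions between states.’ This is a crude stand-in of my own, not Tononi’s actual measure, which compares the whole against its minimum-information partition; it only shows how integrated dynamics let the present state specify the past, while degenerate dynamics specify nothing.

```python
from itertools import product
from math import log2

def transition_information(step, n_units=2):
    """Average bits the current joint state specifies about the prior joint
    state, assuming a uniform (maximum-entropy) prior over all joint states.
    A toy stand-in for effective information, not the full phi measure."""
    states = list(product([0, 1], repeat=n_units))
    prior_bits = log2(len(states))
    gained, reachable = 0.0, 0
    for current in states:
        causes = [s for s in states if step(s) == current]
        if causes:  # ignore states the dynamics can never produce
            gained += prior_bits - log2(len(causes))
            reachable += 1
    return gained / reachable

# Integrated dynamics: each unit copies its neighbour, so the present state
# pins down the whole of the prior state.
swap = lambda s: (s[1], s[0])

# Degenerate dynamics: both units reset to 0, so the present state says
# nothing about what came before.
reset = lambda s: (0, 0)

print(transition_information(swap))   # 2.0 bits
print(transition_information(reset))  # 0.0 bits
```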

Declusion - Sometimes derivative corollary of occlusion. Most generally, any kind of figuration within a phenomenal field. See, occlusion.

Determinativity - Efficacy in its most abstract sense.

Emphasis/Emphatics - Term referring to the systematic way patterns of theoretical focus impact estimations of explanatory and ontological priority and the afferential implicatures that fall out of them. The attribution and distribution of determinativity is one of the salient consequences of emphasis. Emphasis on social conditions tends to assign determinativity to those conditions: individuals become ‘expressions’ of their social conditions. Whereas emphasis on individuals tends to assign determinativity to those individuals: society becomes an aggregation of individuals. ‘Reductionism’ is a paradigmatic example of emphasis, perhaps the only kind that can appeal to empirical warrant. But emphasis plays a far more pervasive role in theoretical discourse, particularly in critical recontextualizations of theoretical positions, where adducing and emphasizing originally marginal considerations has the effect of reconfiguring existing implicatures. The Levinasian critique of Heidegger provides a striking example.

Encapsulation - The way CGII is ‘all or nothing,’ which is to say, the way the neurofunctional fractions accessed by the TCS are experienced as wholes in consciousness.

Existential Equivocation - A possible afference of medializing a decluded relationship: the way ‘to take the perspective of’ can also mean ‘to become.’

Frame - Human consciousness in its most general sense.

Gaming - Gerrymandering.

Greater Human, the - BBT stipulates a distinction between the human we experience and the human that we are. Since the former subtends upon the latter, the latter is referred to as the ‘greater human.’

Informatic Asymmetry - A central premise of BBT. A principle referring to the relative information paucity of experience compared to the neural processing that makes it possible.

Information Horizons - Boundaries demarcating consciousness generating information integration, and which are the key to naturalizing certain fundamental, yet hitherto perplexing, structural features of consciousness, such as presence, self-identity, and intentionality.

Interpretative Asymmetry - The tendency, given theoretical incompetence, for philosophers to think their interpretations capture ‘more’ of a given problematic than the interpretations of their social competitors.

Interpretative Underdetermination - A consequence of theoretical incompetence: our inability (outside the prosthetic ambit of the natural sciences) to make our interpretations stick, and so end the regress of interpretation.

Lateral - The ‘across’ relation orthogonal to medial relations.

Limit with one side - The primary phenomenological expression of thalamocortical information horizons, and in different guises, a recurring fetish of philosophical speculation.

LWOS - see, Limit with one side.

Medial - The ‘through’ relation orthogonal to lateral relations.

Mereological Confusion - An effect of encapsulation.***

Metonymic Inflation/Incorporation - An effect of medialization. When discrete entities are medially interpreted they can

Medialization - The transformation of decluded, lateral relations into occluded, medial relations. The paradigmatic instance of this is simply ‘taking the perspective of…’ where the theorist moves from exterior, third person considerations of something to interior, first person considerations of that same something. This swapping of ‘subject positions’ is so common and so natural that it remains implicit even in much philosophical discourse. Medialization thematizes this operation as one possessing its own tendencies and characteristics ***

Now - The margin of our temporal field.

Null Frame - see, occluded frame.

Occluded Frame - In the strictest sense, this very moment now, which is to say, the appearance of these very words as they are read. The reason I don’t refer to you in the above statement is simply because ‘you’ possesses a bolus of implicit and explicit associations possessing their own implicatures (theoretical consequences). To say, ‘these very words as they appear now for you,’ is to ‘erode the occlusion’ of the frame. This is one reason so many philosophers–Heidegger perhaps most notoriously–actually sought out the ‘naivete’ of the ancient Greeks, the thought being that their intuitions were somehow more trustworthy for not having been sullied by the overlay of multiple philosophical declusions of the occluded frame. Thus the conceptual dilemma of the ‘occluded frame’: simply referring to it in this manner decludes it (within a subsequent occluded frame, which in turn…). The term ‘occluded frame’ refers to something that cannot be, in a sense, termed without erasing what seems to be its primary structural feature: the edgeless enclosure (see, LWOS) of the world as decluded. So, for instance

O

Declusion

presents the occluded field as occluded. Whereas

[                              O                             ]

Declusion

Occlusion (as decluded)                    Occlusion (as decluded)

presents the occluded field as decluded within another occluded frame. To simply name the occluded frame is to embed it in a semantic context whose implicature is likely skewed. Heidegger’s famous ‘turn’ is the result of a similar realization, as is his subsequent retreat into linguistic atavism: an attempt to find the mode of declusion most appropriate to the occluded frame.

In afferentialism, this is simply the cost of doing interpretative business. It is also a primary reason why afferentialism is experimental, why it focusses on mapping out various interpretative possibilities for the sake of evaluating their comparative theoretical liabilities and advantages, paramount among them, applications to scientific research.

Occlusion - Sometimes constitutive corollary of declusion. Structuring absence.

A decluded relation is any relation that takes the following form

D ———- O

Subject            Object

where each element is both discrete and externally related to the other. Decluded relations are lateral.

An occluded relation, on the other hand, is the relation you have with

D ———- O

Subject            Object

at the moment you regard the figure. You are the ‘occluded frame’ of the declusion of D and O. Occluded relations are medial. The fact that both of these figures are identical underscores at least three things: 1) the correlation of the medial and lateral, the occluded and the decluded; 2) the transparency of the medial, and the corresponding ease with which it can be misconstrued or simply overlooked; and 3) the emphatic opportunities this provides afferential interpretation.

As occluded relations, which is to say relations with one discrete term, medial relations are also occult, both in the pejorative sense of being refractory to cognition and in the ontological sense of being nothing in particular (prior to interpretation). One need only consider the myriad ways that you can be characterized, as a brain, a transcendental ego, Dasein, Spirit, It, a political or historical or psychological subject. (One of the things my theory of vantages tries to do is provide a framework wherein all these variants can be theoretically accommodated as ‘positions’). In addition to these global characterizations, there are additional local ways to ‘spin’ medial relations. If you imbue them with determinativity they become constitutive of decluded relations. If you drain them of determinativity they become ontologically transparent, and they vanish in the presentation of the lateral. Given that the ‘logic’ involved is afferential (a bringing together) as opposed to inferential (a bringing in), there are innumerable ways the ambiguities involved can be gamed. Given the structure of vantages, it actually follows that interpreters will succumb to interpretative asymmetry and think their interpretations canonical.

Occlusion refers to the ‘medialization’ of decluded or lateral relations. In other words, it refers to any ‘taking the position or view or perspective of…’ anything you happen to have a perspective on. So, to reconceive the relation of D and O from the standpoint of D would be to consider the relation like this

O

Object

where, in a strange sense, you have become D, which is to say, the occluded frame for the appearance of O.

There are several crucial things to note, here. The first is that what was discrete, D, has become encompassing. To ‘take the perspective of D’ is to, in some strange sense, become D. The matrix of lateral relational possibilities (however it is defined) is wiped clean, allowing for the interpretation (afferential gaming) of different relational possibilities, such as those epitomized by Kant and Heidegger. So, Care, to give a notorious example, can be something discrete like a ‘capacity’ that belongs to you. Or it can be defined as something that you simply are, either momentarily, as a kind of ‘event,’ or as something which is constitutive of what you are and so coextensive with you, a ‘mode of existence,’ such that every instance of being is always an instance of caring. Care-as-capacity is care as thoroughly decluded, which is to say, something existentially discrete. Care-as-event, on the other hand, you could say is care decluded as occluded, which is to say, as something existentially encompassing but temporally discrete. Care-as-mode-of-existence, however, you could say is care as occluded, which is to say, both existentially and temporally encompassing. It remains decluded in some sense, insofar as it can be referred to at all, but as something possessing a drastically different interpretative implicature than the previous two declusions.

This example clearly shows the kinds of frame hybridization that one finds throughout philosophy, the way thinkers rely on implicit conceptualizations that have a profound impact on the kinds of things that do or do not ‘follow.’ The present act of thematizing these kinds of moves is itself a declusion, and as such bound to warp or skew implicative consequences.

Open Superindexical - The term referring to this… which is to say, this very moment now. It is ‘indexical’ because this… is always this… and it is ‘super’ because it ‘reflexively’ refers to the performance of its reference.

Paradox – An experiential side-effect of encapsulation.

Parastruction – A form of philosophical interpretation that self-consciously games the ambiguities involved in afferential reasoning with an eye for deflating the cognitive pretensions (as opposed to the conceptual utilities) of various philosophical discourses.

Phenomenal Adequacy/Inadequacy - The question whether a phenomenal experience, given informatic asymmetry and encapsulation, can be said to be ‘synoptic,’ or ‘myopic’–which is to say, adequate or inadequate.

Phenomenal Incompleteness - Thesis that directly follows from informatic asymmetry, namely, that the TCS, and therefore phenomenal awareness, only accesses a fraction of the information processed by the greater brain. The question of phenomenal adequacy, therefore, is something that only a mature neuroscience can definitively answer. Given the kinds of structural and developmental constraints faced by CGII, however, one can hypothesize that phenomenal incompleteness entails radical phenomenal inadequacy.

Priority Agnosticism - The principled refusal to assign ontological and/or epistemological priority to philosophical interpretations, the assumption that, all things being equal, any given philosophical interpretation is a product of gerrymandering, and so the avoidance of the priority illusion.

Priority Illusion - The way afferential interpretation typically convinces philosophers to assign ontological and/or epistemological priority to the objects of interpretation. A consequence of theoretical incompetence and interpretative asymmetry. The only known cure, falsification, remains a daring and elusive foe.

POA - see, Problematic Ontological Assumption.

Problematic Ontological Assumption -

Process Asymmetry -

Recapitulation -

Saturation -

TCS - Thalamocortical System. The posited locus for consciousness in the brain for the purposes of theoretical speculation, not unlike the way ‘C-fibres’ are used in the philosophical literature on pain and qualia more generally.

This… – The open superindexical, which is to say, a way to refer to the occluded frame in the most deflationary sense, as this very moment now.

TI – see, Theoretical Incompetence

Theoretical Incompetence -

Vantages – An afferentially (psychologically and neuroscientifically) informed interpretation of perspectives. Given the BBT, consciousness (as it appears) is a misapprehension. Given that consciousness is a misapprehension that we are, the issue literally becomes one of ‘making the best out of a bad situation.’ Like most all previous philosophical considerations of the ‘human,’ vantages constitute an attempt to extract as much sense as possible out of the ‘human conundrum,’ to provide ‘a way to see ourselves,’ that maximizes our intelligibility while maintaining meaningful contact with the world as it is described by science.

Vicarity, Principle of -

Virtuality – Informally, a word used to flatten the abject mystery of being a misapprehension into something that can be easily slung around in theoretical discourse. Formally, the ontological status attributed to any experiential frame of reference that is also a misapprehension.

How to Squeeze an Entire Universe into Three Seconds or Less: An Answer to ‘Brassier’s Problem’

The discovery that a number of people are not only mining the problematic that obsessed me when I was a graduate student, but actually homing in on the same family of texts that I had gravitated to has… well, to put it honestly, convinced me that maybe I’m not so crazy after all. I’m not talking about the new ‘Continental Realists,’ but rather philosophers who are mining parallel, but quite critical tracks: David Roden, Martin Hagglund, and Ray Brassier.

I’m presently reading Brassier’s Nihil Unbound, which is turning into one of those rare syncretic works that handily outruns the ‘original philosophies’ that it explicates, critiques, and attempts to synthesize. It’s certainly not a book I would write, and I actually think it’s unfortunate that Brassier’s imagination became mired in works like After Finitude, which (as far as I can tell) succeeds in being every bit as antiquated as it attempts to be. (I’ll be posting on Meillassoux in the near future, but ‘correlation’ strikes me as a dull attempt to foist the epistemological dilemma off as a profound and novel diagnosis of the tradition (Laruelle’s ‘decision’ is much more interesting), backed up with an egregious misreading of Heidegger, and a very curious (given his use of set theory) second-order blindness to the way his claim-making cuts against the claims made).

Brassier strikes me as one of those hardy philosophical souls that can actually revel in the ugliness and dread of what ‘it’ discovers/gerrymanders. He has no ‘cherished affirmative misconceptions’ to bring the dove of discursive reason back to the ark of manifest necessity. Nihil Unbound, for instance, argues that nihilism presents a profound opportunity, a way to unshackle philosophy from intentional parochialism. Given my commitment to theoretical incompetence, I’m always inclined to favour claims that cut against human vanity.

In a recent interview for Thauma, Brassier articulates what he thinks is the philosophical question (and which I would argue is the socio-cultural question as well):

“The problem consists in articulating the relation between the dialectical structure of the conceptual and the non-dialectical structure of the real in such a way as to explain how real negativity fuels dialectics even as it prevents dialectics from incorporating its own negativity.”

This is the question I think I ‘answered’ just before abandoning philosophy ten years ago. The jargon is different from the mongrelized terms that I use, which is what might make it seem so alien at first, but it translates readily, I think, into a crucial criterion: any causal explanation of intentionality should also explain why intentionality seems causally inexplicable.

I imagine Brassier would be uncomfortable with this expression because of the way it elides any explicit reference to ‘negativity,’ which is to say, time. The reason I buried time was simply to tease out the way his question brushes up against the Hard Problem, the naturalization of Consciousness, something which mystifies me as much now as it ever did. Why should there be Consciousness at all? I haven’t the fucking foggiest. Why does consciousness have the structure it does? Ah…

We are our brains in such a way that we cannot recognize ourselves as our brains. This is the claim I’ve been flogging for quite some time–the Blind Brain Theory. And if you reread Brassier’s quote, you’ll see the very same ‘in such a way.’ You’ll see, in other words, that Brassier is asking a question about blindness. How can negativity drive dialectics in such a way that dialectics remains curiously blind to negativity?

Enter the null frame theory of the now.

The (dissertation-killing) question I’ve been asking myself for these past ten years is how the perspectival structure of consciousness might hook up with the structure of the thalamocortical system. Since the visual field is the primary way we ‘envisage’ perspective, I asked myself what could neurostructurally account for the peculiar structure of the visual field. The most difficult-to-fathom thing about our visual field, I realized, is the way our periphery fades into ‘edgeless oblivion.’ Its neurostructural correlate, one can assume, is simply the point at which the information available to integration runs out–an information horizon.

The thalamocortical system (TCS) is literally carved up by information horizons–it simply has to be. If the visual information horizon explains the peripheral vanishing act that structures our visual field, what other structural peculiarities might be explained in information horizonal terms?

The null frame theory of the now (a more complete formulation can be found here) proposes that the most peculiar feature of temporal awareness, the now, is a temporal analogue to the most peculiar feature of visual awareness, the visual margin. The TCS only has access to so much temporal information. If you think about it, the temporal field has to possess a point where ‘timing runs out’ (an LWOS) the same way our visual field possesses a point where vision runs out. So what possible structural consequences might obtain?

Well, where the visual margin enforces visual locality you might expect a temporal margin to enforce temporal locality. We only visually differentiate this and nothing more, because the TCS only has certain visual information and nothing more. Likewise, we only temporally differentiate this and nothing more, because the TCS only has certain temporal information and nothing more. The same way the visual margin has to be the sightless frame of seeing, the temporal margin has to be the timeless frame of timing. The TCS can’t ‘time timing’ any more than it can ‘see seeing,’ though it can access supplementary channels of information to transform both fields into windows.

The temporal field can only discriminate this temporal locality (the TCS can only integrate the temporal information it can access) and nothing more. Thus presence. Since the temporal field cannot discriminate the time of its own temporal discriminating it cannot differentiate itself from itself. Thus abiding presence, which is to say (as crazy as it sounds), self-identity. Since the TCS accesses and integrates this information with supplementary channels, the temporal field becomes an atemporal window onto a temporal world, the same way the visual field becomes a blind window onto a visual world. The latter we call our eyes, the former, our soul.

Even after all these years my skin still pimples when I think about this. The analogy I sometimes use (and worked into an aphorism for The Prince of Nothing) is that of a spiral, how it becomes a perfect circle when viewed on end. The illusion of the circle is a consequence of missing a crucial extra dimension of information. The world projects itself through us, and we confuse the projection for something self-contained, abussos in the sense of Dennett’s ‘skyhooks.’ Abussos, bottomless, everywhere we turn our medial origins are hidden from us.

Information horizons are a consequence of information integration. As Giulio Tononi[1] puts it: “Because integrated information can only be generated within a complex and not outside its boundaries, it follows that consciousness is necessarily subjective, private, and related to a single point-of-view or perspective.” Well, not quite. The only thing that actually follows is that consciousness is private. If concepts as incredibly difficult as subject, POV, and perspective simply ‘fell out of’ information integration then you wouldn’t be reading this. Information horizons follow from information integration, and what I’m saying is that the vexing perspectival structure of consciousness follows from information horizons.
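
If you want a cartoon of what I mean by information ‘running out,’ here is a crude sketch in Python. It is emphatically not Tononi’s Phi; the copy chain, the flip probability, and the plug-in estimator are all illustrative choices of mine. The only point is that the information a unit shares with a source decays with distance until it sinks beneath the noise floor, and that sink point is the kind of horizon I have in mind:

# Toy sketch, not Tononi's Phi: information shared with a source simply
# runs out with distance along a noisy copy chain. The chain, the flip
# probability, and the plug-in estimator are all illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_samples, chain_len, flip_p = 200_000, 12, 0.25
x = np.zeros((n_samples, chain_len), dtype=int)
x[:, 0] = rng.integers(0, 2, n_samples)        # the 'source' unit
for k in range(1, chain_len):                  # each unit noisily copies its neighbour
    flips = rng.random(n_samples) < flip_p
    x[:, k] = np.where(flips, 1 - x[:, k - 1], x[:, k - 1])

def mutual_info_bits(a, b):
    """Plug-in estimate of the mutual information between two binary arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

for k in range(1, chain_len):
    print(f"distance {k:2d}: I(source; unit) = {mutual_info_bits(x[:, 0], x[:, k]):.4f} bits")
# The shared information decays rapidly and soon sinks beneath estimation
# noise: past that point the source is simply not there for the unit.

Where the printout flat-lines is, on this toy picture, the analogue of the visual periphery: not a represented darkness, but the point past which there is simply nothing left to integrate.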

So, to return to Brassier:

“The problem consists in articulating the relation between the dialectical structure of the conceptual and the non-dialectical structure of the real in such a way as to explain how real negativity fuels dialectics even as it prevents dialectics from incorporating its own negativity.”

So the global dynamics of information integration change moment to moment, but this information is not available for integration (in that modality), so the ‘negativity’ (as Brassier terms it) that renders the ‘complex’ (as Tononi terms it) non-self-identical cannot, in fact, be taken up as a datum within that complex. So even though we remain utterly ignorant as to why information integration (of any kind) should give rise to consciousness, we can nevertheless explain why consciousness appears to remain self-identical despite the continual transformation of its contents–why Hume bumped into oblivion searching for the Self. ‘Real negativity fuels dialectics even as it prevents dialectics from incorporating its own negativity,’ because the information pertaining to the global process of information integration cannot itself be integrated. Consciousness is literally reflexive because it is irreflexive. The resulting consciousness perceives itself to be a circle it is not rather than the spiral that it is. You, as you read this, are a structurally mandated ‘cognitive illusion’ (whatever the hell that means anymore). What the TCS misapprehends, we become, and because everywhere we look we find the spiral, we are perplexed as to what we can be. So we begin interrogating the circle. Philosophizing.

Problem solved?

It seems to me that this is an empirical question, and I invite anyone in the sciences reading this to come up with possible ways this can be empirically pursued. Otherwise, if we simply assume (as we do every time we move between philosophical claims) that this does solve Brassier’s Problem, then a veritable cornucopia of speculative possibilities opens up. Enough to make me dizzy, at least.

T-Zero

So when we normally think about time we tend to think in terms like this:

t1 > t2 > t3 > t4 > t5

which is to say, in terms of a linear succession of times. This happens, then that and that and that and so on. What we tend to forget is the moment that frames this succession in simultaneity – the Now, which might be depicted as:

T0 (t1 > t2 > t3 > t4 > t5)

I call this an instant of declusion, where you make the implicit perspectival frame of one moment explicit within the implicit perspectival frame of another, subsequent moment. (Linguistically, the work of declusion is performed by propositional attitudes, which suggests that it plays an important role in truth - but more on this below).

Given that the Now characterizes the structure of lived time, we can say (with Heidegger) that our first notation, as unassuming as it seems, does real representational violence to the passage of time as we actually experience it. (This is a nifty way of conceptualizing the metaphysics of presence, for you philosophy wonks out there.)

The lived structure of time, I would hazard, looks something more like this:

T0 (t5 (t4 (t3 (t2 (t1)))))

where the stacking of parentheses represents the movement of declusion. In this notation, the latest moment, t5, decludes t4, which decludes t3, which decludes t2, which decludes t1. Looked at this way, lived time becomes a kind of meta-inclusionary tunnel, with each successive frame figured within the frame following. (Of course, the ‘laws of temporal perspective’ are far muddier than this analogy suggests: a kind of myopic tunnel would be better, where previous moments blur into mnemonic mush rather than receding in an ordered fashion toward any temporal vanishing point).

T0, of course, is ‘superindexical,’ a reference to this very moment now, to the frameless frame that you somehow are. It’s a kind of ‘token declusion,’ a reference to the frame of referring – or what I sometimes call the ‘occluded frame.’ I would argue that you actually find versions of this structure throughout philosophy, only conceptualized in drastically different ways. You can use it as a conceptual heuristic to understand things as apparently disparate as Derrida’s differance, Nietzsche’s Will to Power, Heidegger’s Being, and Kant’s transcendence. Finding an ‘adequate’ conceptualization (rationally regimented declusion) of the occluded frame is the philosophical holy grail, at least in the continental tradition.

Just for example: if you emphasize the moment to moment nonidentity of the occluded frame, the fact that T0 is in fact t5, then declusion becomes exclusion, and every act of framing becomes an exercise in violence. No matter how hard we try to draw the world within our frame, we find ourselves deflected, deferred. Deconstruction is one of the implicatures that arise here.

If, however, you emphasize the identity of the occluded frame, the fact that T0 is the very condition of t5, declusion becomes inclusion, and we seem to become ‘transparent,’ a window onto the world as it appears, the very ‘clearing of Being’ as that fat old Nazi, Heidegger might say.

It would help, I think, to unpack the above notation a little.

T0 (t1)

T0 (t2 (t1))

T0 (t3 (t2 (t1)))

T0 (t4 (t3 (t2 (t1))))

T0 (t5 (t4 (t3 (t2 (t1)))))

This, I think, nicely represents the paradox of the Now, the way it frames difference in identity, an identity founded upon absence. (Consider Aristotle: “it is not easy to see whether the moment which appears to divide the past and the future always remains one and the same or is always distinct.”) If we had perfect recall, this is the way our lives would unfold, each moment engulfing the moment previous without loss. But we don’t, so the orderly linear bracketing of moment within moment dissolves into soup.

(This also shows the difficulties time poses for language, which bundles things into discrete little packages. Thus the linguistic gymnastics you find in a thinker like Heidegger. This is why I think you need narrative to press home the stakes of this account – which is one of the reasons why I wrote Light, Time, and Gravity.)
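
For readers who prefer their notation operational, here is a minimal sketch in Python. The names Moment, declude, and blurred are mine, purely illustrative; the only point is that each moment wraps the one before it, that T0 (the frame doing the framing) never appears as a node inside what it frames, and that capping the nesting depth gives you the mnemonic mush:

# Illustrative only: each moment 'decludes' (wraps) the one before it, while
# T0, the occluded frame doing the framing, never appears inside the
# structure it frames. The names Moment, declude, and blurred are mine.
class Moment:
    def __init__(self, label, inner=None):
        self.label = label          # e.g. 't3'
        self.inner = inner          # the moment this one decludes, or None
    def __repr__(self):
        return self.label if self.inner is None else f"{self.label}({self.inner!r})"

def declude(previous, label):
    """Return a new moment that frames (decludes) the previous one."""
    return Moment(label, previous)

stream = None
for label in ("t1", "t2", "t3", "t4", "t5"):
    stream = declude(stream, label)
print(stream)        # t5(t4(t3(t2(t1)))) -- T0 is the printing 'now', not a node

def blurred(moment, depth=0, horizon=3):
    """Imperfect recall: past some depth the nested frames dissolve into mush."""
    if moment is None:
        return ""
    if depth >= horizon:
        return "..."
    inner = blurred(moment.inner, depth + 1, horizon)
    return f"{moment.label}({inner})" if inner else moment.label
print(blurred(stream))        # t5(t4(t3(...)))

The toy is only structural: the act of printing plays the role of T0, framing the whole stack without ever being one of its elements.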

So what could explain this structure? Is it the result of dedicated T0 circuits within the brain? Temporal identity circuits?

Or is it, like the occluded boundary of our visual field, a positive structural feature arising from a brute neurophysiological incapacity?

T0, I’m suggesting, is a necessary result of the thalamocortical system’s temporal information horizon, an artifact of the structural and developmental limits placed on the brain’s ability to track itself. Since the frame of our temporal field cannot be immediately incorporated within our temporal field, we hang ‘motionless.’ Our brain is the occluded frame. The same way it has difficulty situating itself as itself in its environment (for the structural and developmental reasons I enumerated previously), it has difficulty tracking the time of its temporal tracking. In other words, reflexivity is the problem.

The severe constraints placed on neurophysiological reflexivity (or ‘information integration,’ as Tononi calls it) are the very things that leverage the illusion of reflexivity that is the foundation of lived experience. And this illusion, in turn, leverages so very much, a cornucopia of semantic phenomena, turning dedicated neural circuits that interact with their variable environments in reliable ways into ethereal, abiding things like concepts, numbers, generalizations, axioms, and so on. Since the brain lacks the resources to track its neural circuitry as neural circuitry, it tracks that circuitry in different, cartoonish guises, ones shorn of history and happenstance. Encapsulation ensures that we confuse our two-dimensional kluges with all there is. So, for instance, our skin-deep experience of the connectionist morass of our brain’s mathematical processing becomes the sum of mathematics, an apparently timeless realm of apparently internal relations, the basis of who knows how many Platonic pipedreams.

We are the two-dimensional ghost of the three-dimensional engine that is our brain. A hopelessly distorted cross-section.

Of course none of this addresses the Hard Problem, the question of why the brain should give rise to consciousness at all, but it does suggest novel ways of tackling that problem. What we want from a potential explanation of consciousness is a way to integrate it into our understanding of other natural phenomena. But like my daughter and her car seat, it simply refuses to be buckled in.

Part of the Hard Problem, I’m suggesting, turns on our illusory self-identity, the way the thalamocortical system’s various information horizons continually ‘throw’ or ‘strand’ it beyond the circuit of what it can process. We continually find ourselves at the beginning of our lives for the same reason we think ‘we’ continually ‘author’ ourselves: because the neurophysiological antecedents of the thalamocortical system do not exist for it. Because it is an ‘encapsulated’ information economy, and so must scavenge pseudo-antecedents from within (so that thought seems to arise from thought, and so on).

We are our brains in such a way that we cannot recognize ourselves as our brains. Rather than a product of recursive information processing, perhaps consciousness simply is that processing, and only seems otherwise because of the way the limits of recursive processing baffle the systems involved.

In other words (and I would ask all the Buddhists out there to keep a wary eye on their confirmation bias here), there is no such thing as consciousness. The Hard Problem is not the problem of explaining how brains generate consciousness, but the dilemma of a brain wired to itself in thoroughly deceptive ways. We cannot explain what we are because we literally are not what we ‘are.’

As bizarre as this all sounds, it’s not only empirically possible, but (given that neural reflexivity is the basis of consciousness) it’s empirically probable. The extraordinary, even preposterous, assumption, it seems to me, would be that our brains would evolve anything more than an environmentally and reproductively ‘actionable’ self-understanding.

I get this tingling feeling sometimes when I ponder this, a sense of contorted comprehension reaching out and out… I have this sense of falling flush with the cosmos, a kind of filamentary affirmation. And at the same time I see myself as an illusion, a multiplicity pinched into unitary selfhood by inability and absence. A small, silvery bubble–a pocket of breathlessness–rising through an incomprehensible deep.

Like I say, I think there is an eerie elegance and parsimony to this account, one with far-reaching interpretative possibilities. Not only do I think it provides a way to tether traditional continental philosophical concerns to contemporary cognitive neuroscience, I think it provides an entirely novel conceptual frame of reference for, well… pretty much everything.

For example: Why do propositional attitudes wreck compositionality? Because language evolved around the fact of our thalamocortical systems and their information horizons. Think of the ‘view from nowhere’: Is it a coincidence that truth is implicated in time and space? Is it a coincidence that the more we situate a claim within a ‘context,’ the more contingent that claim’s truth-value intuitively seems? Could it be that language, in the course of its evolution, simply commandeered the illusion of consciousness as timeless and placeless to accommodate truth-value? This would explain why its ‘truth function’ breaks down whenever language ‘frames frames,’ which is to say, makes claims regarding the intentional states of others. Since your ‘linguistic truth system’ turns on the occlusion of your frame, linguistically embedding the frame of another would have the apparent result of cutting the truth-function of language in two, something that seems difficult to comprehend, given that truth is grounded in nowhere… How could there be two nowheres?

Another example: Why do paradoxes escape logical resolution? All paradoxes seem to involve mathematical or linguistic self-reference in some form. Could these breakdowns occur because there is no such thing as self-reference at the neural level, only the illusion that arises as a structural consequence of our blinkered brains? So what we might have are two cognitive systems–one largely unconscious, the other largely conscious–coming to loggerheads over the latter’s inability to relinquish what the former simply cannot compute.

And the list goes on.

T-Zero… and counting.

The Elephant in Our Skull

I am not a ‘Metzingerian.’ Like him, I think we are what we are in such a way that we cannot intuit what we are, but I came to this inkling by a far different route (Continental Philosophy). I’m not a representationalist, for one. I don’t think the brain has a Phenomenal Self Model, and I think that the sense that we do is largely a cultural artifact. What we have is a collection of kluges, a chaotic intentional palette that socialization then shapes into something that seems more definite and utile–like the mighty ‘Individual’ in our society.

In the old proverb of the three blind Indian gurus and the elephant, one grabs the tail and says the elephant is a rope, another grabs a leg and says the elephant is a tree, while the third grabs the trunk and says the elephant is a snake. In each case, the gurus mistake the part for a whole. This is the Blind Brain Thesis (which I simultaneously can’t stop arguing and can’t bring myself to believe): the thalamocortical system is the guru and the greater brain is the elephant. Intentional concepts such as belief, desire, good, perception, volition, action–all the furniture of conscious life–are simply ropes and trees and snakes. Misapprehensions. According to BBT, there are literally no such things.

The reason they function is simply that they are systematically related to the elephant, who does the brunt of the work. They have to count as ‘insight’ or ‘understanding’ simply because they are literally the only game in town.

This makes me an ‘eliminativist.’ Now some eliminativists, like Dan Dennett, want to argue that the problem doesn’t lie with the concepts so much as with the definitions (with the notable exception of ‘qualia’). It’s not that we don’t have beliefs, desires, volitions, and so on, it’s that we require a more mature neuroscience to understand what they really were all along. But note how the guru analogy makes hash of this approach: Dennett wants to say we still have our rope and tree and snake, but now we know that a rope is really a tail, a tree is really a leg, and a snake is really a trunk. Semantics rushes to the rescue.

Enter what I call Encapsulation, the strange mereological inflation that characterizes consciousness. Mistaking parts for wholes, I want to argue, is constitutive of experience. Dennett wants to say we are actually experiencing the elephant. But as a matter of empirical fact, the thalamocortical system only has access to a fraction of the information processed by the brain, a fraction it cannot but mistake for wholes. We are experiencing elephant parts as opposed to the elephant, and we’re experiencing them as wholes, something they are not.

The virtue of Encapsulation is that it allows us to explain why intentional concepts seem to have such an antipathy to causal explanation: intentional concepts are like magic tricks insofar as they depend on the absence of information to work. Their phenomenal character is the product of a lack, and like magic, that character evaporates once the information is provided. The salient difference between intentionality and magic, however, is that we are wired to the magician in the former case, and so are a prisoner of his tricks. Where magic is the exception, intentionality is the invariant frame of any experience whatsoever. Small wonder we conceived the world in our own image for the whole of prescientific history. Only as science scrubbed intentionality from the natural world, only as human consciousness came to seem more and more exceptional, could we countenance the possibility that we were magic–which is to say, something not real.

Magic tricks are special kinds of mistakes, what happens when certain information is withheld. Dennett thinks we can have it both ways: the explanation and the magic. It’s this, I think, that makes him so unconvincing to so many people, despite his obvious eloquence and brilliance: his stubborn gaming of conceptual ambiguities against the experiential grain. He literally seems to think that once the mistake has been empirically identified, it can be argued away.

Consciousness is structured by what might be called Informatic Deprivation and Informatic Asymmetry. At every turn the supercomplexity of the brain is either concealed from consciousness or sopped away. (There are a number of structural and developmental explanations for why this might or must be.) But how could this be constitutive of experience? I’m not sure, though I do think it has to be. Just think of the margin of your visual field: experience simply has to peter out where information peters out (though just how it ‘peters’ can be expressed in multiple ways).

This is where the BBT shows what I think is almost breathtaking potential. Consider the Now, which has perplexed philosophers through the ages by somehow remaining the same despite being different. According to the BBT, the Now is simply the margin of our temporal field, an experiential artifact of the way our temporal awareness peters out. We are always ‘here and now,’ according to the BBT, for the same reason the camera never appears in any of the images it captures. There’s the fact that the camera (like homo sapiens) is primarily designed to process distal as opposed to proximate environmental information. Immediate environmental information, which is to say, information about the information processing (image formation) itself, simply cannot enter the image without generating interference. I call this Process Asymmetry.

In my novels you’ll find references here and there to how spirals become circles if you look at them from along their axis of elevation. This is the human soul: a spiral that cannot but see itself as a circle. Process Asymmetry means there must always be an interval, variable or absolute, between process information and information processing. A given processor cannot process second-order information pertaining to its own processing simultaneous to that processing. A secondary processor is required. Second-order ‘blindness’ necessarily characterizes all information processors. The camera always vanishes into the frame.
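
A heavily hedged toy, assuming nothing beyond ordinary recursion, makes the asymmetry concrete. The Tracer class and its names are mine, nothing more: a processor that tries to record its own act of recording regresses without end, while handing the job to a second processor only relocates the blind spot, since nothing records the recorder:

# Illustrative only: a processor cannot record its own recording without
# regress; delegating to a second processor just relocates the blind spot.
class Tracer:
    def __init__(self, name, monitor=None, reflexive=False):
        self.name = name
        self.monitor = monitor      # optional second-order processor
        self.reflexive = reflexive  # try to trace the act of tracing itself?
        self.log = []
    def trace(self, event, depth=0):
        self.log.append(event)
        if self.reflexive:
            if depth > 25:          # tracing the trace of the trace ... no fixed point
                raise RecursionError("cannot time its own timing")
            self.trace(f"traced({event})", depth + 1)
        elif self.monitor is not None:
            self.monitor.trace(f"{self.name} recorded {event!r}")  # but who records the monitor?

tcs = Tracer("TCS", monitor=Tracer("supplementary channel"))
tcs.trace("red patch, two o'clock")
print(len(tcs.log), "event in the first-order log")
print(len(tcs.monitor.log), "event the first-order system never sees")

reflexive = Tracer("self-timer", reflexive=True)
try:
    reflexive.trace("now")
except RecursionError as err:
    print("reflexive tracing fails:", err)

Either way the recorder goes unrecorded; that, and nothing more mysterious, is the asymmetry.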

Our temporal field hangs outside time: thus it is always ‘now.’ Our spatial field hangs nowhere: thus it is always ‘here.’ In both cases, related systems provide supplementary second-order information, plugging us into narrative time and local space, so that we have the strangely paradoxical sense of always being ‘here and now’ and being at different times and places.

And this leverages the greatest magic trick of all, what I call Default Identity. What we call ‘identity,’ I’m suggesting, is simply the absence of second-order self-differentiation. So much has been made of reflexivity and recursion and their possible relationship to consciousness. I personally think that linguistic evolution is the primary (but by no means sole) catalyst of human consciousness, the brain rendering more and more information available for potential translation into auditory code. Giulio Tononi* has shown that you can predict which regions of the brain are accessible to consciousness and which are not by analysing the information integration value of various circuits. I have this image of the brain wiring itself to itself in novel and unexpected ways, making more and more of its information available for communication to other brains.

But the question is one of how this ‘information integration’ maps onto experience. The BBT suggests that it is the inability of the thalamocortical system to globally autodiscriminate that generates the illusion of global autoreflexivity. It is Process Asymmetry, the impossibility of informatic reflexivity, in other words, that foists the illusion of personal identity–perfect reflexivity–upon consciousness. (Could this have anything to do with the curious inversion you find between etiology (causation) and teleology? Why does consciousness seem to flip so much of the natural on its head? Is there some kind of ‘camera obscura effect’ snoozing around here?) Our basic sense of self-identity is not a ‘representation,’ a product of NCC’s as Metzinger and so many others would have it. It’s simply another margin, the way the information horizons of the thalamocortical system are expressed in experience.

As magicians well know, the brain makes default identity mistakes all the time: In “The Mark of Gideon,” Captain Kirk unknowingly beams into a perfect replica of the Enterprise, and so assumes that the transporter has malfunctioned and that his entire crew has been abducted. His inability to discriminate between the real Enterprise and the replica leads to their thoughtless conflation. The BBT suggests that experience seems to unfold across a substrate of self-identity simply because its margins, those points where the absence of information are expressed, must always remain the same.

By marking the limit of differentiation they endow us with the illusion of a soul.

This is what I mean when I say that consciousness is flat: the thalamocortical system’s inability to track the causal histories of its own processes simply means that those histories do not exist for it whatsoever. Encapsulation means those absences are utterly elided: various agnosiac ‘scotomata’ permeate consciousness at every level and every turn. So where the evolutionarily ancient and powerful perceptual processors discriminate externally-related, high-resolution processes in our environment, the evolutionarily young and crude intuitive processors have to make do with much less information, thus ‘flattening’ the field of awareness for the lack of discriminations. External relations, which require more information to discriminate, feel like internal relations. Sharp boundaries become fuzzy and inchoate.

Consciousness becomes a South Park episode.

We are the elephant in such a way that we are a rope, tree, and snake. Anything but an elephant.

[Postscript disclosure: After writing this, I feel like a tremendous idiot for not pursuing publication in some journal. The problem is 1) this is just a hobby for me, so I’m nowhere near well read enough; 2) everybody but everybody seems to have some crackpot theory of consciousness, and I know from past experience that my lack of credentials will consign me to the crackpot pile (where I could very well belong!) even if there is something important here; and 3) I can’t bring myself to believe  in the damn thing! And yet I can’t help but wonder at the interpretative doors the BBT opens, ways to naturalistically reinterpret various past philosophical giants, and most importantly, ways to move forward, to forge radically different conceptions of lord knows how many contemporary debates and conceptual staples. For over ten years now this thought has been stuck in my craw…]

 

T-ZERO

Aphorism of the Day: Beware those who prize absurdity over drama: they are the enlightened dead.

The Enlightened Dead, just so you know, is the title of the next Disciple novel.

I’d like to thank those who chimed in with their support, though I can’t help but feel you are the vocal exception to the silent rule. As it stands, I’ve come to realize these uber-philosophical posts will be buried in due course anyway as the blog continues to grow. It’s the balance that’s important, I think.

