Three Pound Brain

No bells, just whistling in the dark…

Month: January, 2011

T-ZERO

by rsbakker

Aphorism of the Day: Beware those who prize absurdity over drama: they are the enlightened dead.

The Enlightened Dead, just so you know, is the title of the next Disciple novel.

I’d like to thank those who chimed in with their support, though I can’t help but feel you are the vocal exception to the silent rule. As it stands, I’ve come to realize these uber-philosophical posts will be buried in due course anyway as the blog continues to grow. It’s the balance that’s important, I think. With this in mind, allow me one final elaboration of the previous entries.

So when we normally think about time we tend to think in terms like this:

t1 > t2 > t3 > t4 > t5

which is to say, in terms of a linear succession of times. This happens, then that and that and that and so on. What we tend to forget is the moment that frames this succession in simultaneity – the Now, which might be depicted as:

T0 (t1 > t2 > t3 > t4 > t5)

I call this an instant of declusion, where you make the implicit perspectival frame of one moment explicit within the implicit perspectival frame of another, subsequent moment. (Linguistically, the work of declusion is performed by propositional attitudes, which suggests that it plays an important role in truth – but more on this below.)

Given that the Now characterizes the structure of lived time, we can say (with Heidegger) that our first notation, as unassuming as it seems, does real representational violence to the passage of time as we actually experience it. (This is a nifty way of conceptualizing the metaphysics of presence, for you philosophy wonks out there.)

The lived structure of time, I would hazard, looks something more like this:

T0 (t5 (t4 (t3 (t2 (t1)))))

where the stacking of parentheses represents the movement of declusion. In this notation, the latest moment, t5, decludes t4, which decludes t3, which decludes t2, which decludes t1. Looked at this way, lived time becomes a kind of meta-inclusionary tunnel, with each successive frame figured within the frame following. (Of course, the ‘laws of temporal perspective’ are far muddier than this analogy suggests: a kind of myopic tunnel would be better, where previous moments blur into mnemonic mush rather than receding in an ordered fashion toward any temporal vanishing point).

T0, of course, is ‘superindexical,’ a reference to this very moment now, to the frameless frame that you somehow are. It’s a kind of ‘token declusion,’ a reference to the frame of referring – or what I sometimes call the ‘occluded frame.’ I would argue that you actually find versions of this structure throughout philosophy, only conceptualized in drastically different ways. You can use it as a conceptual heuristic to understand things as apparently disparate as Derrida’s differance, Nietzsche’s Will to Power, Heidegger’s Being, and Kant’s transcendence. Finding an ‘adequate’ conceptualization (rationally regimented declusion) of the occluded frame is the philosophical holy grail, at least in the continental tradition.

Just for example: if you emphasize the moment to moment nonidentity of the occluded frame, the fact that T0 is in fact t5, then declusion becomes exclusion, and every act of framing becomes an exercise in violence. No matter how hard we try to draw the world within our frame, we find ourselves deflected, deferred. Deconstruction is one of the implicatures that arise here.

If, however, you emphasize the identity of the occluded frame, the fact that T0 is the very condition of t5, declusion becomes inclusion, and we seem to become ‘transparent,’ a window onto the world as it appears, the very ‘clearing of Being,’ as that fat old Nazi, Heidegger, might say.

It would help, I think, to unpack the above notation a little.

T0 (t1)

T0 (t2 (t1))

T0 (t3 (t2 (t1)))

T0 (t4 (t3 (t2 (t1))))

T0 (t5 (t4 (t3 (t2 (t1)))))
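Just to make the mechanics of the unpacking explicit: the stacking is a simple recursion, with each new moment enclosing the whole stack of previous moments under the superindexical T0. Here is a toy Python sketch (the `declude` helper is my own invention, purely an illustration of the notation, not a model of anything neural):

```python
def declude(n):
    """Build the nested 'declusion' notation for n moments.

    Each new moment t_k encloses (decludes) the entire stack of
    previous moments, and the superindexical T0 frames the lot.
    Toy illustration of the notation only; the name is hypothetical.
    """
    frame = ""
    for k in range(1, n + 1):
        frame = f"t{k} ({frame})" if frame else f"t{k}"
    return f"T0 ({frame})"

for n in range(1, 6):
    print(declude(n))
# T0 (t1)
# T0 (t2 (t1))
# T0 (t3 (t2 (t1)))
# T0 (t4 (t3 (t2 (t1))))
# T0 (t5 (t4 (t3 (t2 (t1)))))
```

Note what the recursion leaves out, exactly as the text says: T0 appears only as the outermost wrapper, never as a term inside any frame.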

This, I think, nicely represents the paradox of the Now, the way it frames difference in identity, an identity founded upon absence. (Consider Aristotle: “it is not easy to see whether the moment which appears to divide the past and the future always remains one and the same or is always distinct.”) If we had perfect recall, this is the way our lives would unfold, each moment engulfing the moment previous without loss. But we don’t, so the orderly linear bracketing of moment within moment dissolves into soup.

(This also shows the difficulties time poses for language, which bundles things into discrete little packages. Thus the linguistic gymnastics you find in a thinker like Heidegger. This is why I think you need narrative to press home the stakes of this account – which is one of the reasons why I wrote Light, Time, and Gravity.)

So what could explain this structure? Is it the result of dedicated T0 circuits within the brain? Temporal identity circuits?

Or is it, like the occluded boundary of our visual field, a positive structural feature arising from a brute neurophysiological incapacity?

T0, I’m suggesting, is a necessary result of the thalamocortical system’s temporal information horizon, an artifact of the structural and developmental limits placed on the brain’s ability to track itself. Since the frame of our temporal field cannot be immediately incorporated within our temporal field, we hang ‘motionless.’ Our brain is the occluded frame. The same way it has difficulty situating itself as itself in its environment (for the structural and developmental reasons I enumerated previously), it has difficulty tracking the time of its temporal tracking. In other words, reflexivity is the problem.

The severe constraints placed on neurophysiological reflexivity (or ‘information integration,’ as Tononi calls it) are the very things that leverage the illusion of reflexivity that is the foundation of lived experience. And this illusion, in turn, leverages so very much, a cornucopia of semantic phenomena, turning dedicated neural circuits that interact with their variable environments in reliable ways into ethereal, abiding things like concepts, numbers, generalizations, axioms, and so on. Since the brain lacks the resources to track its neural circuits as neural circuits, it tracks them in different, cartoonish guises, ones shorn of history and happenstance. Encapsulation ensures that we confuse our two-dimensional kluges with all there is. So, for instance, our skin-deep experience of the connectionist morass of our brain’s mathematical processing becomes the sum of mathematics, an apparently timeless realm of apparently internal relations, the basis of who knows how many Platonic pipedreams.

We are the two-dimensional ghost of the three-dimensional engine that is our brain. A hopelessly distorted cross-section.

Of course none of this addresses the Hard Problem, the question of why the brain should give rise to consciousness at all, but it does suggest novel ways of tackling that problem. What we want from a potential explanation of consciousness is a way to integrate it into our understanding of other natural phenomena. But like my daughter and her car seat, it simply refuses to be buckled in.

Part of the Hard Problem, I’m suggesting, turns on our illusory self-identity, the way the thalamocortical system’s various information horizons continually ‘throw’ or ‘strand’ it beyond the circuit of what it can process. We continually find ourselves at the beginning of our lives for the same reason we think ‘we’ continually ‘author’ ourselves: because the neurophysiological antecedents of the thalamocortical system do not exist for it. Because it is an ‘encapsulated’ information economy, and so must scavenge pseudo-antecedents from within (so that thought seems to arise from thought, and so on).

We are our brains in such a way that we cannot recognize ourselves as our brains. Rather than a product of recursive information processing, perhaps consciousness simply is that processing, and only seems otherwise because of the way the limits of recursive processing baffle the systems involved.

In other words (and I would ask all the Buddhists out there to keep a wary eye on their confirmation bias here), there is no such thing as consciousness. The Hard Problem is not the problem of explaining how brains generate consciousness, but the dilemma of a brain wired to itself in thoroughly deceptive ways. We cannot explain what we are because we literally are not what we ‘are.’

As bizarre as this all sounds, it’s not only empirically possible, but (given that neural reflexivity is the basis of consciousness) it’s empirically probable. The extraordinary, even preposterous, assumption, it seems to me, would be that our brains would evolve anything more than an environmentally and reproductively ‘actionable’ self-understanding.

I get this tingling feeling sometimes when I ponder this, a sense of contorted comprehension reaching out and out… I have this sense of falling flush with the cosmos, a kind of filamentary affirmation. And at the same time I see myself as an illusion, a multiplicity pinched into unitary selfhood by inability and absence. A small, silvery bubble–a pocket of breathlessness–rising through an incomprehensible deep.

Like I say, I think there is an eerie elegance and parsimony to this account, one with far-reaching interpretative possibilities. Not only do I think it provides a way to tether traditional continental philosophical concerns to contemporary cognitive neuroscience, I think it provides an entirely novel conceptual frame of reference for, well… pretty much everything.

For example: Why do propositional attitudes wreck compositionality? Because language evolved around the fact of our thalamocortical systems and their information horizons. Think of the ‘view from nowhere’: Is it a coincidence that truth is implicated in time and space? Is it a coincidence that the more we situate a claim within a ‘context,’ the more contingent that claim’s truth-value intuitively seems? Could it be that language, in the course of its evolution, simply commandeered the illusion of consciousness as timeless and placeless to accommodate truth-value? This would explain why its ‘truth function’ breaks down whenever language ‘frames frames,’ which is to say, makes claims regarding the intentional states of others. Since your ‘linguistic truth system’ turns on the occlusion of your frame, linguistically embedding the frame of another would have the apparent result of cutting the truth-function of language in two, something that seems difficult to comprehend, given that truth is grounded in nowhere… How could there be two nowheres?

Another example: Why do paradoxes escape logical resolution? All paradoxes seem to involve mathematical or linguistic self-reference in some form. Could these breakdowns occur because there is no such thing as self-reference at the neural level, only the illusion that arises as a structural consequence of our blinkered brains? So what we might have are two cognitive systems–one largely unconscious, the other largely conscious–coming to loggerheads over the latter’s inability to relinquish what the former simply cannot compute.

And the list goes on.

T-Zero… and counting.

Exhibit X

by rsbakker

Aphorism of the Day: There’s a time to ponder and there’s a time to communicate. So long as we don’t ask the what of the first, and the who of the second, we can pretend that art is the sum of their confusion.

I toyed with the idea of actually using this blog as a platform to “publish” some of my philosophical writing. But the last three posts have reminded me just how tribal philosophy is.

I literally have dozens of essays, an MA thesis, one aborted dissertation, another well on the way to completion, on a whole variety of philosophical topics. When it came to papers critiquing various philosophers on various topics, I was urged by many of my instructors to publish (and thus prepare my CV for the dreaded Job Wars) but I could never bring myself to follow through. When it came to my original stuff, no one knew what the hell I was talking about. My success with the former convinced me that I wasn’t simply crazy, that I was cutting a path that others could potentially follow and elaborate, but I had difficulty playing the game the way I was supposed to. As a couple of my professors told me, I needed to earn my bona fides practicing straight philosophy first – a sensible enough admonition. And yet I just couldn’t bring myself to stand still. One year I’m a (quasi-)Derridean, the next I’m a (quasi-)Adornian, then I’m a (quasi-)Wittgensteinian, then I’m… something nobody seems to quite recognize. And I continue to be – for some reason – thoroughly ashamed of all my philosophical output.

Ashamed of things I don’t even believe… 

Which is probably why I bolted the way I did when the first offers for The Prince of Nothing came in. (A decision I may live to regret, given the creeping growth of illegal downloads).

So, if even professional philosophers, that most absurd and rarefied of all hothouse tribes, are squinting their eyes and shaking their heads, what about all the tribes of real people?

I mean, I still suffer the urge to shake my head in summary disdain and dismissal whenever I encounter something I can’t readily understand and appreciate, even though I’ve forced my way past that instinct more times than I can count. I mean, I’ve read Difference and Repetition closely, for Christ’s sake! I have no doubt whatsoever that these last three posts have convinced any number of potential readers to leave their ‘Bakker itch’ unscratched.

So I don’t know what to do with my more ‘technical’ musings. Bury them, I suppose, like all things precious and problematic…

Or incriminating.

Paint Chip Salad

by rsbakker

Aphorism of the Day: The mind is simply the dim shadow of what the brain sees peering through the glutinous fog of itself.

One last eye-crosser…

When Metzinger’s Being No One came out, I snapped it up thinking that at last I had found a theoretically kindred spirit. Metzinger himself told me he thought the differences between my position and his were ‘insignificant’ after he had read Neuropath. But Metzinger is a representationalist (a very open-minded one), whereas I see ‘representations,’ the notion of ‘things’ standing in causal-cum-logical relationships to other ‘things,’ as being precisely the kind of conceptual confusion blind brains are apt to indulge in. Because we are trapped with the products as given, the tendency is to think of them as distinct from the processes that underwrite them–to think ‘tree for me here’ is linked to ‘tree in itself out there.’ The intuitive tendency, in other words, is to conceptualize all the intervening processes under the conceptual rubric of relations, something which possesses an implicature all its own–one which could very well be a blind alley. The risk is running afoul of a kind of product/process ambiguity.

Thanks to conceptual path dependency, the differences between me and Metzinger stack up from there. So Metzinger, for instance, likes to talk about models (such as the famous ‘phenomenal self model’). Even though there is no such thing as a ‘self,’ in his account, there is a phenomenal self, which is to say, something illusory. In my account, I’m not sure there’s even this!

Do we experience selfhood? Certainly. So the pivotal question then becomes one of quiddity: Just what do we experience? A kind of simulation, Metzinger would say (one requiring NCC’s). As crazy as this sounds common-sense-wise, it makes wonderful intuitive sense at a conceptual level. My position doesn’t even enjoy this intuitive advantage (which is probably why no one seems to know what the hell I’m talking about–me included). Does it make sense to say that the ‘trailing into absent oblivion’ of our visual field is a kind of simulation? Not at all. And yet I’m suggesting that nothing less than self-identity is a version of this. Pile onto this a welter of other functions, some possessing NCC’s, some not, and you arrive at the morass we ‘experience’ as selfhood. The self is neither real nor a simulation.

Then just what is it? Got me. Confused, maybe? Incoherent. Given its evolutionary youth, perhaps this is what we should expect.

As someone, I think, pointed out in the comments, the thumbnail ‘explanation’ of transparency I provided earlier has been around a long time. All I’m adding is a different conceptual spin, and suggesting that the blindness that enables the brain to open up a window on the world within itself, also mandates many other things, the frame for a ‘self’ among them.

Encapsulation Theory, you might say, attempts to explore what happens when the skin of things is constitutively confused for the meat. (You could imagine an ‘encapsulation account’ of say, mathematics, moral reasoning – pretty much anything). It is an attempt to correlate the peculiarities of experience with the structural and developmental facts of our blind brains. Why is today always the first day of the rest of my life? Why is it always somehow the same now, the same here, even though it is most definitely not the same now or the same here? Because a corresponding temporal oblivion accompanies the visual oblivion that encloses our visual field. Because the conscious brain hangs in temporal oblivion, the result of an information environment it has no access to. Because, in a strange sense, we’re bubbles without an outside.

So, IF consciousness is the product of neural reflexivity, the brain tracking itself, then, because of blindness and encapsulation, we should expect it to run into difficulty placing itself in its environments, for one. Since the brain cannot see itself as another object in the environment, it has to see itself as something else–like a soul, mind, Dasein, transcendental ego, and so on.

We should also expect it to have difficulty relating itself to its environments. Given the complexity of its inner environment, its relative evolutionary youth, and so on, there is no way it can use the machinery it uses to track causal transactions in its outer environment to track itself–or other brains for that matter. Heuristic kluges are all the conscious brain possesses, things such as purpose, morality, aboutness, and so on. Since these kluges are its cognitive baseline, they are literally what it means to ‘comprehend’ (to enjoy the feeling of understanding) whether they are in any sense ‘accurate’ or not. Since these kluges turn on the actual machinery of our environmental interaction, they will always appear ‘adequate,’ no matter how they distort the actual processes. Since these heuristic kluges exhaust the conscious brain’s access to its own inner workings, they will always seem the ‘most real,’ and therefore the primary explananda.

Because of all this, and since these heuristic kluges are just that, heuristic kluges, the conscious brain will be perpetually mystified, even more so as it begins thoroughly decoding the causal complexities of its external environments. Some conscious brains will affirm the priority of the heuristic kluge (everything has a purpose), while others will affirm the priority of the causal environment (like, shit just happens, dude), and still others will continually attempt to reconcile the two (enter Dennett).

We should expect, in other words, something like the philosophy of mind. What is more, we should expect that any extra-terrestrial intelligence will also have its own philosophy of mind, with its own debates regarding its own heuristic kluges, which may or may not resemble our own.

High on the long list of Books-I-Want-to-Write is an SF piece where the aliens possess genuinely alien categories of consciousness. Imagine a species who evolved to ‘own’ their behavioural outputs not with the ‘feeling of willing’ as we did, but with something different, a ‘feeling of accompanying’ say. Imagine something like ‘morpose,’ a category that fuses purposiveness and propriety/morality. Or how about a consciousness that experiences its environment under the rubric of from instead of about, so that the witness catches a glimpse from the murderer, rather than of him.

The list goes on and on. Playing the philosophy game might be like eating paint-chips, but you have to admit, there’s something to be said for barfing art…

Shrink-wrapping Consciousness

by rsbakker

Aphorism of the Day I: Perception is simply introspection with strategic agnosia.

Aphorism of the Day II: What does the brain look like when viewed from within? The world.

Another philosophical eye-crosser alert. I find all this stuff embarrassing for some reason. Evidence of my crackpotitude, perhaps…

In order to linguistically communicate with other brains, the brain needs to first track its own processing, then condense and translate it into a linear code. I see experience as a kind of translational possibility space, where everything that can be spoken about is ‘rendered’ for possible translation into speech–this is the working hypothesis I’ve used for years now, anyway. Consciousness as the staging area for dynamic data compression and linguistic transmission. Given the opportunistic vagaries of evolution, it has doubtless been yoked to many other uses, but I see this as the primary developmental engine of consciousness, you might say. This has been my guiding fable.

For some reason, the hominid brain developed a secondary brain, a neural fifth column, to infiltrate and monitor the most reproductively pertinent functions of the original. So I am interested in neural reflexivity: Hofstadter’s Metamagical Themas and Gödel, Escher, Bach, which I read with avid excitement in the late 80’s, have undoubtedly influenced me in innumerable ways. I know that when I Am a Strange Loop came out I was very excited to see where his musings had led him, but I never bothered purchasing the thing after thumbing through it at the bookstore.

For me, the seminal question was one of what we might expect when a brain that has been successfully tracking its environments over millions of years begins, in a relatively wholesale fashion, tracking itself. This is what led me to the Blind Brain Hypothesis: the idea that the structure of experience is the result of a brain that is structurally and developmentally unable to see itself as another brain in its environment. Why each brain, although part of the environment it tracks, comprises a kind of environmental blindspot–and why it finds it so difficult to reconcile its third-person and first-person versions of itself (or why there is a mind-body problem). I came up with a number of things: process asymmetry, the way growing more circuits to track existing circuits simply adds to the amount of untracked circuitry; evolutionary youth, the way these new circuits lack the hundred million plus year pedigree of the circuits used to track external environments; evolutionary serendipity, the way these circuits had to earn their keep across the caprice of environmental change; positional invariability, the way the brain is hardwired to its internal environment, and so cannot sample it the way it can its external. There are others that I can’t remember…
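The process-asymmetry point can be put in toy arithmetic. In this sketch (the function name and the unit counts are hypothetical, just a way of making the regress vivid), each new layer of tracking circuitry tracks everything beneath it but is itself untracked, so the blindspot is never closed, only relocated:

```python
def track_regress(base_circuits, monitor_circuits, layers):
    """Toy model of 'process asymmetry' (all numbers hypothetical).

    Start with `base_circuits` units of untracked circuitry. Each new
    monitoring layer of `monitor_circuits` units tracks everything
    below it -- but the newest monitor itself goes untracked.
    Returns (untracked_remaining, total_circuitry).
    """
    untracked = base_circuits
    total = base_circuits
    for _ in range(layers):
        total += monitor_circuits     # the system grows with each layer...
        untracked = monitor_circuits  # ...and the newest layer is blind to itself
    return untracked, total

print(track_regress(100, 10, 0))  # (100, 100): no tracking at all
print(track_regress(100, 10, 3))  # (10, 130): a smaller blindspot, never zero
```

The moral is just the one in the text: adding circuits to track existing circuits shrinks and relocates the untracked remainder, but no finite number of layers eliminates it.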

And a lot of interesting things seemed to fall out of these musings: the possibility, for instance, that intentionality could be structurally and developmentally mandated such that we could assume that any ETI we encounter possesses its own version of it. Or the interesting possibility of multiple awarenesses inhabiting the same brain, each, perhaps, as convinced as the others that they are the sole owner/operator.

Encapsulation Theory, which I have in no way explained, arose from this as well.

But time is short, so it’ll have to wait for my next (eye-crossing) post.

Heavy Petting

by rsbakker

Aphorism of the Day: A pet theory is a lot like a pet cat, except that it never dies, always purrs, and craps all over your imagination instead of in the conceptual litter box.

The whole Dennett thing has me back in a philosophical frame of mind. For those of you interested only in my narrative (as opposed to my theoretical) fictions, I’m afraid this might be another eye-crosser.

I’ve been thinking how I never asked Dennett the question I wanted to ask him. Instead, I asked about the rise of ‘neuromarketing’ and how the problem of ‘creeping manipulation’ might compare to the problem of ‘creeping exculpation’ (where lawyers, educators, and the like use neuroscience to make ‘diminished capacity’ arguments to deflect responsibility) that he talked about in his presentation. His answer, which was simply a kind of caveat emptor, was so bad that it spawned a couple of other questions far more critical and biting than my own.

What I wanted to ask him about was how he could say we simply ‘are’ our brains when we experience only a fraction of them. The follow up question I wanted to ask was how he thought this ‘fractional identity’ conditions conscious experience. This all ties into a pet theory of mine: that the apparent structure of conscious experience is more a product of what the brain lacks than what it possesses. That much of what we experience, in other words, possesses no NCC’s, neural correlates of consciousness.

The example I always come back to is the way the visual field is both finite and unbounded. The way sight simply trails away into a kind of absolute absence. This, I think, is an obvious structural feature of visual awareness that obviously possesses no NCC’s, no neural circuits that generate the experience of ‘trailing into absence.’ It simply comes with the structural territory. Those little swatches of brain tissue called retinas simply feed forward what they can: the absence of any further information is experientially expressed, in visual awareness, as the absent oblivion that rings our periphery.

Now what I think, vainglorious fool that I am, is that most of the more perplexing features of conscious experience can be ‘explained’ – or at least understood – via this analogy, in the way the various ‘information horizons’ of the various neural systems behind consciousness ‘encapsulate’ and so profoundly structure various experiences. Transparency, for instance, the way we see through our experience, so that we see trees and cars and so on rather than seeing trees and cars and so on causing us to see trees and cars, is an easy one. The information horizons of those regions responsible for conscious experience do not encompass anything more than the ‘products’ of perceptual processing – so the world comes to us as ‘given,’ rather than as a neural construct.

But it also offers possible ‘explanations’ of more difficult things, such as self-identity and the now. It’s always ‘now,’ no matter how much time passes, because our temporal awareness is encapsulated much the same way our visual awareness is. While our brains have no difficulty discriminating times within our ‘temporal field,’ the time of the field itself cannot be discriminated, and so seems to hang in timelessness. The neural circuits responsible for temporal discrimination fall outside of temporal discrimination. Sure, we have a variety of subsystems (such as those involved in memory and narrative) that allow us to stitch our momentary ‘specious presents’ into a greater timeframe, personal histories and whatnot, the same way we have a variety of subsystems that allow us to cobble our momentary visual fields into a visual world. But the primary experience of timelessness, the abiding identity of the now, always characterizes the experience in the first instance. The same way we can’t see ‘seeing,’ we literally can’t time ‘timing,’ and so find ourselves hanging in a kind of timeless oblivion while the world rushes about us (within us). The present is literally an artifact of an inability, one grounded in structural and evolutionary constraints placed on our brain.

So many explanatory possibilities fall out of this ‘Encapsulation Theory’ that I don’t know where to begin. For instance, I think it actually offers an explanation of perspective: what it is, why we have it, as well as why things become so bewildering as soon as it attempts to ‘gain perspective’ on itself. Believe it or not, I actually think I’ve stumbled across a possible, quasi-naturalistic explanation of paradox.

The primary problem with this pet theory of mine, however, is simply that so many other amateurs have pet theories of their own that it’s pretty much impossible to get any experts (who all happen to be pursuing their pet theories) to relinquish the time and effort required to grasp its Gestalt, the global sense-making that makes it so compelling to me.

That said, as much as I think it satisfies the theoretical virtues of simplicity, fecundity, and explanatory scope, I still refuse to believe the thing. Its consequences are nothing short of catastrophic. It really does render us nothing more than absurd fictions.

Determined to Disagree

by rsbakker

So I had the privilege of seeing Dan Dennett speak for the first time. It’s nice to discover that an author you’ve followed your entire adult life possesses genuine charisma. The presentation, which was entitled “My Brain Made Me Do It,” was as much a comedy act as a philosophical discursus. The auditorium was packed, but thanks to my friend Nandita, I enjoyed everything from the second row.

His argument was that all the neuroscientists making alarmist claims regarding volition and freedom were being socially irresponsible in addition to getting the philosophy wrong. He cited a recent study where college students became more inclined to cheat after reading that responsibility is an illusion, the suggestion being that a post-responsibility society wouldn’t be much of a society at all. Then he basically repeated several of the arguments he made in Elbow Room years back, and more recently in Freedom Evolves.

Dennett is an exceedingly slippery thinker. Depending on the frame of reference you take to him, he’ll sound like an eliminativist (someone who thinks all our psychological categories are so mistaken that we need to replace them wholesale) one minute, then an intentional realist (someone who thinks our psychological categories are generally right on the button) the next.

He’s also brilliant at expressing his ideas: reading him, I often find myself nodding and nodding, thinking that it all sounds so obvious, only to screw my face up in confusion while I’m making a coffee several moments later.

But he’s neither an eliminativist nor an intentional realist. He’s a kind of Quinean pragmatist. He doesn’t care so much whether intentionality is real as he cares whether it’s useful–and there’s no denying the latter. It simply doesn’t pay to consider others as machines, even though that’s what they are. What does pay is taking what he calls the ‘intentional stance,’ treating things and others as agents, as a kind of cognitive shorthand, a way to successfully manipulate and interact with monstrously complicated systems. For all intents and purposes, the ‘metaphysical reality’ of the intentional is beside the point (do you smell the circularity here?).

So when it comes to the issue of free will, he argues in numerous ways that we have the only free will that matters, so allowing him to preserve all the concepts further down the implicative foodchain. So for instance, it makes no sense to say “my brain made me do it” because we are our brains. We’re literally just saying that we did something.

Of course, the problem is that ‘we’ are just a small part of our brains.

Dennett is after a kind of ‘semantic compatibilism’: he wants to find ways to make our old psychological vocabulary fit with the findings of cognitive neuroscience, and so preserve the institutions raised upon the former. Over the years, he has waged an ingenious guerrilla campaign of equivocation. So with free will, for instance, he takes our ‘common sense’ understanding, shows how it’s so ridiculous that it can’t be the ‘free will’ we want, then redefines it into something that gives us all the things we really want, even if we didn’t realize as much in the first place. If you say, “No, that’s not what I wanted,” he just shrugs his shoulders and says, “Well, good luck with your magic. I’m quite fine, thank you.”

For me, the traditional philosophical debates about determinism are now beside the point. The problem is the chasm that seems to be opening between the world we experience versus the world we know, thanks to the accumulating horror that is science. More and more, the intuitions of the former jar against the findings of the latter. Dennett seems to assume that our intuitions turn on our concepts: if we could just get clear on our concepts, then the conflict between our experience and our knowledge would simply disappear. Personally, I think the situation is muddier: that our concepts turn on our intuitions turn on our concepts turn on… and so on. In the particular case of free will, I think the intuitions drive the concepts more than vice versa.

So, for instance, I think the intuition that tells me my sense of willing is behind my actions, rather than something that happens to accompany them (as the research suggests), is damn near universal. I find the notion that my sense of willing could be selectively shut down out and out terrifying. And I would suggest that the reason so many people intellectually agree with Dennett, only to suffer a subsequent experiential revolt, is a result of ‘mandated intuitions’ like this.

I think we are hardwired to believe in magic of various kinds, and that an immense amount of specialized training is required to get us believing otherwise. Far too much for Dennett’s prescriptive conceptual approach to even begin commanding the kind of consensus he needs to justify–let alone realize–his social project.

Like I say in the Afterword to Neuropath: what Dennett is doing is like telling us at the funeral of our beloved Gramma Mildred to simply begin calling our dog ‘Mildred.’ When we object, he just shrugs and reminds us that the dog was Gramma Mildred all along anyway.

But he never quite explains the body in the coffin next to him.

The Global Sycophant

by rsbakker

Definition of the Day – Facebook: a clever distraction for the masses designed to secure the invisibility of the poor, the anti-social, and the technologically retarded.

Just a note to those who were surprised by my take on sanitizing Huck Finn: this site and my project are dedicated to walking the tightrope between actual difficulty and actual accessibility. Terms like ‘selling out’ and ‘dumbing down’ are cornerstones of our indoctrination into literary culture, the fig-leaves we use to avoid mingling with our intellectual and aesthetic ‘lessers.’ To continue writing for ourselves (the easiest thing to do) under the pretence of writing for eternity (the most difficult thing to do).

Over and over again we’re told that altering literary expression for commercial or ideological reasons is an essential aesthetic evil, something that devalues works no matter what the contexts. The most immediate counter-example I can think of would be Underground Man, an originally Christian tract that was thankfully secularized by the Tsar’s censors.

If we had the luxury of a long, stable future, then maybe I would be more inclined to sanctify Twain’s intention (or my interpretation of it), but as it stands… Is Twain’s purity more important than his popularity? If you think so, then you have a much more sanguine view of the future than I do.

I watched The Social Network last night, and it got me thinking how strange it is to live in a time when five years ago counts as history. It also got me thinking about social compartmentalization.

It’s not that the world is flat or small: both of these metaphors emphasize the global availability provided by information technology. Framed in these terms, things like the internet seem unambiguously good. Information technology renders the world more transparent to desire.

The crazy thing is that this global availability was almost immediately identified as the primary problem. When everything is equally available, everything obscures everything else. People want certain things, not everything, and thus was the industry of web intermediaries born, companies that specialize in fetching what we want from the global information warehouse. Google. Facebook. WordPress. LexisNexis.

Each of these intermediaries turns on specialized programs that use all the data you have explicitly or inadvertently provided them to pluck things out of the stochastic soup of information. These algorithms are often their most closely guarded secrets.
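The logic of such filters can be caricatured in a few lines. What follows is a toy sketch only–the item tags, the scoring rule, and the function name are all invented for illustration, and bear no resemblance to any company’s actual (and closely guarded) algorithm–but it shows the basic sycophantic move: rank the candidates by how well they match what you have already consumed.

```python
from collections import Counter

def sycophantic_filter(history, candidates, k=2):
    """Rank candidate items by their overlap with tags the user has
    already engaged with -- a caricature of preference filtering."""
    # Tally the tags in the user's consumption history.
    taste = Counter(tag for item in history for tag in item["tags"])

    def score(item):
        # An item scores higher the more it resembles past choices.
        return sum(taste[tag] for tag in item["tags"])

    # Serve the most confirmatory items first; the rest never surface.
    return sorted(candidates, key=score, reverse=True)[:k]

history = [
    {"title": "A", "tags": ["fantasy", "philosophy"]},
    {"title": "B", "tags": ["philosophy"]},
]
candidates = [
    {"title": "Echo", "tags": ["philosophy", "fantasy"]},
    {"title": "Challenge", "tags": ["economics"]},
    {"title": "Comfort", "tags": ["fantasy"]},
]
print([c["title"] for c in sycophantic_filter(history, candidates)])
```

Note what the sketch makes plain: “Challenge,” the one item that shares nothing with the user’s history, scores zero and is simply never shown. The echo chamber is not a bug in such a system; it is the objective function.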

Enter human nature, and the darker implications of our information future. So long as you leave human nature out of the picture, the kinds of specialized programs these companies use seem unambiguously good. Hell, I even appreciate the spam Amazon sends me. Why? Because my interests are the most interesting (and important) interests going, so the more Amazon feeds those interests, the better!

The problem is that human nature is adapted to environments where access to information was geographically indexed, where its accumulation exacted a significant caloric toll. We don’t call private investigators ‘gumshoes’ for no reason. We are adapted to environments where the info-gathering workload continually forced us to ‘settle,’ which is to say, make do with something other than what we originally desired, when it comes to information.

This is what makes the ‘global village’ such a deceptive misnomer. In the preindustrial village, where everyone depended upon one another, our cognitive selfishness made quite a bit of adaptive sense: in environments where scarcity and interdependency force cognitive compromise, you can see how cognitive selfishness–finding ways to justify oneself while impugning potential competitors–might pay real dividends in terms of in-group prestige. Where the circumstantial leash is tight, it pays to pull and pull, and perhaps reach those morsels that escape others.

In the industrial village, however, the leash is far longer. But even still, if you want to pursue your views, geographical constraints force you to engage individuals who do not share them. Who knows what Bob across the road believes? (My Bob was an evangelical Christian, and I count myself lucky for having endlessly argued with him).

In the information village the leash is cut altogether. The likeminded can effortlessly congregate in innumerable echo chambers. Of course, they can effortlessly congregate with those they disagree with as well, but… The tendency, by and large, is not only to seek confirmation, but to confuse it with intelligence and truth–which is why right-wingers tend to watch more Fox than PBS.

Now, enter all these specialized programs, which are bent on moulding your information environment into something as pleasing as possible. Don’t like the N-word? Well, we can make sure you never need to encounter it again–ever.

The world is sycophantic, and it’s becoming more so all the time. This, I think, is a far better cartoon generalization than ‘flat,’ insofar as it references the user, the intermediary, as well as the information environment.

The contemporary (post-posterity) writer has to incorporate this radically different social context into their practice (if that practice is to be considered even remotely self-critical). If you want to produce literary effects, then you have to write for a sycophantic world, find ways not simply to subvert the ideological defences of readers, but to trick the inhuman, algorithmic gate-keepers as well.

This means being strategically sycophantic. To give people what they want, sure, but with something more as well.

The Other ‘N-word’

by rsbakker

Aphorism of the Day I: The mild feelings that accompany your presumption have no bearing on the mildness of your presumptions. Even Nazis wonder about all the fuss.

Aphorism of the Day II: If a word offends thee, pluck it, sure. If a word really offends thee, say it over and over again, until its nonsense is revealed. So, repeat after me: deconstruction, deconstruction, deconstruction, deconstruction…

Censoriousness is part of the human floor-plan. Everybody thinks certain people shouldn’t be allowed to say certain things. We instinctively understand that controlling actions–power–turns on controlling beliefs. If you let the latter get out of hand…

When I was studying in Nashville, one of my classmates married this Polish guy who got a job working in construction. Shortly after getting the job he apparently approached one of his coworkers and said, “Excuse me. Please. Could you tell me? What is the difference between redneck and white-trash, and which one are you?”

On another occasion, I found myself debating two fellow PhD students, both from the deep south, who argued that the word ‘nigger’ was simply the word they grew up using, that they didn’t ‘mean anything’ by it. The resulting argument, as you might expect from philosophy grads, led nowhere, though it did sketch a couple of interesting circles. I argued that what they thought they meant had precious little to do with anything. Words were social and historical–and most importantly, bearers of value. In other words, words were huge, and some were larger than others. ‘Nigger,’ I suggested, was about as big as they come.

They argued a variety of face-saving things before petering out. The social authority gradient was skewed against them, and they could feel it. This is what shuts most people up, when you think about it. Numbers, not reasons. There was just more of me.

I mention this because of all the hoopla surrounding the new edition of Mark Twain’s Adventures of Huckleberry Finn, where the word ‘nigger’ has apparently been replaced by the more palatable ‘slave.’ Whatever you think about sanitizing works like Huck Finn, trimming and tucking them to facilitate the ease of consumption or whatever, what you can’t say is that it’s ‘just a word’ you’re taking out. ‘Nigger’ is a social historical bearer of value, a monstrously huge one. So huge that we’ve invented a name–the ‘N-word’–for the name, to spare us the back-breaking effort of actually lifting the thing.

If you concentrate on the ink of the word, you can convince yourself that the stakes are low, insignificant compared to the various advantages, such as not having to worry about irate mothers on parent-teacher night and the like. If you concentrate on the meaning, you suddenly find yourself trying to wrestle history itself to the mat.

Where the ink is externally related to the text, the meaning is internally related. Plucking the former is like fishing a bagel out of a bakeshop bin. Plucking the latter is like ripping a skein of nerves out of meat. Imagine going through the Bible and replacing every instance of ‘knew’ with ‘fuck.’  “And Abraham came unto Sarah and fucked her…” Gonzo scripture, baby. Everything changes where charged language is concerned.

This asymmetry, the lightness of the ink versus the heaviness of the meaning, explains the inevitability of censorship, as well as its attraction. It’s powerful stuff: all you need is a Sharpie and you can black out whole swathes of the world. The beautiful is rendered ugly, and the ugly, beautiful. It’s just too damn easy not to be utilized. As any parent who spells words rather than speaking them knows, there’s nothing like managed ignorance to keep a child on task.

Which brings me to my point: the good and the bad of it depends on the task. The question of removing ‘nigger’ from Huck Finn, I think, ultimately turns on how you define the task of literature. Is it supposed to do, or is it supposed to be? If you see literature as a kind of tool, as something to be judged according to the utility of its effects, then who cares how you modify the thing, so long as it gets the job done. If you see it as a sacrosanct object, as quasi-scriptural, then modification becomes sacrilege. What? Change the Prophet’s words?

(I can’t help but pause and think just how gnarly all the competing intuitions are: purity, utility, respect, courage, pollution, embarrassment, guilt… And here I am, trying to dress them up with ‘reasons’ like everyone else!)

“Believe!” the blockbuster cries. “Believe!” the commercial whispers. “Believe!” the schoolteacher smiles. Self-deception has become our greatest cultural good, and in an age when we can least afford it. Given my commitment to cultural triage, I’m inclined to chuck principle, and to say that anything that gets more kids reading Huck Finn is a good thing–if this indeed is the consequence. There’s plenty of gristle to chew on, otherwise. The original is always there.

And anything that popularizes Twain, the Great Sage of Human Stupidity, is even better.

Otherwise, I find myself wondering what Twain himself would think. As hard as he’s laughing at all the semantic hygienists, my guess is that a part of him would be both heartened and honoured. Heartened that things have changed so much as to make an international issue of the role language plays in bigotry, and honoured that after all this time we’re still paying so much attention to an old fool such as him.

Ultimately, his genius was to write books that show us for the tender idiots we are.

Again and again and again.

Doughnuts for the Heart, Broccoli for the Soul

by rsbakker

Aphorism of the Day: Literature is the cage where writers rattle empty cages.

So, true to his character, Disciple continues to hobble onward. A nice little capsule review appeared in The New York Times, and Drowning Machine actually picked him for their “Damn with Faint Praise Award,” given to the best, most overlooked book of 2010. Hopefully this drip-drip will continue convincing people out in the noir blogosphere to take a looksee.

For the curious among you, the slip from Docx to ‘Bocx’ in the piece I posted last week was entirely Freudian–if only all my mistakes were so clever!

I wanted to elaborate a bit on the ‘cages all the way down’ argument I provided in the previous post, talk about a corollary assumption that seems to afflict literary culture: the Myth of the Outside.

The Myth of the Outside describes the assumption, prevalent among many in the literary community, that literature is defined by the absence of generic constraint, that it somehow happens outside all the generic boxes you find crowding the warehouse of popular culture.

Since conventionality is a necessary condition of communication, they can’t be talking about the absence of conventionality. This particular ‘outside’ is unintelligible, plain and simple. No, Bocx and his ilk have to be talking about a different kind of ‘outside,’ one within the sphere of conventionality (to the degree it communicates anything at all), but distinguished in a manner that renders it ‘special,’ that exempts it from the myriad problems belonging to generic conventionality.

The Myth of the Outside, in other words, is a kind of lazy front for what might be called the Myth of a ‘Better Inside.’

So what makes literary conventionality so special? If you look at, say, the literary saws I import into my fantasy work, several obvious arguments come to mind. You could say, for instance, that the literary emphasis on interior action attunes people to their own inner lives, that the literary penchant for lyricism opens readers to the possibilities of language, or that the literary preference for moral ambivalence better represents the moral complexities of our day to day lives.

Saying broccoli is better than doughnuts means nothing so long as you are talking about taste. Saying broccoli is better for you, on the other hand, is saying something quite different. And it seems pretty easy to argue that the above conventions are in fact ‘healthier’ for readers, even if they don’t particularly like the taste. Indeed, this is largely why I imported them into the ‘epic fantasy cage’ in the first place.

So doesn’t this suggest that the literati are right? That, like broccoli, their writing is ‘just better for you’ than what you find in genre?

As I keep saying, if you stubbornly refuse to ignore the communicative dimension of conventionality, if you obnoxiously insist on pairing readers with writers, the conceptual landscape is radically transformed. Once we do this, we can see, for instance, that the broccoli metaphor is quite misleading. Broccoli is healthy because the link between what it is and what it does is more or less fixed. Fiction, on the other hand, possesses no such stability. As ‘semantic objects,’ books quite literally do not exist independently of readers, at least not the way broccoli exists independently of eaters.

The literary conventions I enumerate above, in other words, are broccoli or doughnuts depending on who happens to be reading them. There is very little Bocx could provide an English professor, say, aside from a succession of intellectual and aesthetic buzzes–entertainment for a specialized palate. He may advertise broccoli, but his dogged fidelity to literary conventions assures him a literary audience, and so he primarily sells doughnuts instead. The only difference between him and the generic writers he derides (often with backhanded compliments) is honesty.

This fact is invisible to him simply because he buys into the Myth of the Outside. Because his yardstick frames all his measurements, it seems to fall outside of the possibility of measurement altogether, to be as gargantuan as Truth. Thus the preposterous hubris.

The only way to spin broccoli out of doughnuts is to game the generic expectations of actual readers. The only way to game the generic expectations of actual readers, at least as far as I can see, is to game genre. The only way to be truly ‘outside’ is to be inside everything.

And this, I’m arguing, is the literature of the future: one where writers interested in genuinely challenging readers range across the bookstore, section to section, skew to skew. Things aren’t looking good as it stands: posers like Bocx still occupy the cultural high ground, and as a result the wannabes continue to be herded into the faux-literary cage, convinced that turning their back on popular culture is the only way to be taken seriously by those who matter.

But the sense of exhaustion is mounting, and the dwindling relevance of the book is luring more and more gifted voices into the new media. The collapse is coming…

Maybe I’m describing the orthodoxy that will replace it, maybe not. 

 

The Myth of the Vulgar Cage (II)

by rsbakker

Definition of the Day – Pretentiousness: If you are smart, the knack for making other people feel stupid. If you are stupid, the knack for making yourself feel smart.

Here’s that piece I sent away to The Guardian some time back. The reason I keep flogging this horse, and will continue to do so, certainly has something to do with my own sense of resentment and status anxiety. I can feel it in the way I grit my teeth.

But it also has to do with the way I continually find myself trapped between cultures: the kinds of attitudes espoused by Docx and his clan do real damage to the Cause. Far from encouraging and disseminating criticality, they shut it down. People are hardwired to overgeneralize: so when a character like Docx comes along talking about ‘simpler psychologies,’ they not only reject him – there are few things more pathetic than claiming authority where none is recognized – they also tend to reject intellectualism and criticality more generally. Docx’s column was literally an argument for why his practice was superior in kind to the practices of genre writers – with the upshot being that his readers are somehow superior as well.

On the other hand, I’m arguing that my particular, peculiar practice is superior in effect – and that in the world of ‘market segmentation,’ these effects can only be brought about by gaming genre. Otherwise you make your living reinforcing, rather than challenging, assumptions, which is all well and fine so long as there are enough muckrakers to keep things interesting. The idea is that literary culture has managed to secure the comforts of genre, writing the same things for the same readers, while pretending to produce the effects of literature. And so it is the souls who claim to be the most enlightened who stumble through the most embarrassing dark. Everyone walks away confirmed in their flattering views.

The picture is drastically more complicated, I know, but I’m convinced this captures the dilemma in sum, or at least enough of it to warrant real experimentation. The bottom line, I think, is that it is impossible to write literature in the 21st century without ‘literary evangelicism,’ which is to say, absent any awareness of the actual assumptions of your actual audience. Given market segmentation, the ‘post-posterity’ writer no longer has the luxury of writing for him or herself.

Docx’s piece can be found here: http://www.guardian.co.uk/books/2010/dec/12/genre-versus-literary-fiction-edward-docx

 
 
THE MYTH OF THE VULGAR CAGE
 
In his recent article “Are Stieg Larsson and Dan Brown a match for literary fiction?” (The Guardian, 12/12/10), Edward Docx unfortunately demonstrates that the myths that cripple literary culture are alive and well.
 
His argument and attitude are familiar enough: as a connoisseur of literary fiction, he is dismayed by the explosive popularity of Stieg Larsson and his posthumous Millennium trilogy. Troubled by the prospect that people might confuse mediocrity for excellence, he believes that “we need urgently to remind ourselves of … the difference between literary and genre fiction.” Apparently, culture is in danger of forgetting “that even good genre … is by definition a constrained form of writing. There are conventions and these limit the material.”
 
He invokes, in other words, what I like to call the Myth of the Vulgar Cage, wherein conventions are understood as constraints, and genre, therefore, is characterized by the absence of freedom. This, we are supposed to believe, is bad, very bad.
  
Despite Docx’s assertion to the contrary, the ‘Vulgar Cage’ metaphor is far and away the most pervasive way proponents of literary culture conceptualize the conventionality of genre fiction. One finds it everywhere, invoked as though it were as obvious as can be, even though the slightest examination reveals even more obvious problems. Like most self-aggrandizing myths, it is little more than a conceit founded upon a misconception. Not only does it have the happy consequence of glorifying those who write and read literary fiction, it also strategically distorts what conventions are in actuality.
 
 The conceit is straightforward, and unfortunately all too human. In genre, Docx is saying, the reader’s expectations are more regimented, which means that far too many choices “are already made.” Genre, he claims, “tends to rely on a simpler reader psychology.” The presumption, apparently, is that his books engage a more sophisticated psychology, and therefore possess more aesthetic value. His books are better, he seems to be saying, because his readers are better. They are more ‘sophisticated.’
 
 But note how easy it is to radically transform the above claim with a simple change of terms. For instance, I would entirely agree that commercial fiction tends to rely on more natural reader expectations, and that literary fiction engages more specialized expectations, which is to say, learned values. Expressed in these non-question-begging terms, one can then proceed to debate the merits of these ‘expectation sets,’ where and when the natural trumps the specialized or vice versa. The issue is shifted to more retail ground, one where the advantages and liabilities of both can be balanced one against the other. Certainly Bocx isn’t suggesting that all literary expectations are better all the time, is he?
 
But here’s the thing: Bocx doesn’t seem to think that literary fiction possesses any constraining conventions. This is tantamount to saying that literary readers do not possess overlapping sets of expectations–when, as a rather well-defined group of consumers, they most certainly do. Make no mistake, literary fiction is rife with rules, only a fraction of which Larsson violates! One can only assume that Bocx has been duped the way all of us, thanks to our psychology, often find ourselves duped. Since the values and expectations we use to rate and measure the world are generally implicit, invisible, we are inclined to think we are generally unconstrained, while those who follow explicit values and expectations seem to be thoroughly trapped.
 
It’s always the ‘other guy,’ isn’t it?
 
So much for the conceit. The shape of the misconception should already be clear from the way I have consistently paired conventions with expectations: the analogy of constraints entirely misses the communicative dimension of conventions. The Vulgar Cage is an out and out horrible conceptual metaphor. After years spent arguing this, I still find myself marvelling at just how sticky obvious falsehoods can be, so long as they flatter and exonerate. Convention is the bedrock of communication. Quite simply, there is no language, no culture, no storytelling–nothing intelligible at all–short of conventionality and its evil constraints.

This is why I much prefer to use the metaphor of the specialty channel when conceptualizing genre. Unlike the Vulgar Cage, it captures the constraint without sacrificing the communication. The problem for Docx is that this formulation is anything but friendly to the attitude he is attempting to promote, primarily because of the way it binds authors to their audiences.

Literature, you see, is supposed to be a special kind of fiction, one that, arguably, has some kind of salutary effect on its readers. Literature is defined, in other words, not so much by what it is (or worse yet, what it resembles) as by what it does. Literature changes people, typically by challenging their assumptions.

So if you ‘write for yourself’ under the blithe assumption that you, unlike every other human on the planet, are not the conduit of innumerable implicit conventions, then you are essentially writing for people like yourself. But writing for the likeminded means writing for those who already share the bulk of your values and attitudes–for the choir, in effect. And this suggests that writers like Docx are actually in the entertainment business, which is to say, writing to confirm the attitudes of their audience, not to challenge them.

Far from rendering you literary, repeating the moves of past masterpieces merely identifies you as the producer of a certain kind of reliable product. Thanks to market segmentation, the more homogenous culture that once made the production of literary effects possible in the past has vanished. Now literary writers have to hide behind the fiction of the Ideal Philistine, the person who would be challenged were they to read their books (but for some, typically flattering, reason never do), to convince themselves of their relevance.

All of this has resulted in what I think is an unmitigated cultural catastrophe. Articles such as Docx’s spur so many howls of protest because they amount to a kind of thinly-disguised bigotry. And like most bigotries, they possess a number of untoward consequences. Not only do they convince new talent that they must write for one channel, one audience, to be taken seriously, they convince everyone else, those with simpler psychologies, to distrust intellectualism more generally.

Too much critical talent is being wasted on what amounts to a single specialty channel, the ‘literary mainstream,’ where all the forms of what once was literary are endlessly repeated, and few of the results of what was once literary are produced. Where the notion of actually challenging readers has either been conveniently forgotten, strategically foresworn (as in the case of Franzen), or made the grist for posturing and pretence.

If Docx really were interested in literature, then instead of bemoaning all those people reading Larsson, he would be trying to reach them.

How does one do that? Turn your back on the flattering choir, for one. Reach out to dissenting audiences by embracing sets of conventions, different specialty channels, rather than gaming rules piece-meal to impress one’s peers with this or that obscure semantic effect–which is to say, the conventional thing.

Write genre, where the future of literature in fact lies. If, as Docx suggests, writing good genre is hard, and writing good literature is harder still, then writing something that combines both should constitute the greatest challenge of all.