The Blind Mechanic
by rsbakker
Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.
The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal to even intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so continually blunder at the problem, again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.
Science is the only reliable provender of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.
In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.
A kind of postintentional nude.
As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.
Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.
On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.
So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.
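The ‘combinatorial mechanism’ idea can be caricatured in code. Here is a toy sketch of my own (nothing in it comes from the post itself): a device that receives nothing but a flat, one-at-a-time stream of tokens—the ‘adumbration of iterations’—and yet assembles a nested, hierarchical activation structure from it.

```python
def combine(tokens):
    """Toy 'combinatorial mechanism': build a nested structure from
    a strictly linear stream of tokens, one iteration at a time."""
    stack = [[]]                    # the outermost constituent under construction
    for tok in tokens:              # iteration is all the device ever receives
        if tok == '(':
            stack.append([])        # open a new embedded constituent
        elif tok == ')':
            done = stack.pop()
            stack[-1].append(done)  # embed it in the enclosing constituent
        else:
            stack[-1].append(tok)   # an ordinary trigger
    return stack[0]

print(combine(['the', '(', 'old', '(', 'grey', ')', 'dog', ')']))
# -> ['the', ['old', ['grey'], 'dog']]
```

The bracket tokens stand in for whatever cues actually signal constituency in speech; the point is only that complexity of output need not be present in the linear input, so long as the receiving machinery is built to compound it.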
These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.
Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.
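To make the contrast concrete, here is a deliberately crude illustration of my own (not anything the post asserts): modus ponens rendered as blind pattern-matching. Nothing in the mechanism traffics in ‘truth’; it merely recognizes a token structure and emits a new token.

```python
# Modus ponens as a mechanical trigger: given p and ('if', p, q), emit q.
# 'Conserving truth' describes what this does from the outside; the
# mechanism itself only matches and copies tokens.
def modus_ponens(premise, conditional):
    tag, antecedent, consequent = conditional
    if tag == 'if' and antecedent == premise:
        return consequent
    return None  # trigger pattern absent: no activation

print(modus_ponens('p', ('if', 'p', 'q')))  # -> 'q'
print(modus_ponens('r', ('if', 'p', 'q')))  # -> None
```

The gap between this causal description and the metacognitive report (‘a rule of inference that conserves truth’) is precisely the dimensional truncation the paragraph above describes.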
I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.
Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing, how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open ended.
The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never-ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.
What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.
Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.
Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.
The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit intentional solutions as intentional, it suggests that our intuitions are deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.
Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it is to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.
(But then, in retrospect, this was how it had to be, wasn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)
Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter of fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.
Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.
On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.
The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.
Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forego the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, sensitivities, provided by evolution.
The point of the Singularity.
For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.
This brings me to Reza Negarestani’s, “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:
The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.
In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.
I sometimes feel like you’ve entered Jonathan Swift’s ‘Professoriat for the Delight and Instruction of Idiot Savants’. In the biomechanical civilization of the future they will look back on BBT and shake their heads wondering what took these creatures so long to get it. There must be some inherent stupidity factor in humans that escapes the registrars of knowledge that causes humans to eliminate the facts of science even as they acknowledge them. How else account for our inability to escape the intentional illusion?
Either way we salute you Professor Feather: let the aspidistras continue to fly!
It freaks me out, sometimes, the parsimony and explanatory scope of BBT. On the one hand, I’m troubled by the old adage, ‘The empty can rattles the loudest.’ It just seems too comprehensive to be real, and I assume I’m a victim of my own creativity, pressing thought out in a direction that only seems convincing for novelty, for the lack of critical alternatives. I wrack my brains, trying to think of problems, but the grooves are just too deep, and I can never jump the tracks. Then there’s those times where I think through the consequences of being right, and I become more uncomfortable still.
There’s nothing to be done but press forward. People still have no idea what the next two decades will bring, and the more conceptual tools I can craft to make (heartbreaking) sense of it, the better.
Yea, kinda like Dennett’s Intuition Pumps, I see yours more like blood rockets flashing in the brain! Maybe, more of a neuropop-agogo: the never-ending neural dance between despair and hope!
I went to a talk today whose results might be amusing for readers here, since it seems to engage the notion of “medial neglect” in an interesting way.
Neurophysiologists are increasingly using brain-machine interfaces (BMIs) as scientific tools, rather than prosthetic devices (though we are doing that, too). Today’s speaker described experiments where a mouse that has been genetically modified such that particular neurons fluoresce in response to calcium intake (which is coincident with and necessary for action potential generation) is placed – awake – in a head mount that monitors a small region of the cortex for such fluorescence changes.
Two small groups of neurons (~1-11 cells), E1 and E2, are defined by the experimenter such that increases in (the calcium-inferred) firing rate for E1 increase the frequency of a tone, while increases for E2 decrease it. A target frequency for the tone is set so that if the mouse achieves it, the mouse receives a reward of sugared water.
Mice quickly learn to control the activity of small groups of neurons (or individual neurons, if the experimenters so choose). Flip E1 and E2 and the mice can adjust in 10 minutes or so. Cells that had been coactive cease being so and the mouse homes in on the necessary ensemble of cells.
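For anyone who wants the closed loop spelled out, here is a toy simulation of my own devising (the parameters, neuron counts, and learning rule are all invented stand-ins, not the speaker's): summed E1 firing raises a tone's frequency, E2 firing lowers it, and a crude keep-if-rewarded rule mimics the mouse's operant learning.

```python
import random

random.seed(0)

def tone(rates, E1, E2):
    # BMI mapping: summed E1 activity raises the tone, E2 activity lowers it
    return 4000 + 100 * (sum(rates[i] for i in E1) - sum(rates[i] for i in E2))

def train(E1, E2, target, n_neurons=20, steps=2000):
    rates = [5.0] * n_neurons            # baseline firing rates (arbitrary units)
    best = abs(tone(rates, E1, E2) - target)
    for _ in range(steps):
        i = random.randrange(n_neurons)  # perturb one neuron at a time
        old = rates[i]
        rates[i] = max(0.0, old + random.choice([-1.0, 1.0]))
        err = abs(tone(rates, E1, E2) - target)
        if err < best:
            best = err                   # 'sugared water': keep the change
        else:
            rates[i] = old               # no reward: revert
    return rates, best

rates, err = train(E1=[0, 1], E2=[2, 3], target=5000)
print(err)  # the error shrinks toward zero as the right cells are isolated
```

Neurons outside E1 and E2 never move the tone, so perturbing them is never rewarded and always reverted—a cartoon of the decoupling of previously coactive cells that the speaker described.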
Generally speaking, animals appear able to ‘will’ individual neurons to fire in nearly every brain area tested (according to a graduate thesis referenced by the speaker) when provided with appropriate feedback. In Bakker’s terminology, it would appear that we have astonishingly precise volitional control of our own brains if only we can “lateralize” the computational elements comprising them. Noting that this result requires that the architecture of the brain must support specific “addressing” of every neuron within it, a member of the audience said, after a brief preamble that he believed the basic result, “But how can this be?”
It is weird, no? I asked the speaker whether they had attempted to combine the BMI paradigm with the stimulation paradigm (this was the vibrissal system, so it would be precisely controlled deflections of individual whiskers) to determine if animals could volitionally remap sensory cortex (e.g., could a mouse learn to prevent a single neuron from firing when “its” whisker was stimulated, etc.). She thought it would be “very cool” but that has not yet been done (but it certainly will be, before long).
It seemed very much like BBT, everted. It also constrains the notion of “neglect” in medial neglect, perhaps. I thought it might be useful to call attention to this line of research since Bakker has in the past asked about ways to test BBT. I am not sure if this qualifies, exactly, but it seems to touch on some important parts of it.
Also, the phrase “free will” came up during the talk, though only as a joke, of course, because SCIENCE, bitches.
____________________________________________________________________
@Bakker:
Unrelated note regarding The Unholy Consult
Obviously, there are readers of this website who would love to get an advance copy of the book. Have you considered an essay contest for the privilege? Competitors would submit a critical essay (ideally a short one) about some aspect of TAE or PoN to be posted either here or on a website dedicated more specifically to the books. It might be fun for fans of the books to get a sense of what really caught the attention of other fans, who likely noticed details others glossed over. I suppose there are SPOILERS issues, etc., but it might be a useful way to refocus attention on the release, and reward the dedication of fans planning rereads in anticipation of it.
(To avoid accusations of self-interest in proposing this, I promise not to participate).
Cheers.
Golden, ochlo (as usual). I actually reference Gallant and Nicolelis in the sequel to this post! The whole thing screams ‘field effects’ to me. Navigating trillions of pathways to isolate a clutch of neurons sounds preposterous in a way similar to the idea of the brain calculating arc and velocity relative to some fixed reference grid in order to catch a fly-ball. But if the brain had a blunt way to ‘home in’ on those neurons the way fielders ‘home in’ on fly balls, it suddenly doesn’t sound so preposterous… Or maybe it does, I dunno.
I’ll need to sleep on this one!
The possibilities of neurofeedback are actually something I’ve thought about quite a bit, particularly when working through the implications of ‘low-field’ brain imaging (for Neuropath). Imagine having a Google glass readout telling you any number of neurophysiological facts pertaining to whatever your activity. The experimental apparatus provides the animal a way to circumvent medial neglect, a way for the animal’s brain to isolate and yoke its own machinery on the basis of environmental feedback. If you think about it, endogenous tinkering like this suggests that BMIs alone would have a drastic, drastic impact on cognitive function. We already know EEG neurofeedback reliably allows the brain to repurpose existing functions as a component of new behaviours. The thing is that this is what it does anytime it solves some novel problem, sans information regarding its own activity. This is all the mouse is doing, ultimately, isn’t it, cobbling a certain set of neural functions into a novel sensorimotor loop? Mechanically, it doesn’t make much difference: typically brain activity generates physical activity generates brain activity (reward), but in this case, brain activity generates electronic activity generates brain activity (reward). The thing that makes it seem creepy isn’t the rollback of medial neglect so much as the transformation of the natural behavioural circuit, the way neurofeedback + BMI transforms the sensorimotor loop into a ‘sensoricomputer’ loop.
Lots to chew on here… Do you have any links to the lab responsible, ochlo?
And regarding the ARC giveaway, that sounds like an excellent idea, IF I could be trusted to orchestrate such a thing, which I’m not sure I am! But I’ve done this before with Pat’s help over at the Fantasy Hotlist, so I’ll ask him if he would like to run another one.
I think you should very much participate, Ochlo! 🙂
vox launched over the weekend and they’re pimping your books.
http://www.vox.com/cards/books-to-satisfy-your-game-of-thrones-cravings/need-another-fantasy-book-series-to-read-while-you-wait-for-the-next
which is appropriate because this article that is similar to your bugaboo rants is the most read article on the site.
http://www.vox.com/2014/4/6/5556462/brain-dead-how-politics-makes-us-stupid
At first I thought you were referring to the ‘other Vox’! But this is awesome, both the pimpage, and the piece on Kahan. My books need pimping!
Reading TSA until the next ASOIAF book comes out? For me it’s the other way around, mostly.
Possible exaptation of the motor neuron pairs with each pair made from a voluntary motor cortex neuron and an involuntary midbrain neuron? Inside the cerebrum are we building networks between these pole pairs which originate in the cerebral cortex and midbrain to form intentionality? Do these connective pole pair networks feedback by performing exaptation of the proprioceptive neurons at the attended spinal column levels to form the inner feeling of meaning?
Notice below how the motor cortex neurons for speech are closely linked to operation of the speaker’s hands:
Thanks vic. I’ll definitely check these out. And just a general note to all, I not only appreciate the links that people post here or email to me, I am thoroughly indebted. I am a crowd-source construct. Keep’m coming!
“like learning how to ride a bicycle”…. sensorimotor skill
“wee bits” of the apparatus are conscious..
This blog has discussed in some detail the ‘negative reasons’ why conscious access to what are now subconscious processes is a bad idea, such as the way additional computational power would impose new nutritional burdens for no clear benefit and the way such access could lead to a ‘who watches the watchers’ infinite regression. Now that the ability to deal with those negative reasons is on the horizon, what kind of thought is being given to possible ‘positive reasons’ why such access might be a bad idea? Scott described one unpleasant reason in Neuropath, the ability to make yourself more ruthless. Some others that come to mind are the ability of anorexics to suppress hunger and starve themselves to death, and the inability of some people to make a decision and act because they now have access to too much information. I feel like a Luddite even bringing these issues up, but people who have conscious access to their subconscious will be radically different than people who don’t, and I wonder if anybody who is actually creating the technology to make this possible is thinking about the moral and social consequences. I know it’s early and we’re a good way away from seeing brain scanners next to the blood pressure monitors and pregnancy tests at the drugstore, but still…
It’s the end, as near as I can tell. I actually go into much more detail in the follow-up piece, but the fact is, if ‘humanity’ is entirely a contingent natural product (and what else could it be?), there is nothing necessarily human, which means technical transformation is as profound as any transformation can be, and the human will either go extinct, or be reduced to something technically obsolescent, soon to go extinct.
Obsolescence gets to retain its meaning?
I keep reading and commenting on your blog, looking for hope, but I guess you’re not in the hope business. But one thing I particularly like about BBT is how it suggests that everybody seems to experience a miracle in their own personhood, and that miracle that everybody experiences is our warrant for believing all the other miracles. This suggests that as personhood ceases to be perceived as miraculous, all the other miracles will come to seem less plausible. By explaining how people come to see themselves as having souls, BBT provides the best explanation for religion I’ve yet seen. BBT also reminds me that modern humans have existed for less than a million years, while sharks and crocodiles have been around for nearly half a billion years each. The jury is still out regarding the evolutionary utility of human-style intelligence.
Bit off topic, but I came across some fiction called ‘Saving Thanehaven’ by Catherine Jinks. The blurb gives away some of the premise, so I can repeat it here: it starts with a character from a medieval setting who runs into an individual called Rufus and finds he’s inside a computer simulation. Rufus tends to use words to liberate the medieval character and others from repeating the cycle of their lives, but it turns out with some limited information, and perhaps not towards the end one might assume in hearing the words. In the end his full virus name is RuthlessRufus (sounds a sort of callous fellow?). Okay, it is younger fiction, but I feel that doesn’t matter. It seems to have parallels to the PON series, and it’s interesting that it has cropped up independently (though it seems she married a Canadian, so maybe Canadians are the virus… 😉 )
Hi Scott, I’ve just finished reading your post and have some relatively clear comments about it. All in all, I think the picture you propose is still very sketchy even from a naturalist standpoint, and my guess is that when you start fleshing it out, the intentionalist notions you strive to eliminate will come marching back in. After all, the Devil is in the details.
My understanding of Eliminativism is influenced by Rorty: that neuroscience will provide a fancy way of speaking (involving brain-states, causal relationships and so on) which will be better at making sense of one another than our traditional, obsolete, intentional concepts. On your picture, you try to make sense of language-use in terms of complex coordinations of behaviour between machines/systems. So, language is a vehicle/mechanism for the transmission of information in the context of a cooperative activity. Let’s take a simple example of such an activity: hunting. A group of animals coordinate with the aim of catching prey. Now, wouldn’t you say that the animals want to catch the prey? Or that they have beliefs about how to do it? Sure, you can call beliefs informational states. But they still carry the aboutness of intentionality, the world-to-mind direction of fit that Searle talks about. And desires? They have the mind-to-world direction of fit. Sure, you can have a neuroscientific equivalent of those states, but will that neuroscientific story be more illuminating than the intentionalist story? Is the design stance better and more attractive (in Rorty’s terms) than the intentional stance? If so, you still have to show it.
Back to the animals hunting. Let’s say that hunting is a mechanism embedded in the communal practice of hunting. What is the aim of this mechanism, its biological purpose? Obviously, food. But when you talk of a mechanism’s purpose or design, isn’t this intentional talk? As if the mechanism was designed by Mother Nature, with a purpose in mind? Doesn’t even a blind mechanic have purposes? And aren’t these kinda like wants or projects or intentions? I think on this point, even a naturalist like Dennett defers to Brandom.
Now, you talk of reasons for one’s claims, in the context of communication. But why do reasons come into the picture at all? According to Dan Sperber, asking for reasons is a form of epistemic vigilance, when we can’t readily accept someone else’s claim. And that’s because they might be trying to deceive you. Like, back to the example of hunting, maybe one of the animals wants all the food for himself and has found a way to fool the others. So, asking for reasons is a way of checking information and filtering true from false information. That’s how inference and argument come into the picture. But my point is that, in order to make sense of the practice of giving and asking for reasons, you have to take into account the possibility of deception, and deception is an intentional notion.
One last note about your discussion of Modus Ponens. A lot of what you say reminds me of Wittgenstein’s rule-following considerations. The idea that justifications have to come to an end. But I thought the moral of that story was that knowing-how cannot be reduced to knowing-that. Knowing-how, in this context, just means being able to participate in a linguistic practice. So, I think you agree with Brandom that inferential norms like modus ponens are making explicit norms which we implicitly follow in our communicational practices. Now, you say that these norms can be accounted for in causal terms. But it’s not clear what exactly is causal about them. I mean, do you want to say that we’ve been conditioned to follow them? Wittgenstein would agree that we’re trained to participate in norm-governed practices, but training is not yet rule-following. For Brandom, these practices are normative, in the sense that they involve correctness and incorrectness, commitments and entitlements of speakers, and sanctions. Do you want to deny that making a claim involves a commitment to truth? Or that providing false information makes one a candidate for sanctions? And what is the naturalist equivalent of a commitment? Is there something causal about it? Surely, you don’t want to say that P, and if P then Q, causes me to believe Q. Surely, there are cases when I just don’t see the inferential connection. Or, I might decide that, based on the implausibility of Q, I’ll give up my belief in P. But those two beliefs commit me to either embracing Q or rejecting P. So, all in all, I don’t see how you can account for Brandom’s characterization of assertive practices in causal terms. But Brandom, taken together with Sperber, provides a comprehensive, evolutionary account of how assertional practices came about.
So, if you don’t buy into that account, you have to offer a different story of how the practice of giving and asking for reasons came about, as well as offering a naturalistic account of the norms of correctness implicit in such a practice. (For a more comprehensive discussion of these issues see my paper “The Origins of Doxastic Commitments”, published under my incarnation as Andrei Buleandra: http://ualberta.academia.edu/AndreiBuleandra)
To give you a sense of just how far out of your intentional assumptions you need to step to charitably grasp BBT consider: On my view, language has no content, and so isn’t the ‘vehicle’ for anything. It’s a complicated synching mechanism for coordinating the behaviours of homo sapiens vis a vis their environments. On BBT, intentionality as traditionally theorized (such as language as something bearing content) is largely the artifact of how this synching mechanism becomes available to metacognition. We have the basic first-order vocabulary we do to facilitate synchronizations that involve the suite of heuristic mechanisms we possess to cognize systems too complicated to causally cognize – each other primarily. These heuristic systems are powerful, given that they are deployed within adaptive problem-ecologies.
So with reference to:
My claim, again, is that the intrinsic intentionality that you attribute to the usages of these terms is – to use just such a term – erroneous. There just is no such thing as such intentionality as theoretically metacognized – no matter how pragmatically deflated. ‘Erroneous’ – or for that matter, ‘deceived’ – refers to instances where the other is synched to something other than the world, if anything at all. ‘Epistemic vigilance’ doesn’t require there be any intrinsically normative function (believed-in or actual) so much as it requires neural systems dedicated to vetting instances of linguistic synching. In other words, there is no picture in the head hanging over and against a world that makes it true or false in the eyes of another, only instances of synching, embodied ways in which you and others actually engage the world via the very mechanisms that we know, as a matter of empirical fact, are doing the heavy lifting. The reason philosophy, and now science, has been confounded by intentionality so long has to do with the functional robustness of intentional cognition, the way these heuristic mechanisms can accomplish so much with so little information, as well as the way they allow for numerous localized exaptations. They allow us to understand, even though that understanding itself is almost entirely opaque as well as incompatible with causal cognition (as we should expect, given that we evolved them to troubleshoot problem-ecologies in the absence of causal information!). So it makes sense that in sciences involving complicated causal systems you would find an uneasy reliance on these cognitive mechanisms – exactly what we do find, in effect.
So with Sperber and a great number of other researchers you see reliance on intentional terms and an assumption of some picture arising out of the intentional swamp of traditional philosophy. But I actually think a great many researchers are beginning to eschew, if not the old vocabularies, then the canonical philosophical interpretations given them. So regarding:
The question is one of why I should have to account for Brandom’s characterization. What you’re asking I do is account for competence, commitment, and so on as though they were real normative things. They’re not. Right now you have – as I once did – this assurance that the game of giving and asking for reasons is the obvious theoretical account of what’s ‘really’ going on. Okay, so… How do you know? What’s the evidence? Nobody has ever seen a ‘commitment’ in nature, or captured any ‘entitlement’ in laboratory experiments. I’ve already explained why the experimental gerrymandering of intentional terms takes the form it does, so the fact of its ‘uneasy fit’ in science is as much evidence for my view as yours (I think more so mine, since I can actually explain how and why it works the way it does). The evidence definitive of your view, if there is any, has to be metacognitive, doesn’t it? If not, then what? If it is metacognitive, then I think it’s pretty clear we would need a magical brain to intuit the things Brandom claims to intuit. I’m more than willing to engage in that debate!
So, do I ‘want to deny that making a claim involves a commitment to truth?’ Not at all, so long as we are clear about what the problem ecology is, given that the heuristics involved are pretty clearly adapted to first-order contexts, the kinds of problems they evolved to solve. If we want to understand the nature of things like ‘claim-making,’ ‘committing,’ and ‘truth-telling,’ we need to look to cognitive science, not to one of the innumerable philosophies of meaning, which provide much in the way of verbiage to throw at problems, but very little (once again, as BBT would predict) in the way of applied problem-solving. Brandom doesn’t know what he’s talking about, no more than any philosopher does. He’s just fielding theoretical guesses that lie beyond the pale of definitive arbitration. BBT is speculative as well, of course, but it will either become an instance of bona fide theoretical cognition (likely in some modified form) or not.
[…] means that once a more effective organ is found, what we presently call ‘reason’ will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of […]
[…] See, Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation, The Blind Mechanic, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts, Zombie Interpretation: […]