Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature
by rsbakker
Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing to them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.
“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12
The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.
My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.
The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among many things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.
The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural interventions, or ‘pushing-back,’ require some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain–environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.
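The ambiguity at the heart of the Inverse Problem can be made concrete with a toy Bayesian sketch. This is my own illustration, not anything in Deacon or BBT, and every number in it is invented: several distal causes are compatible with one and the same proximal signal, and only prior expectations about the environment let the actual cause ‘push through.’

```python
# Toy illustration of the Inverse Problem: one proximal signal (a low
# rumble) is compatible with several distal causes, so inference must
# weigh likelihoods against priors. All numbers are made up.

# P(signal | cause): how likely each distal cause is to produce the rumble
likelihood = {"avalanche": 0.8, "wolf": 0.1, "wind": 0.4}

# P(cause): prior plausibility of each distal cause in this environment
prior = {"avalanche": 0.05, "wolf": 0.15, "wind": 0.80}

# Bayes' rule: P(cause | signal) is proportional to P(signal | cause) * P(cause)
unnormalized = {c: likelihood[c] * prior[c] for c in prior}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.2f}")
```

Note that although an avalanche is far more likely than wind to produce the rumble, the posterior still favours wind, because wind is far more common. The signal alone underdetermines its cause; the system’s history of ‘being pushed by the right things in the right way’ is what disambiguates.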
Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.
The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.
The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility is a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ requiring sensitivity to the neurobiological apparatus of cognition to be solved effectively lie outside the capacity of the system to tackle. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.
Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.
So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.
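The ‘fast and frugal’ heuristics invoked here are a real research programme (Gigerenzer and colleagues), and a toy sketch can show why such kluges are cheap. The example below is my own illustration, not the author’s: a simplified version of the ‘take-the-best’ heuristic, which decides between two options by checking cues one at a time in order of validity and stopping at the first cue that discriminates, rather than integrating all available information.

```python
# Toy "take-the-best" heuristic: check cues in a fixed validity order
# and decide on the first cue where the two options differ.
# The cue names and city data below are invented for illustration.

CUE_ORDER = ["is_capital", "has_airport", "has_university"]

def take_the_best(a: dict, b: dict) -> str:
    """Judge which option is 'bigger' using one discriminating cue."""
    for cue in CUE_ORDER:
        if a[cue] != b[cue]:
            # The option possessing the cue wins; all later cues ignored.
            return "a" if a[cue] else "b"
    return "undecided"  # no cue discriminates

city_a = {"is_capital": False, "has_airport": True, "has_university": True}
city_b = {"is_capital": False, "has_airport": False, "has_university": True}

# First cue ties (neither is a capital); second cue decides in favour of a.
print(take_the_best(city_a, city_b))
```

The point of the sketch is the trade-off: the heuristic consults almost none of the available information, yet performs well in the environments it is matched to. This is the sense in which low-dimensional, special purpose kluges can be the brain’s ‘only recourse’ without being failures.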
The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.
On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.
We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.
Why do our integrated, high-dimensional, explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.
Deacon’s entire problematic turns on an equivocation between the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.
At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:
“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140
But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?
Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:
“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110
He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, in other words, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.
Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient; it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit straightforward entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’
He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), arguably his most ingenious innovation. He then moves to conclude:
“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322
And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?
Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!
In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective, therefore intentionality is objective.
Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.
But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.
The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.
So when Deacon writes:
“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39
we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.
On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. They appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaptation of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.
We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?
Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.
“The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.”
If it could cognize its own origins what would that look like? What would be the scope of that awareness?
In the “cell colony” that is human society, aren’t you through this inquiry the bit of grey matter that answers that first question, and BBT the answer itself? What limitations in the “colony” cognizing itself are implied by the way you formulate BBT?
Your first question is one that I’ve pondered for years – Jorge, I think, was the first to begin pressing me on it. The fact is, unless we build human-like neglect into artificial consciousness, some kind of nonintentional phenomenology will be what it possesses. It’s a difficult one to be sure.
I’m not sure I understand your second question, Otto, but then, I’ve never really pondered the way BBT might ‘scale up’ to the social. What did you have in mind?
This post seems to almost be your Swan Song, a sort of final summation of BBT, as if in your last sentence you are finally saying: “The time has come to leave them behind and begin the hard work of discovering what new conundrums await.”
I would agree. I think we grasp the basic notion of BBT, of our ‘medial neglect’, our ignorance in the face of the brain’s ability to cognize or metacognize itself.
Now that that is out of the way: What are these new conundrums that await? Where do you go from here?
I definitely had the ‘here we go again’ feeling writing this, noir, so your observation is quite perceptive. At any point in time I’ll have several potential posts in the works – this one has been on the blocks for months now. The problem with doing book reviews/commentaries, I find, is the awareness of new readers trying to fathom BBT. But I’ve been coming across some great books of late… ones that are pushing in post-intentional directions on their own. So my upcoming posts on Stephen Turner and Lambros Malafouris will break out of the recap mode. And hopefully I’ll finish my monster demolition of the Implicit soon, where I attempt to develop possible ways of looking at ‘language without rules’… But I’ve yet to crack open Knott’s new neurolinguistics book… and I want to do a big reread of Davidson.
Always wrestling with projects bigger than me – Christ.
If I had to guess, one of the new conundrums will turn on ways to characterize ‘supermechanisms’ at varying spatiotemporal scales. I realize a number of theorists are already throwing themselves at these issues, but short of some way of naturalizing intentionality, they can only gerrymander their way around Deacon’s ‘vast fraction’ and it shows. BBT lets us chuck the notion of essential boundaries, allows the political to seamlessly dovetail into the evolutionary, physiological, technological, and so on. The biggest conundrum of all, however, will be one of trying to find some way to make the post-intentional ‘livable’ from the standpoint of creatures doomed – pending the posthuman – to view themselves through a funhouse mirror. How does one live in the age AFTER the debunking of noocentrism? What should philosophy look like after ‘should’ is removed from the second-order conceptual lexicon?
The list goes on and on.
“How does one live in the age AFTER the debunking of noocentrism?”
Sounds like a Science Fiction novel in the making to me… 🙂 Maybe a satiric swan song for the noocentric collapse…
Unlocking the language trap.
This is very interesting stuff. I’ll definitely read your other post about the Blind Brain idea, because I found it difficult to get a firm grasp on it from this post. At a basic level, it seems to be a form of mysterianism (though it seems to be concerned with the “why” and “how” in a way that most mysterian viewpoints aren’t).
I do think there’s an answer to the question you posed:
“When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization … on the basis of almost certainly unreliable metacognitive hunches.”
While it’s definitely true that our cognitive models of reality are both indirect and simplified, we can say that they’re “true” insofar as they give us three things:
1. They allow us to reliably predict what will happen (or has happened) within their sphere of conjecture — including, sometimes, how we will *perceive* what happens.
2. They are consistent with the other models we have that do #1.
3. They allow us to develop further models concerning parts of reality for which we don’t yet have good models.
So I’d say that a theory of teleological “causality” doesn’t need to have any sort of “reality correspondence” beyond the condition that it “works”. And it will always work in a provisional and incomplete way.
After reading the later Wittgenstein, I’m always sensitive to the parts of a conjecture that seem to be leading to a conceptual fly-bottle (usually in the form of category problems). The need for a “turn” to transition between small, local, efficient causality and large-scale teleological causality does activate that sensitivity. It’s possible that there’s a trap in trying to meet condition #2.
Welcome Asher. My viewpoint isn’t mysterian insofar as I use ‘cognitive closure’ in a far different way than someone like McGinn, say, does. I think consciousness will be explained, but that our metacognitive intuitions will balk in myriad ways. It’s the consciousness we think we have that no one will ever be able to explain (as opposed to explain away).
With respect to your answer (1), what I would like to see are examples of the systematic predictive utility of, say, Brandom’s inferentialism, over and above the systematic predictive utility belonging to our first-order uses of intentional terms. Without (1), (2) and (3) fall by the wayside, it seems to me. The heuristic efficacy of our intentional cognitive toolbox is such that it should come as no surprise that we can gerrymander experimental contexts around them, but the prediction would be that the intentional posits so used (like ‘content’ in psychology) would remain ‘unexplained explainers’ that bar the integration of the resulting theories with natural science more generally. What BBT provides is a way to see these posits in terms of medial neglect: in the case of representation, as a way to fathom systematic covariance absent access to the machinery that makes such systematic covariance possible. The point is roughly the Dennettian one, albeit tied to a theory (BBT) that actually says what’s going on (check out this ). But the result is that we have a thoroughly naturalistic way of understanding the kinds of shortcuts our brain is prone to take, as well as the different kinds of metacognitive illusions we should expect will plague theoretical reflection. It lets us send the last of the ghost packing. You might want to check out this …
Defending Brandom and defending Deacon are two completely different beasts, but I think that it’s exactly the right question to ask. That is, if inferentialism doesn’t let us do anything that representationalism (or even folk-psychological semantics) doesn’t let us do just as well, there’s not much point.
Anyway – I’ll definitely have a look at your links. It’s nice to find an “outsider” who’s thinking about many of the same things that have been preoccupations for me all these years.
I think Deacon’s book does a good job of explaining “goal seeking, willing, rule-following, knowing, desiring.” Most of the book is dedicated to that problem. But it seems to me the last part on consciousness allows the homunculus back in.
Where specifically do you see him delineating between the observed and the observer-dependent?
To me, it seemed more a matter of intuition priming than actual demonstration of the kind of emergence he’s speaking of. There is no inverse causality in his account. It’s still all cranes, as Dennett would say, only rhetorically goosed to make things seem kinder to intentionality than they actually are. They seem kinder to intentionality because they soft-sell the actual functional input of everything driving the superordinate system, and all the mangled loops between them. Causality gets kinked this way and that, but the process remains entirely irreflexive. As he says himself, agency is ‘freedom to,’ not ‘freedom from,’ which is essentially Dennett’s account. Strip away all that warm and friendly rhetoric and he is describing something every bit as alien as the brain we are presently discovering. All he really has is his definition of intentionality as something absent, and then his reconceptualization of constraints in ‘absential’ terms, doesn’t he? This is why I think Dennett was so sympathetic to his project in his review: for him, the success in ‘intuition priming’ could be construed as related to the ‘realness of the patterns.’ From the standpoint of BBT there is simply no question of there being some kind of complicated problem solved; it’s just wildly unlikely that the solution will look anything like what metacognition tells us it does. It predicts that ‘goal-seeking’ will suffer the same fate as memory, say, or as ‘concept’ seems to be suffering now.
Deacon is not a Dennett. Dennett does not understand the role of chance in self-organizing processes.
Deacon’s argument depends upon the notion that interactions between living beings and their environments are not strictly determined by transformations of matter and energy—and thus are not fully describable by physical laws (Newtonian, quantum mechanical, or thermodynamic) alone. Living beings, organisms as well as single cells, tend to interact with matter as signs of potentially useful (or confirming) processes. Responses are “purposeful” and “interpretive.” Interpretive responses occur as localized biases upon the probabilities of contingent outcomes and involve, therefore, an element of chance. Intention might be best described as “making your own luck.”
The abilities of organisms to respond in useful ways exist because they were selected for in the past. But a living entity is not simply programmed by natural selection, provided with set algorithmic ways of interfacing with the environment. If this were so, living beings would be mere robots, and unlike robots, organisms behave in unpredictable ways. A living entity’s next state is not strictly determined by the entity’s internal state plus the state of its detectable environment. Life seems to be able to make use of (transform) noisy information in its environment, transcending any inherited algorithmic mode. How is this possible?
Part of the answer involves attributing the determination of interpretive acts to other types of causes than material and/or efficient causes, namely formal and final causes (See Alicia Juarrero 1999, whom Deacon should have cited), whose outcomes are not predictable in the sense that they involve biases developing through relations (e.g. physical similarity or proximity) and associations, which cannot be assigned a numerical value or transformed in lawful ways (i.e., x’s degree of similarity to y cannot be expressed in terms, e.g., of an amount of energy). Here Deacon should have cited work from the field of biosemiotics. These kinds of relations can be imprecise, unpredictable and context dependent or contingent.
I’ve been planning to give Juarrero a looksee – I only heard of her because of the ‘scandal’ surrounding IN, which is a scandal in its own right I suppose. I’ll give it a close look. But I fear I just don’t see what difference noise makes when it comes to intentionality – or how noise isn’t itself just another component of mechanistic processes (as it is in evolution and neural selection). I have to admit to a certain amount of head-scratching when emergentists discuss indeterminacy. ‘Determinism’ has never been the problem for intentional phenomena (nor a necessary condition for robotics: noisy robots are not a contradiction in terms). Contingent irreflexivity is the problem, up and down the list of intentional phenomena – and this is the bullet that Deacon bites from the outset. It scuttles everything from the now to truth to rule-following. And this is no problem whatsoever for BBT.
What work in biosemiotics are you thinking of in particular, VN?
Re Biosemiotics. See http://media.uoregon.edu/channel/2013/05/07/the-emergence-of-biosemiotics-from-physiochemical-dynamics-terrence-deacon/
But let me leave that point for now and back up a bit to address your earlier question about where I think the argument went a little astray: IN, p. 527. Talking about organisms with brains, Deacon claims “A separate dynamical component of its teleodynamic organization must continually generate a model of both its overall vegetative integrity and the degree to which this is (or might be) compromised with respect to other contingent factors. A dynamical subprocess evolved to analyze whatever might impact persistence of the whole organism, and determine an appropriate organism level response, must play a primary role in structuring its overall teleodynamic organization.”
On p. 528, he continues, “It requires a self that creates within itself a teleodynamic reproduction of itself. This emergent dynamical homunculus is constituted by a central teleodynamically organized, global pattern of network activity.”
The difficulty with this is that “model” is a poor choice of terms and not explained. The intention of the organism is represented in its habitual actions which tend to sustain it. There is no other “representation” in the organism.
I don’t have serious arguments with the rest of the book, though I think the absence rhetoric was unnecessary and maybe counterproductive–reminded me a bit too much of Derrida!
Thanks for the link, VN. For me, the final chapters just exemplified the weakness of the entire approach: he had misconstrued the problem of intentionality from the outset, so he really had no hope of making headway regarding the big issues. The autogen (which I think is brilliant – did he borrow this as well?) is the perfect place to see where it goes all wrong, where you can really see that redescription is all that he’s engaged in, and that short the ‘constraint as absence’ metaphor, he really isn’t talking about emergent intentionality at all, but rather levels of recursive mechanical complexity, which, although they complicate the reductive paradigm, by no means defeat it. Do you agree with this? Does Juarrero have the real case I should consider? For that matter, do you know of anyone who has countered the kind of general criticism I’m making here (which applies to Thompson as well as Deacon)?
I agree with you about the absence rhetoric: it sure gave Fodor cause to romp in his review (which will stick with me as one of the worst reviews written). I just don’t see how Deacon has any case whatsoever without it.
rsbakker says “…he had misconstrued the problem of intentionality from the outset, so he really had no hope of making headway regarding the big issues.”
Perhaps you expected Deacon to present a quasi-supernatural explanation of intention, and therefore you are surprised at how “reductive” it is?
Let me say something about the evolving history of reductionism. Bear with me as I state the obvious at first; I’ll be getting to something less obvious. Enlightenment Philosophes thought all causality could be reduced to descriptions of direct matter and energy transfers: particles hitting each other. Thermodynamics introduced the idea of simple emergence, like temperature. In this case, an effect (overall temperature) can’t be reduced to individual efficient causes, but is the outcome of a statistical average of multiple effects of particles hitting one another and moving around. Natural selection introduced the idea that adaptedness could be reduced to variation and selection for function. Quantum mechanics introduced the idea that deterministic causality emerges from indeterminate, non-particulate states. Thus everything can be “reduced” to probabilistic determinism (not the strict material determinism of the Enlightenment). Complexity science introduced the idea that in open dissipative systems, effects are not proportional to causes. That is, even in deterministic systems, the degree of unpredictability of the initial conditions is disproportionate to the degree of unpredictability of outcomes, because effective factors (i.e. new contexts) are generated by the system. Biosemiotics attempts to define (or “reduce”) those effective factors in terms of relational causality: similarity and proximity. (And also arbitrarity, but I’m getting ahead of myself.) When Deacon describes the effects of the shapes of molecules on outcomes (morphodynamics in his terms) he is appealing to biosemiotics. The relative similarity and proximity of the components has an effect on the probability of the outcomes. Reduction to formal cause is different from these earlier evolving forms of reductionism: relations (e.g. physical similarity or proximity) cannot be assigned a numerical value or transformed in lawful ways (i.e., x’s degree of similarity to y cannot be expressed in terms, e.g., of an amount of energy). These kinds of relations cannot be captured statistically. They can be imprecise, unpredictable, and context dependent or contingent. Bringing in formal cause is not simply a redescription of old reductionism: it adds another kind of reductionism, very similar to the way that other new sciences have enlarged our understanding of what reduction means.
Deacon’s descriptions still “reduce” to natural causes, but in addition to material/efficient cause (direct energy transfer) his “reductive” description also includes thermodynamic effects, morphodynamic effects, and teleodynamic effects. I don’t care for Deacon’s neologisms, so let’s just say formal cause and final cause (we effectively reinterpret the history of these issues by doing so). Purposeful behavior does not reduce to material causes (particles hitting one another). Other factors affect the probability of outcomes.
Regarding his “absence” metaphor, Biosemioticians don’t find it useful or necessary at all. It’s a metaphor only and if it were cut from his book, the argument would be much more sound.
Alicia Juarrero handles the complexity part of the issue better than Deacon. But I think (I’m biased of course) that Biosemiotics takes the issue much farther.
I think the worst thing you can say about Deacon’s book (other than criticizing his absence metaphor or the return of the homunculus in the last chapter) is that he doesn’t say much of anything new (to complexity scientists or biosemioticians). Nevertheless, I do think it’s an important work because he really takes the time to explain thermodynamics to the general reader, and he illustrates formal cause and final cause visually with his autogen. This is very useful.
“Perhaps you expected Deacon to present a quasi-supernatural explanation of intention, and therefore you are surprised at how “reductive” it is?”
No, not surprised at all, in fact. What surprises me is that he doesn’t see the argumentative burden arising out of this (and I think this is why he seems to intuitively think that absence is so conceptually important). The fact is, he structures the whole problematic around intentionality–so he needs some way of showing the reader that it’s more than a ‘heuristic gloss’ that allows him to progressively ramp up his use of intentional vocabulary. What I would want to say (given BBT) is that researchers into complex systems should actually quarantine as many intentional concepts as they can manage–see it as an issue to be visited at a later point, perhaps. IN, I would argue, is an example of precisely the kinds of problems that arise when such work is cast as apologia, as a way to somehow rationalize our metacognitive intentional intuitions. We have no cause to trust them, and we’re just too good at rationalizing what we ‘feel’ to be right. Meanwhile, we know that machinery of various kinds is making it happen, no matter how one complicates and divvies up the causal pie.
I like talk of complicating reduction, complicating causality, and so on, because things always tend to be more complicated. But why talk about ‘purposeful’ as opposed to, say, ‘convergent’ behaviour–or ‘autonomy’ as opposed to, say, ‘variable componency’? Clear out all the old terms, all the old ghosts, all the duped intuitions, and just try to describe and explain what’s actually there. We know that intentional modes of cognition are heuristic, but we don’t know their adaptive problem-ecologies, and this is a recipe for theoretical trouble: it means that things that strike us as painfully clear could be dupes, plain and simple. (Just look at the mess of cognitive science more generally!) So tackle these issues post-intentionally, and then, after this is said and done, look at the possible ways our traditional, metacognitively informed understanding can be mapped across this new terrain. Otherwise, the problems are complicated enough, the issues abstract enough, and our intentional intuitions false-positive-prone enough, that most anything can be shoehorned into our traditional conceits.
It’s systematicity of living behaviours that we’re ultimately trying to get a grip on, isn’t it, VN? Not our almost certainly blinkered metacognitive appraisals of that systematicity. Talk of ‘inverted causes,’ ‘teleology,’ ‘final causes,’ and so on, is talk beholden to intuitions that deliver us to the supernatural quite effortlessly.
The fact is, unless we build human neglect into artificial consciousness, some kind of nonintentional phenomenology will be what it possesses. It’s a difficult one, to be sure.
I’m not sure why it’s taken that such a device would do anything, any more than one’s personal computer does anything by itself.
Taking this on, it’s also possible that part of the human mind already ‘recognises’ its processes–and it does nothing, like a PC does nothing by itself. Further, there may be a gradient between this knowing-but-doing-nothing end of the spectrum, scaling up to the other end: not knowing, but acting (acting upon ‘hungers’). Perhaps the ‘subconscious’ exists in the lower end of that spectrum, though not at the extreme lower end. Controversially, one might suggest autistic people tend to be toward the lower end of this spectrum.
“How does one live in the age AFTER the debunking of noocentrism?”
Three things. First, you’ve given the geocentrism parallel several times and I’m wondering how it applies here. While the fall of geocentrism changed the way humans think of themselves and had ripple effects that impacted material human history, it seems to me that people keep waking up, pissing, screwing, killing each other, etc. mostly the same way as before. We live our daily lives, necessarily, as if the fall of geocentrism had never happened. If I fully incorporated the fact that everything is kind of flying all over the place around a ball of fire (or whatever is happening), I’d be too dizzy to get out of bed, obviously. I’ve only read a little, but it looks like an extreme case of Todd and Gigerenzer’s less-is-more effect.
Second, how depressing will the fall of noocentrism as a way of framing the world be, and for whom (will everyone understand it fairly well)? Enough to make antinatalism a mainstream position or some such?
Third, how will it be used as a means of control? Sociopathic rulers have been controlling the rest for thousands of years but schooling and propaganda breakthroughs in the past 150 years have given them powerful new weapons. The thought of them physically manipulating dissident brains is terrifying. In other words, the shock treatment and lobotomy approach, but more effective — turning people into tools as opposed to vegetables. Already, you have children diagnosed with “stop imposing rules on me and pretending they’re for my own good” (oppositional defiant) disorder and “can’t pay attention in class because it’s mind-numbingly boring disorder” (ADHD), etc., and drugged accordingly. This would be another, game-changing, weapon.
So I’m not clear whether you’re talking about the depressingness, the potential of brain engineering, the incorporation of a new non-noocentric framework in consciousness itself, or something else. In the short term, I’ve been under the assumption that humans will simply continue to behave “as if teleology” and “as if free will” because it works (and lacking alternatives) just as they continue to behave as if they lived in a mostly still world with objects moving in contrast to this background because that works. Until they harvest our brains or reverse engineer them or whatever.
All great questions. For me, all of them deliver us to the lap of what I’ve been calling the ‘Akratic Society’–basically the world I give narrative form to in Neuropath: a continual ratcheting up of the contradictions of Neil’s ‘Disney World,’ where the fantasies of pseudo-agency deepen for the masses even as the ‘nudge tactics’ already utilized by administrative institutions become more and more targeted and refined. I agree: debunking noocentrism is the narcissistic wound that the vast majority simply will not be able to bear. The majority can’t even relinquish biocentrism. Medicalization will proceed apace, so at least our spiritual euthanasia will be ‘humane.’ It’s the ‘consistency hounds’ who are in trouble, the people who need to be able to square all their commitments with a single rule.
Personally, I don’t see how BBT bears on antinatalism either way.
“In the short term, I’ve been under the assumption that humans will simply continue to behave “as if teleology” and “as if free will” because it works (and lacking alternatives) just as they continue to behave as if they lived in a mostly still world with objects moving in contrast to this background because that works. Until they harvest our brains or reverse engineer them or whatever.”
I actually think ‘as if teleology’ and ‘as if free will’ are metacognitive artifacts, cultural constructs, albeit ones thoroughly stamped into the manifest image. One of the (many) things that troubles me about BBT is the way it seems to be haunted with an incipient atavism, suggesting that we enjoyed an age of metacognitive innocence, where we were not deceived about ourselves because we lacked the virus of philosophical reflection. We argued without the shadow of a mythical ‘game of giving and asking for reasons,’ referred without the confusions of reference as theorized, and so on. This is where I wonder whether there might not be a way out via Pyrrhonian skepticism, whether it isn’t possible to scrub all the second-order accretions away and to find some kind of first-order ataraxia.
“The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions.”…RSB
Not that I totally disagree with you, Scott, but you may be giving him short shrift here. Absences may be metacognitive illusions, but they are Deacon’s central point, because he believes he has discovered the key illusion, which is reflected in mundane cognitive illusions like his statement that the space between the spokes is the wheel. But maybe in fact real absences like hunger proliferate fast-food signs everywhere, and the need for companionship and procreation promotes a culture of relationships and on-line dating. Maybe the key illusion is the demarcation between our neocortex (our language cortex, which language eliminativists miserably fail at eliminating) and our old brains, which we share with every other species.
I read Neuropath and Consciousness and the Brain together and I can’t help but wonder why Dehaene is so hopeful and you’re so grim. That having been said, I think that most humans don’t really believe scientific theories until they give us technology. We might not understand general relativity in a mathematical way but a thermonuclear fireball makes us believe it all the way to our alligator brains. In the same way I think eventually the technology that Dehaene and his colleagues are using to understand the human brain will lead to honest to God cures for diseases like autism and schizophrenia, then lead to cheap hardware to let people tweak their personalities and give themselves highs that no drug can match. The technology will both convince us that souls don’t really exist and help us get over the loss. My biggest concern is that some people will choose to defend their souls the way some people have chosen to defend their God, with arms. I can imagine terrorist threats to the people doing this science and I can imagine terrorist threats to the people thinking these thoughts. It might be several years down the line but someday you may have to start watching your back.
I actually think you end up answering your own question, Michael! It must have been a strange back and forth.
Forgive me if this seems like an elementary question that you’ve already addressed somewhere, but if we can’t trust our brain-mind’s understanding of intentional causes, how can we be so sure that our brain-mind’s understanding of physical causes is any better? Isn’t all our human scientific understanding of supposedly non-intentional (physical) causality based on an analogical extension or abstract generalization of our understanding of human agency? Same goes for the trope “mechanism”: isn’t this the projection of something human beings design and build onto a natural world that grows and evolves itself? There is plenty to be worried about regarding anthropomorphism, which is precisely why I’m also worried about mechanomorphism (which is just another variety of the former).
If you’ve written about these questions already, I’d be happy to accept a link. It just seems to me that the argument you’re making regarding our apparent perception of intentional causes in ourselves and the world could also be made about our apparent perception of physical causation. The blade cuts both ways, does it not?
“Isn’t all our human scientific understanding of supposedly non-intentional (physical) causality based on an analogical extension or abstract generalization of our understanding of human agency?”
Is it? Then why would the two modes be incompatible? Why would anthropomorphization be problematic? One of the many things I like about IN is the way Deacon owns up to that incompatibility.
There are mechanisms for star formation, plate movement, climate, and so on. Any causal system is a mechanism. That said, ‘mechanomorphism’ is also a threat–yes! Human cognition consists of heuristics all the way down. But the extended heuristic paradigms of natural science, one would presume, are miraculously powerful for a reason. Given our theoretical incompetence outside the sciences, I take them as a baseline for factual theoretical claims. They provide our best, highest-dimensional understanding of nature–don’t you agree? If so, then why bother with salvaging ontological (original) intentionality? As I said to VN, it’s the systematicity of living behaviours that we’re ultimately trying to get a grip on.
I was impressed by Deacon’s attempt to naturalize intentionality in “Incomplete Nature,” but I don’t think he goes far enough. I’ve developed a critique of his attempt from a Whiteheadian perspective here: http://matthewsegall.files.wordpress.com/2010/05/physics-of-the-world-soul-whitehead-and-cosmology.pdf
From my perspective, human consciousness and the processes of the physical world need not be understood as incompatible. I’m in agreement with Steven Shaviro (http://ecologywithoutnature.blogspot.com/2011/09/oooiii-video-archive-1.html), who argues that there are currently only two coherent possibilities remaining if we hope to overcome the correlationist or intentionalist circle: 1) eliminativism of the sort you seem to be defending, or 2) panexperientialism of the sort articulated by Whitehead. Deacon flirts with the latter, but fails to commit himself to it as an ontological thesis and so despite all his effort to make value and purpose emerge at the biological level he still ends up with the same explanatory gap he started with.
Even if the universe is experiential to the core, this still makes anthropomorphism a potential pitfall, since not all experience is conscious or human-like. Even if the whole of nature is alive in some sense, this doesn’t mean we can assume that other organisms think and perceive in the way that we do.
The methods of natural science are powerful indeed. But I think it is a mistake to conflate power with knowledge. That we can instrumentally manipulate the brain so as to produce certain experiential effects does not necessarily mean experience is caused mechanically in the skull. The ability to skillfully manipulate a system does not require that we understand the ontology of the system in question. What neuroscience has provided for us up to this point is what Owen Barfield called “dashboard knowledge.” We know what happens when we push this or that button. But neuroscience has not yet been able to pop the hood in order to really understand what is going on causally (and here “causation” may turn out to include more than just efficient causes).
It seems to me that you’re drawing a rather arbitrary line between the theoretical incompetence of extra-scientific reflection vs. that of intra-scientific reflection. It’s reflection in both cases. It’s not like the scientific process somehow takes place outside the context of human reflection and communal interpretation.
Steven was here just a few weeks ago giving a talk and seminar on Neuropath and I had a chance to pick his brain on this issue: I just don’t find ‘fundamental experience’ approaches convincing, simply because I just don’t know how any brain could be in the position of cognizing such a thing short of magic. You tell me: what kind of brain could cognize such a thing? To me, it seems pretty clear that this intuition arises out of our brain’s inability to cognize the ‘stuff of experience.’ As such, it does what it often does: confuses its incapacity for an extraordinary property belonging to whatever it failed to cognize. Either way, the explanatory burden is clearly on the fundamentalist.
Neuroscience is definitely just getting off the ground–I take that as a given. But experience isn’t supernatural, which means that it is caused mechanically. This is why emergentists have the larger explanatory burden to bear: not only are they arguing to complicate our ontology, a good number of them (like Chalmers) are arguing that we need to do so fundamentally. The fact that Deacon shies from this, that he seeks to simply redescribe the physics we already have, is both what makes his approach more sober and why he needed to decisively engage (as opposed to rhetorically dismiss) the interpretivist/eliminativist approach. There’s a big difference between priming intuitions and validating them. Since the latter is his project, he needs some way to put the former to bed.
Otherwise, power is what distinguishes knowledge from opinion, is it not?
No. Only the line you draw yourself here. Intra-scientific reflection is intra-scientific precisely because it belongs to the larger institutional machinery of the sciences, and so finds arbitration that extra-scientific reflection does not.
Asher Kay commented early in this thread; you might be interested in his posts on the Deacon book from a couple of years ago. In the first one the alleged plagiarism scandal intrudes directly in the discussion, with a couple of Juarrero advocates jumping in with accusations of plagiarism leveled at Deacon. Pretty soon Deacon himself shows up on the thread, interacting with the ideas and apologizing for the distractions. In Asher’s second post he critiques a review of Deacon by Colin McGinn (speaking of scandals). During the unfolding of that discussion I also read Juarrero. Here’s my summary of Juarrero vis-a-vis Deacon from near the end of the thread:
“To act top-down from the intentional level in which meaning is embodied is thus to exercise free will, in the following senses: (1) Because all self-organizing systems select the stimuli to which they respond…” (from Juarrero’s last chapter, underlines added by me)
As I rushed toward the conclusion of Juarrero’s book, stumbling over the wild question-begging assertion in this excerpt, I realized something that had somehow eluded me: Juarrero’s intention in writing this book is exactly the opposite of Deacon’s.
Juarrero contends that not only are human actions intrinsically unpredictable; so too are physical and chemical actions. A human is like a hurricane, she avers a couple of times toward the end: you never know what pathway either one is going to take. When Juarrero invokes self-organizing systems and so on, she’s not looking for a way to “close the gaps” between unintentional causes and intentions. She’s arguing that the gaps cannot be closed even in the realms studied by physics and chemistry. And in the quote above she’s even contending that a self-organizing system like a hurricane is selecting its path; i.e., it’s intentionality all the way down.
In short, Juarrero is here to celebrate the gaps as the source of freedom, unpredictability, and (yes) the American entrepreneurial spirit. Deacon, on the other hand, brings in self-organizing system theory as a possible tool in the ongoing work of closing the gaps in the “traditional” scientific explanatory enterprise.
Because I read Deacon’s book first, and because I was alerted to the overlaps between his book and Juarrero’s, I read Juarrero’s book with a hermeneutical lens fitted by Deacon. I think this expectation misled me. These two writers might cover similar turf, but they deploy the tools and interpret the implications in radically different ways. It’s true: Deacon does invoke ideas that Juarrero also uses in her book. But if I were Deacon I’d likely have regarded Juarrero not as precursor but as foil. If Deacon had cited Juarrero, he might have said something like this: Juarrero asserts that “all self-organizing systems select,” but that’s both misleading and almost certainly flat-out wrong and here’s why… Citing Juarrero in this way might have helped Deacon frame his position more clearly, while simultaneously adding some controversy. It might have added a bit of tabloid appeal, an affective lure for drawing more readers to both books.
Thanks for this, ktismatics. Very cool background material. Crazy, the rancour.
Hari Seldon was right. Enough human beings can be treated like an ideal gas.
I’m amazed at how much your writings reflect this deepest structure of the brain, with its lateral and medial aspects. The phylogeny of neocortical development suggests a sensorimotor cognitive mushroom growing out of the brain stem, rather than a language computer in humans. http://en.wikipedia.org/wiki/Pulvinar_nuclei
And looping, freeways to and fro, wondrous and terrifying.
I think every organism that moves has a central locus or “self”, which I, the engineer, theorize lies down there in the lower brain structures—structures that also reflect your cognitive bottlenecks when we hit them.
Consider this Scott, humans and other species can tell the red apple from the green apple, but we can imagine the red apple as green or we can “slip” our senses around on the neocortex with higher language. If the neocortex is a neural net, then how or from where do we drive those “slips” and other superpositions?
If I read Deacon right, the computations can drive the substrate but for biological organisms, the substrate drives the computations.
Scott, when I first learned about Deacon’s book I didn’t immediately think of BBT, but now that you’ve tackled the book it seems like his kind of emergentism is the ideal foil for your eliminativism, since he makes the nonbeing of the first person an ontological virtue!
I had a few critical thoughts as I was reading your article, though, and I’m wondering how you’d address them. You say that “the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment.” I’m wondering how this pragmatic-sounding form of reasoning differs from abduction. Science itself is very pragmatic with respect to its models. Are you a realist or an instrumentalist about the properties that are scientifically posited to explain observed regularities? Aren’t all of these properties partly subjective, in that they indicate the condition of the explainers, since they stem from our interests, judgments of relevance, epistemic ideals, and so on?
I’m wondering, then, how folk psychology is fundamentally in a worse position than any science, including physics and cognitive science. You say that “Philosophical reflection is a cultural achievement, after all, an exaption [did you mean exaptation?] of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.”
But science is likewise a cultural achievement. We didn’t evolve to explain the whole universe, the beginning of space and time, and the quantum nature of fundamental reality. So the brain by itself (without cultures and institutions) will be just as useless in evaluating our hypotheses about quantum or cosmological phenomena as it may be in evaluating those about subjective unobservables. If we rely just on our brain to explain the outer world, we’ll wind up with animism. The point is that there are lots of unobservables in nature. Why, then, do scientific theories of them get a pass whereas folk psychological explanation fails because of some fundamental ignorance of ourselves? Unobservables are equally unobservable, no?
You say, “But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that.”
Again, are scientific models really universal in this sense of “getting things right in general”? This talk of getting things right in general seems a roundabout way of putting metaphysical realism on the table. But how can you be a realist without a semantic account of truth? So don’t all scientific models get things right only for this or that purpose? Again, how is folk psychology in a worse position than the sciences, in terms of their methods of explanation?
You say, “These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist.”
You seem to be saying that people are unobservable, because we can’t observe our inner selves. But what about the problem of other minds? Surely we encounter people in our environment because we’re social beings: we live in small or large groups, beginning with our families. So why couldn’t folk psychology develop just like any scientific model, as a way of explaining not ourselves but something in the environment, namely other selves? We’d then extrapolate and apply that model to ourselves.
We wouldn’t be explaining brains, but the causal (including social) capacities that account for the teleological and semantic phenomena that you say the brain is geared to find because of its programming. I doubt that the full folk theory of the self is innate. We became more and more personal—as we modern individualists think of personhood—when we started to live in larger groups and shaped our environments in such a way that we domesticated ourselves to fit in, after the Ice Age. Paleolithic people weren’t exactly like Neolithic people, just as modern folks aren’t much like the primitive nomadic tribes that still exist. There are similarities, of course, such as the crucial use of language, but there are profound psychological differences too. It’s almost as if aliens really have visited Earth—in the form of those kinds of people that are very different from us.
The point is that there’s sociological and anthropological evidence of such personhood, so we can’t attribute the concept of a person merely to some mental program which activates the concept given certain stimuli. No, folk psychology here seems quite similar to the other sciences, so they stand or fall together—again, in terms of their basic methods. I’ve said this before, but I wish you’d write a defense of scientific explanation that’s consistent with your eliminativism. Maybe you’ve done so and I’ve missed it. Anyway, as always, your writings are very thought-provoking.
The perfect foil in many ways. I highly recommend the book. I think I have the far better theory of course, but in terms of scope and synthesis, I really think IN is a remarkable book, and Deacon an ingenious thinker.
“Why, then, do scientific theories of them get a pass whereas folk psychological explanation fails because of some fundamental ignorance of ourselves? Unobservables are equally unobservable, no?”
The problem isn’t unobservables, the problem is our capacity to systematically discern unobservables (in the formation of efficacious sensorimotor loops), a capacity that is bound to the information and cognitive resources available. In a sense, this is what BBT amounts to: a set of empirical hypotheses regarding the information and cognitive resources available. Scientific theoretical cognition is the baseline for all the obvious reasons. Phenomenological theoretical cognition, well, isn’t possible because of the constraints on information and cognitive resources.
“You seem to be saying that people are unobservable, because we can’t observe our inner selves. But what about the problem of other minds? Surely we encounter people in our environment because we’re social beings: we live in small or large groups, beginning with our families. So why couldn’t folk psychology develop just like any scientific model, as a way of explaining not ourselves but something in the environment, namely other selves? We’d then extrapolate and apply that model to ourselves.”
It goes without saying that it hasn’t developed ‘just like any scientific model,’ which strands us with the question of why. ‘Folk psychology’ consists of a number of biomechanical hacks – heuristic systems – that our ancestors happened on, hacks that allow them to predict/explain/manipulate brains – our own and others’. Those hacks work astonishingly well, given their application in adaptive problem ecologies. Those adaptive problem ecologies just don’t include theoretical cognition, because of the astronomical complexity of the systems involved and the poverty of theoretical metacognition. We opine on the nature of ‘desire’ without any access to that nature whatsoever, just flavours, gists, that help us troubleshoot specific social circumstances. Thus ‘people’ are simply not what we think they are. BBT asks how it could be any other way, given the complexities and bottlenecks that, as a matter of empirical fact, are involved. The onus is really on the intentionalist to explain the ‘magical metacognition’ that allows us to grasp these true inner natures on the basis of mere reflection. Think of all the cognitive labour required to understand the nature of an apple, even given scads of information via our senses!
“The point is that there’s sociological and anthropological evidence of such personhood, so we can’t attribute the concept of a person merely to some mental program which activates the concept given certain stimuli.”
‘Personhood,’ certainly, but if of ‘such personhood,’ which is to say ‘personhood as metacognized,’ then I would love to see that evidence! Meanwhile, I’ll begin amassing all the prodigious evidence against. All I see is a bunch of philosophy winding on and on and on.
There is no arguing that intentional intuitions lead to cognitive impasses. BBT simply says, here’s a way of explaining those intuitions as the kinds of perspectival illusions (eerily parallel to those underwriting geocentrism) one should empirically expect metacognition to suffer, so let’s put the traditional ‘intentionality’ we never had to bed, and start talking about the intentionality we do have, the one continuous with the rest of nature.
Scientific theoretical cognition isn’t a “baseline,” because like philosophy it’s a cultural achievement. Thus, again, I don’t see the relevance of your point that the brain, given only its innate programs, can’t know itself. Likewise, the brain by itself can’t confirm the exotic scientific theories. It takes the history that created the institutions of science to be able to do science. And it took the prehistory that led to our separation from the animals for our ancestors to discern the difference between people and animals. How is science different from folk psychology in that respect? How are the brain’s innate limitations relevant? It’s all just abductive reasoning and the positing of unobservables to explain the observables, some such posits being more or less useful, from a pragmatic viewpoint.
You talk at the end of your comment about the need for intentionality to be “continuous” with the rest of nature. This shows that you didn’t take on board what I said about the difference between realism and pragmatism in science. As I understand it, scientists and naturalists are quite split on the subject. There are string theorists who still search for the Theory of Everything and then there are the pessimists who replace the talk of natural laws with that of models. These pragmatists think it takes a Humean leap of faith beyond what science actually shows, to speak of continuity or universality in nature, that is, to speak of the universe as a cosmos or an ordered whole. Pragmatists think scientific models apply only ceteris paribus, and beyond that the models may or may not prove useful. We can make a patchwork of models, switching from one to the next as needed, but the point is that each model still applies only to a small, artificially isolated part of the world, since the model ignores the properties that we deem irrelevant or uninteresting.
The reason I brought that up is that if we take a pragmatic, instrumental view of science, we don’t need to ask for continuity and thus we can accept folk psychology as a limited model that’s useful in its context. If there’s a better model that’s more consistent with other models, fine. But consistency needn’t be more important than utility, from this pragmatic viewpoint. Of course, there are the models of cognitive science, including those of neuroscience, but those aren’t nearly as useful as the naive self-image for the purpose of actually getting on with other people. We may be more certain about the existence of the brain than about that of a person in the naive sense, but talking about the brain rather than about the person, in the context of trying to interact with someone is still to change the topic, not to show that the one is really nothing but the other. In any case, if the choice between the cutting-edge and the traditional models of the mind is pragmatic, your point about consistency with other models needn’t be decisive.
It seems to me, then, you’re a realist rather than a pragmatist about science. In that case, I’d love to hear your nonpragmatic account of how science is especially well-connected to reality. If instead you’re a pragmatist, why care so much about continuity between models?
I’m having a hard time getting a handle on your argument, Ben.
“Scientific theoretical cognition isn’t a “baseline,” because like philosophy it’s a cultural achievement.”
I’m not sure what you mean. What other institution better exemplifies actual, high-dimensional theoretical cognition? Put differently, what other institution provides more power to solve more problems? ‘Cultural achievement’ is a problem for philosophy because of the fractionate, problem-specific nature of metacognition. Metacognition just doesn’t have the access or resources to solve the problems philosophy poses. It’s not a problem for science because of the integrated, problem-general nature of environmental cognition.
The utility of ‘folk psychology’ is something I’ve never denied. What I deny is the way it’s typically defined, understood. I’m saying that it consists of a welter of different brain mechanisms adapted to take a ‘divide and conquer’ approach to the enormously complicated problem of cognizing other brains. What ELSE would it be?
“You talk at the end of your comment about the need for intentionality to be “continuous” with the rest of nature. This shows that you didn’t take on board what I said about the difference between realism and pragmatism in science.”
I didn’t address the ‘metaphysical realism’ comment directly for the sake of time more than anything. From outside BBT, I can see how the position might appear ambiguous between the two, but within BBT both ‘realism’ and ‘pragmatism’ are traditional philosophical artifacts. The bottom line for ‘knowing that we know’ is efficacy, so it’s bound to sound pragmatic/instrumental/interpretive at turns, but it isn’t normativist at all, and it doesn’t ground out in intentional unexplained explainers like ‘interest.’ At the same time, it takes scientific cognition to provide our (far and away) most complete understanding, so it’s easy to think it’s ontologically essentializing that understanding in the ‘real,’ but this is simply an artifact of the degree to which the efficacy of scientific cognition outstrips any other kind of theoretical cognition.
We want to understand intentionality in terms continuous with nature because we are natural, for one, and because understanding things thus results in almost miraculous power: to cure disease, engineer information technologies, and so on. The process of science has been a process of renovating our low-dimensional, often spurious, prescientific understandings, is it not? Are you suggesting that we deny that process where intentionality is concerned? That we should ‘leave well enough alone’? Or are you suggesting that intentionality, unlike anything we’ve hitherto encountered in the natural world, is somehow essentially immune to that process?
I think that some confusion results from people using the word ‘science’ in two different ways. Science as a human behavior is a ‘cultural achievement’ like philosophy and theology, but science as a body of knowledge is unlike either. The speed of light in a vacuum is what it is regardless of who measured it or why they measured it. Science seems so far to be the only cultural achievement humanity has that produces knowledge about the world that is independent of human agency. Even aside from Blind Brain Theory, if you compare philosophy to science in terms of track record for producing knowledge that is independent of human agency, science is ahead. Based on track record, I would expect science to do better than philosophy in producing such knowledge about the human mind. Science is just starting, so the jury is still out. Regarding utility and folk psychology, I think that so far folk psychology is more useful than science for individuals trying to get along with other individuals. For political parties or corporations trying to manipulate voting or buying habits, science might already be ahead, so pragmatic is as pragmatic does. Lastly, you can argue that deciding whether questions about intentionality or the mind or the soul or whatever you call it are best investigated using science or philosophy or theology is to make a decision about what sort of thing it is before you start your investigation, but again, for just about any phenomenon for which the three cultural achievements have offered an explanation, the scientific explanation has proved most useful.
[…] forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains […]
You say the brain is blind to itself, and I say, “compared to what?” I say the brain is blind to everything BUT itself — that cognition is not a struggle to comprehend the world, but a struggle of the brain to comprehend ITSELF — to reconcile afferent, uncontrolled excitations with efferent more centrally driven excitations. These two general streams don’t reconcile easily, or put otherwise, the brain has the means for decreasing the inefficiencies that arise from the amplified excitations organized around the afferent and efferent systems, and that increased efficiency is the brain learning itself. The world is just a model for testing that efficiency.
I like the idea of the brain as a mechanism that stores risk as complexity! But isn’t that kind of the point, though? The idea is that you can focus on any level at any point and claim some kind of ‘special efficacy’ (when really what you mean is interesting or useful). ‘Efferent, more centrally driven excitations’ are themselves an architectural feature arising as a phenotypical expression of genetic filtration by the world. The circuits just keep getting bigger from there. On smaller scales, there are going to be many ways to simplify the complexity, to generalize over components, don’t you think?
The question is whether the brain is efficacious the way it intuits itself to be (namely, as a first-person agent). Is there a magical level? My point is simply that neglect allows you to puzzle through the first-person in a very parsimonious way.
Very well put, by the way.
Thanks for the response.
I’m not saying there isn’t a world or a something-else, but that it is audacious of us to believe that “outward-referencing” ever escapes the gravity of phenomenal experiencing. On another hand, whatever we are doing when we think we are doing what we think we are doing, apparently produces effective results in the something-else.
I sympathise with your attempts to understand the absence of self-as-object-content, but I don’t think there is any “deliberate absenting” going on. The organization of afferent excitation must work with a certain “sloppiness” of organic response. That sloppiness is actually one of its advantages, allowing for micro-adjustments and the need for flexibility in millisecond intervals.
Our “non-presence”, then, avoids some critical errors that might result from portraying wrong values of position and intensity. Those errors (and worse stuff having to do with the inconstancy of self) have always prevented what we might call “internal depiction”. Instead, we get our self-output-feedback from a collection of interactive qualia that can be attached to world-objects, and can be fudged, such as weight, textures and velocity.
Notice, also, that virtually all of our afferent channels (i.e. our sensory channels) convey information from flat collectors, that is, 2-d sensory arrays. The cerebral shell is designed to continue the processing of waves of excitation from these 2-d arrays. We get putative 3-d from the integration of 2-d series. But the “inside of us” is genuine 3-d. A different kettle of fish.
But an interesting question is, “from what original collection of organizational opportunities did the realization of the value of the presence/non-presence dichotomy emerge?” What was the proto-dichotomy that prepared the way?