Alien Philosophy (cont’d)
by rsbakker
B: Thespian Souls
Given a convergent environmental and biological predicament, we can suppose our Thespians would have at least flirted with something resembling Aristotle’s dualism of heaven and earth. But as I hope to show, the ecological approach pays even bigger theoretical dividends when one considers what has to be the primary domain of human philosophical speculation: ourselves.
With evolutionary convergence, we can presume our Thespians would be eusocial, [1] displaying the same degree of highly flexible interdependence as us. This observation, as we shall see, possesses some startling consequences. Cognitive science is awash in ‘big questions’ (philosophy), among them the problem of what is typically called ‘mindreading,’ our capacity to explain/predict/manipulate one another on the basis of behavioural data alone. How do humans regularly predict the output of something so preposterously complicated as human brains on the basis of so little information?
The question is equally applicable to our Thespians, who would, like humans, possess formidable socio-cognitive capacities. As potent as those capacities were, however, we can also suppose they would be bounded, and—here’s the thing—radically so. When one Thespian attempts to cognize another, they, like us, will possess no access whatsoever to the biological systems actually driving behaviour. This means that Thespians, like us, would need to rely on so-called ‘fast and frugal heuristics’ to solve each other. [2] That is to say they would possess systems geared to the detection of specific information structures, behavioural precursors that reliably correlate, as opposed to cause, various behavioural outcomes. In other words, we can assume that Thespians will possess a suite of powerful, special purpose tools adapted to solving systems in the absence of causal information.
Evolutionary convergence means Thespians would understand one another (as well as other complex life) in terms that systematically neglect their high-dimensional, biological nature. As suggestive as this is, things get really interesting when we consider the way Thespians pose the same basic problem of computational intractability (the so-called ‘curse of dimensionality’) to themselves as they do to their fellows. The constraints pertaining to Thespian social cognition, in other words, also apply to Thespian metacognition, particularly with respect to complexity. Each Thespian, after all, is just another Thespian, and so poses the same basic challenge to metacognition as they pose to social cognition. By sheer dint of complexity, we can expect the Thespian brain would remain opaque to itself as such. This means something that will turn out to be quite important: namely that Thespian self-understanding, much like ours, would systematically neglect their high-dimensional, biological nature. [3]
This suggests that life, and intelligent life in particular, would increasingly stand out as a remarkable exception as the Thespians cobbled together a mechanical understanding of nature. Why so? Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition.’ Lacking such a capacity would strand them with disparate families of behaviours and entities, each correlated with different intuitions, which would have to be recognized as such before any taxonomy could be made. Some entities and behaviours could be understood in terms of mechanical conditions, while others could not. So as extraordinary as it sounds, it seems plausible to think that our Thespians, in the course of their intellectual development, would stumble across some version of their own ‘fact-value distinction.’ All we need do is posit a handful of ecological constraints.
But of course things aren’t nearly so simple. Metacognition may solve Thespians in the same ‘fast and frugal’ manner as social cognition does, but it entertains a far different relationship to its putative target. Unlike social cognition, which tracks functionally distinct systems (others) via the senses, metacognition is literally hardwired to the systems it tracks. So even though metacognition faces the same computational challenge as social cognition—cognizing a Thespian—it requires a radically different set of tools to do so. [4]
It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments. We live in the world simply because we’re distilled from it, the result of billions of years of environmental tuning. We can presume our aliens would be thoroughly ‘in the world’ as well, that the bulk of their cognitive capacities would be tasked with the behavioural management of their immediate environments for similar evolutionary reasons.
Since all cognitive capacities are environmentally selected, we can expect whatever basic metacognitive capacity the Thespians possess will also be geared to the solution of environmental problems. Thespian metacognition will be an evolutionary artifact of getting certain practical matters right in certain high-impact environments, plain and simple. Add to this the problem of computational intractability (which metacognition shares with social cognition) and it becomes almost certain that Thespian metacognition would consist of multiple fast and frugal heuristics (because solving on the basis of scarce data requires fewer parameters, not more, each geared to particular information structures, to be effective). [5] We have very good reason to suspect the Thespian brain would access and process its own structure and dynamics in ways that would cut far more corners than joints. As is the case with social cognition, it would belong to Thespian nature to neglect Thespian nature—to cognize the cognizer as something other, something geared to practical contexts.
Thespians would cognize themselves and their fellows via correlational, as opposed to causal, heuristic cognition. The curse of dimensionality necessitates it. It’s hard, I think, to overstate the impact this would have on an alien species attempting to cognize their nature. What it means is that the Thespians would possess a way to engineer systematically efficacious comportments to themselves, each other, even their environments, without being able to reverse engineer those relationships. What it means, in other words, is that a great deal of their knowledge would be impenetrable—tacit, implicit, automatic, or what have you. Thespians, like humans, would be able to solve a great many problems regarding their relations to themselves, their fellows, and their world without possessing the foggiest idea of how. The ignorance here is structural ignorance, as opposed to the ignorance, say, belonging to original naivete. One would expect the Thespians would be ignorant of their nature absent the cultural scaffolding required to unravel the mad complexity of their brains. But the problem isn’t simply that Thespians would be blind to their inner nature; they would also be blind to this blindness. Since their metacognitive capacities consistently yield the information required to solve in practical, ancestral contexts, the application of those capacities to the theoretical question of their nature would be doomed from the outset. Our Thespians would consistently get themselves wrong.
Is it fair to say they would be amazed by their incapacity, the way our ancestors were? [6] Maybe—who knows. But we could say, given the ecological considerations adduced here, that they would attempt to solve themselves assuming, at least initially, that they could be solved, despite the woefully inadequate resources at their disposal.
In other words, our Thespians would very likely suffer what might be called theoretical anosognosia. In clinical contexts, anosognosia applies to patients who, due to some kind of pathology, exhibit unawareness of sensory or cognitive deficits. Perhaps the most famous example is Anton-Babinski Syndrome, where physiologically blind patients persistently claim they can in fact see. This is precisely what we could expect from our Thespians vis-à-vis their ‘inner eye.’ The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brain they belong to. Repurposing these systems means repurposing systems that generally take the adequacy of their resources for granted. When we catch our tongue at Christmas dinner, we just do; we ‘implicitly assume’ the reliability of our metacognitive capacity to filter our speech. It seems wildly implausible to suppose that theoretically repurposing these systems would magically engender a new biological capacity to automatically assess the theoretical viability of the resources available. It stands to reason, rather, that we would assume sufficiency the same as before, only to find ourselves confounded after the fact.
Of course, saying that our Thespians suffer theoretical anosognosia amounts to saying they would suffer chronic, theoretical hallucinations. And once again, ecological considerations provide a way to guess at the kinds of hallucinations they might suffer.
Dualism is perhaps the most obvious. Aristotle, recall, drew his conclusions assuming the sufficiency of the information available. Contrasting the circular, ageless, repeating motion of the stars and planets to the linear riot of his immediate surroundings, he concluded that the celestial and the terrestrial comprised two distinct ontological orders governed by different natural laws, a dichotomy that prevailed some 1800 years. The moral is quite clear: Where and how we find ourselves within a system determines what kind of information we can access regarding that system, including information pertaining to the sufficiency of that information. Lacking instrumentation, Aristotle simply found himself in a position where the ontological distinction between heaven and earth appeared obvious. Unable to cognize the limits imposed by his position within the observed systems, he had no idea that he was simply cognizing one unified system from two radically different perspectives, one too near, the other too far.
Trapped in a similar structural bind vis-à-vis themselves, our navel-gazing Thespians would almost certainly mistake properties pertaining to neglect for properties pertaining to what is: distortions in signal for facts of being. Once again, since the posits possessing those properties belong to correlative cognitive systems, they would resist causal cognition. No matter how hard Thespian philosophers tried, they would find themselves unable to square their apparent functions with the machinations of nature more generally. Correlative functions would appear autonomous, as somehow operating outside the laws of nature. Embedded in their environment in a manner that structurally precludes accurately intuiting that embedment, our alien philosophers would conceive themselves as something apart, ontologically distinct. Thespian philosophy would have its own versions of ‘souls’ or ‘minds’ or ‘Dasein’ or ‘a priori’ or what have you—a disparate order somehow ‘accounting’ for various correlative cognitive modes, by anchoring the bare cognition of constraint in posits (inherited or not) rationalized on the back of Thespian fashion.
Dualisms, however, require that manifest continuities be explained, or explained away. Lacking any ability to intuit the actual machinations binding them to their environments, Thespians would be forced to rely on the correlative deliverances of metacognition to cognize their relation to their world—doing so, moreover, without the least inkling of as much. Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment. Like us, they would find themselves perpetually unable to decisively characterize ‘knowledge of the world.’ One could easily imagine the perpetually underdetermined nature of these accounts convincing some Thespian philosophers that the deliverances of metacognition comprised the whole of existence (engendering Thespian idealism), or were at least the most certain, most proximate thing, and therefore required the most thorough and painstaking examination (engendering a Thespian phenomenology)…
Could this be right?
This story is pretty complex, so it serves to review the modesty of our working assumptions. The presumption of interstellar evolutionary convergence warranted assuming that Thespian cognition, like human cognition, would be bounded, a complex bundle of ‘kluges,’ heuristic solutions to a wide variety of ecological problems. The fact that Thespians would have to navigate both brute and intricate causal environments, troubleshoot both inorganic and organic contexts, licenses the claim that Thespian cognition would be bifurcated between causal systems and a suite of correlational systems, largely consisting of ‘fast and frugal heuristics,’ given the complexity and/or the inaccessibility of the systems involved. This warranted claiming that both Thespian social cognition and metacognition would be correlational, heuristic systems adapted to solve very complicated ecologies on the basis of scarce data. This posed the inevitable problem of neglect, the fact that Thespians would have no intuitive way of assessing the adequacy of their metacognitive deliverances once they applied them to theoretical questions. This let us suppose theoretical anosognosia, the probability that Thespian philosophers would assume the sufficiency of radically inadequate resources—systematically confuse artifacts of heuristic neglect for natural properties belonging to extraordinary kinds. And this let us suggest they would have their own controversies regarding mind-body dualism, intentionality, even knowledge of the external world.
As with Thespian natural philosophy, any number of caveats can be raised at any number of junctures, I’m sure. What if, for instance, Thespians were simply more pragmatic, less inclined to suffer speculation in the absence of decisive application? Such a dispositional difference could easily tilt the balance in favour of skepticism, relegating the philosopher to the ghettos of Thespian intellectual life. Or what if Thespians were more impressed by authority, to the point where reflection could only be interrogated as refracted through the lens of purported revelation? There can be no doubt that my account neglects countless relevant details. Questions like these chip away at the intuition that the Thespians, or something like them, might be real…
Luckily, however, this doesn’t matter. The point of posing the problem of xenophilosophy wasn’t so much to argue that Thespians are out there, as it was, strangely enough, to recognize them in here…
After all, this exercise in engineering alien philosophy is at once an exercise in reverse-engineering our own. Blind Brain Theory only needs Thespians to be plausible to demonstrate its abductive scope, the fact that it can potentially explain a great many perplexing things on nature’s dime alone.
So then what have we found? That traditional philosophy is something best understood as… what?
A kind of cognitive pathology?
A disease?
IV: Conclusion
It’s worth, I think, spilling a few words on the subject of that damnable word, ‘experience.’ Dogmatic eliminativism is a religion without gods or ceremony, a relentlessly contrarian creed. And this has placed it in the untenable dialectical position of apparently denying what is most obvious. After all, what could be more obvious than experience?
What do I mean by ‘experience’? Well, the first thing I generally think of is the Holocaust, and the palpable power of the Survivor.
Blind Brain Theory paints a theoretical portrait wherein experience remains the most obvious thing in practical, correlational ecologies, while becoming a deeply deceptive, largely chimerical artifact in high-dimensional, causal ones. We have no inkling of tripping across ecological boundaries when we propose to theoretically examine the character of experience. What was given to deliberative metacognition in some practical context (ruminating upon a social gaffe, say) is now simply given to deliberative metacognition in an artificial one—‘philosophical reflection.’ The difference between applications is nothing if not extreme, and yet conclusions are drawn assuming sufficiency, again and again and again—for millennia.
Think of the difference between your experience and your environment, say, in terms of the difference between concentrating on a mental image of your house and actually observing it. Think of how few questions the mental image can answer compared to the visual image. Where’s the grass the thickest? Is there birdshit on the lane? Which branch comes closest to the ground? These questions just don’t make sense in the context of mental imagery.
Experience, like mental imagery, is something that only answers certain questions. Of course, the great, even cosmic irony is that this is the answer that has been staring us in the fucking face all along. Why else would experience remain an enduring part of philosophy, the institution that asks how things in the most general sense hang together in the most general sense without any rational hope of answer?
Experience is obvious—it can be nothing but obvious. The palpable power of the Holocaust Survivor is, I think, as profound a testament to the humanity of experience as there is. Their experience is automatically our own. Even philosophers shut up! It correlates us in a manner as ancient as our species, allows us to engineer the new. At the same time, it cannot but dupe and radically underdetermine our ancient, Sisyphean ambition to peer into the soul through the glass of the soul. As soon as we turn our rational eye to experience in general, let alone the conditions of possibility of experience, we run afoul of illusions, impossible images that, in our diseased state, we insist are real.
This is what our creaking bookshelves shout in sum. The narratives, they proclaim experience in all its obvious glory, while treatise after philosophical treatise mutters upon the boundary of where our competence quite clearly comes to an end. Where we bicker.
Christ.
At least we have reason to believe that philosophers are not alone in the universe.
Notes
[1] In the broad sense proposed by Wilson in The Social Conquest of the Earth.
[2] This amounts to taking a position in the mindreading debate that some theorists would find problematic, particularly those skeptical of modularity and/or with representationalist sympathies. Since the present account provides a parsimonious means of explaining away the intuitions informing both positions, it would be premature to engage the debate regarding either at this juncture. The point is to demonstrate what heuristic neglect, as a theoretical interpretative tool, allows us to do.
[3] The representationalist would cry foul at this point, claiming that the existence of some coherent ‘functional level’ accessible to deliberative metacognition (the mind) allows for accurate and exhaustive description. But once again, since heuristic neglect explains why we’re so prone to develop intuitions along these lines, we can sidestep this debate as well. Nobody knows what the mind is, or whatever it is they take themselves to be describing. The more interesting question is one of whether a heuristic neglect account can be squared with the research pertaining directly to this field. I suspect so, but for the interim I leave this to individuals more skilled and more serious than myself to investigate.
[4] In the literature, accounts that claim metacognitive functions for mindreading are typically called ‘symmetrical theories.’ Substantial research supports the claim that metacognitive reporting involves social cognition. See Carruthers, “How we know our own minds: the relationship between mindreading and metacognition,” for an outstanding review.
[5] Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have demonstrated that simple heuristics are often far more effective than even optimization methods possessing far greater resources. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23).
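The Gigerenzer-style claim in this note can be illustrated with a minimal simulation (my own sketch, not from the cited research; the model degrees, noise level, and random seed are arbitrary illustrative choices): given only five noisy observations of a linear process, a two-parameter ‘frugal’ model typically out-predicts a five-parameter model that fits the training data perfectly.

```python
import numpy as np

# Scarce, noisy data: five observations of an underlying linear process.
rng = np.random.default_rng(0)
true_slope = 2.0
x_train = np.arange(5.0)
y_train = true_slope * x_train + rng.normal(0.0, 0.5, x_train.size)

# 'Frugal' model: a straight line (2 parameters).
frugal = np.polyfit(x_train, y_train, deg=1)
# 'Complex' model: degree-4 polynomial (5 parameters, interpolates training exactly).
complex_ = np.polyfit(x_train, y_train, deg=4)

# Evaluate both on unseen points drawn from the same process.
x_test = np.array([4.5, 5.0, 6.0, 7.0])
y_test = true_slope * x_test + rng.normal(0.0, 0.5, x_test.size)

def mse(coeffs):
    """Mean squared prediction error on the held-out points."""
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

print(mse(frugal), mse(complex_))  # compare out-of-sample errors
```

The flexible model chases the noise in the five training points, so its extrapolations swing wildly; the frugal model, having fewer parameters to mislead, generalizes better precisely because the data are scarce.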
[6] “What, then, is time? Who can explain it easily and briefly? Who can comprehend it in thought well enough to put it into words? Yet what do we mention in conversation more familiarly and knowingly than time? And surely we understand it when we speak of it; we understand it, too, when we hear another speak of it. What, then, is time? If no one asks me, I know; if I wish to explain it to one who asks, I do not know.” (Augustine, Confessions XI.14)
Caveat: I’ve not read all the way through
It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments.
I wonder if at this point maybe the wrong cognitive tools get applied by philosophers/some readers? Like say you were saying a car engine needs petrol to run – it’d be a non sequitur in regard to philosophy and the reader would continue reading with the mindset they were using before (remaining in a sort of social cognition).
Here, I wonder if, to some/many readers, the practicality described seems as much a non sequitur in regard to philosophy/questions of consciousness? “Why are you talking about engines? We’re talking meaning here!”
Maybe I’m chasing a dead end, but the thing about socialising is that it costs very little to attempt socialising and it potentially has great payoff. So a habit of hitting/spamming the social cognition button again and again and again makes sense. When there is nothing in the way…the thinking heads that way.
I was thinking if you had some practical activity, like a flash program/game where you have to physically drag processing time over to environment in chunks, and enough chunks or you just do not advance to the next stage – you ‘die’ in the game.
Whereas when there is no pinch, no cost, no wall, social thinking just comes in – reading right past these practical issues in as much as simply preferring to continue applying social cognition (because nothing stops it). So we’re back to talking ‘meaning’ like we talk about Suzie – and talking about engines as if that is the thing that matters the most about Suzie driving up to our house in her car…that’s just cart before the horse talk, for that social cognition.
Unless it hits a wall. Suzie/meaning will always come before engine talk, unless something physically blocks social thinking from getting any further (blocking it, but not blocking practical thinking)
Ironically I could just silently go and try and write a program instead of airing the idea of one. But, heh, social cognition! Hope I didn’t spoil the comments section with this.
There was a recent article in one of those Edge collections (This Idea Must Die) where the contributor argued that emphasizing too strongly the social aspect of human cognition is a mistake, but I don’t know what his arguments were. Has anyone happened to read that collection? In any case, Scott’s explanatory task is giving a biomechanical account of intentional cognition, especially the apparent gap between intentional and mechanical/causal cognition and why it is resistant to natural scientific accounts (which presumably work via elaborate extensions of mechanical and causal cognition). So I don’t see where it’s so problematic that he has to spam the social cognition button to get his point across.
Fair question. Have you seen the hollow face illusion?
Because of social cognition, if the hollow side of the mask were facing the reader but you were trying to talk about the depth of the hollowness, you would have a hard time getting your depth talk across when, due to social cognition, their brain is reading the face as being convex rather than concave.
Here, it’s getting ‘bandwidth hogging’ talk across when ‘we are talking about consciousness, man!’
Granted it’s the regular old stumbling stone (it seems) – these accounts tend to rely on some sort of acceptance of some biomechanical explanation in order to then explain more of the biomechanical. What else is a reference to ‘bandwidth hogging’ but a reference to a prior commitment to the biomechanical? A commitment to a concave face as the means to prove it’s a concave face.
Commitments. Social talk. Social cognition. Or does it seem to work – because your commitments are different?
Anyway, imagine Dark Souls, but instead it’s Thespian Souls…it all ties in, man! 🙂
Explanations bottom out somewhere. He doesn’t have to explain what biological explanations consist in to use biological notions in his explanation. You don’t have to explain everything in order to explain something.
He doesn’t have to explain what biological explanations consist in to use biological notions in his explanation.
I dunno what law of physics means it’s a rule he doesn’t have to. But if that’s the rule, I guess it is *shrug*
You don’t have to explain everything in order to explain something.
Not sure that involves an accurate reading of what I’ve said? If one person thinks you can balance a car on four porcelain cups and another doesn’t, it’s hardly ‘explaining everything’ to do some sort of physical demonstration. That’s all I’ve suggested with the game idea (a game version, since it’s real hard to demonstrate evolution per se).
on my to-do list
http://newbooksinphilosophy.com/2015/08/14/chad-engelland-ostension-word-learning-and-the-embodied-mind-mit-press-2015/
PSA: RSB is a sitting (or crouching?) judge for a new contest over at Grimdark Magazine. BEST BATTLE SCENE. Picture in your mind: you submit your battle scenes, then Bakker reads it, then Cleric screams “Where has all the judgment gone?” http://grimdarkmagazine.com/pages/the-grimdark-magazine-battle-off-competition
Very interesting, etc., etc., but I’m here for one purpose only: I’m tired of waiting, and am prepared to offer you a hefty bribe for a quick plot run down, wikipedia style, of Unholy Consult. Fifty bucks.
The bidding starts at $50! Do I hear $60? $60 for this fine plot synopsis folks…
I was wondering how to pronounce your handle and then I got to the $50 and realized it was ‘scrooge’! 😉
Sorry npow, but the cardinal rule in blog responses is to not jam the commentary with monologues. I’m clearing your responses from the board so as to encourage others to reply. Please distill your questions and concerns into one comment of reasonable length, and we’ll take it from there.
Ah rookie mistake. Democracy is important I get it. My apologies. Ok so my question is twofold:
1. Is Brassier right to trace some of the insufficiencies of eliminativist accounts to latent metaphysical suppositions that are rebarbative not because they are metaphysical but because they constitute “an impoverished metaphysics, inadequate to the task of grounding the relation between representation and reality”?
2. Is it possible that the ontological incompleteness position that Zizek advocates (stripped of the dubious attempt to recuperate the void as the locus where necessity and contingency overlap in the Subject etc) contains a greater critical force (in terms of dismantling the core of our manifest image knowledge biases) because it destroys a form of “The Given” that no purely epistemological account can touch? You claimed to always err on the side of the epistemological because you lack Zizek’s faith in theory. But if we transposed those epistemic inadequacies that BBT identifies as the root cause of our predicament into the Real itself, that is, into asubjective, inorganic matter (via a metaphysical argument that would align with what quantum physics is telling us), wouldn’t that eliminate the unhealthy residues of the manifest image that pertain to that inorganic slop, or dead matter, from which something like a subject emerges? It seems to me that by abjuring that metaphysical step you leave the door open for some smooth, stable, fully constituted reality (something like the Kantian in-itself) to exist, at least potentially.
No worries. Sick like dog on my end, as well, which doesn’t help.
1) Yes and no. The problem with traditional eliminativisms is that they have no theory of meaning, which means they don’t have any abductive horses in the race. The further claim that they have no theory of meaning because they possess an ‘impoverished metaphysics’ is only something someone with a supra-natural metaphysics (in Ray’s case, a normative metaphysics) to sell would say. It’s worth noting that this is his own primary criticism of NU, the fact that it lacks a theory of meaning. This is what motivates his subsequent turn to Sellars.
2) Ontology is epistemologically expensive–obviously so. You can’t ontologize the explanatory gap the way Zizek does without raising more questions, and worse, without transporting the debate beyond the pale of practical arbitration, which is to say, raising more questions that can’t be answered. The perennial continental counter-argument (and the one I once used) is that epistemology is ontologically expensive. The problem is that it’s not at all clear how this is the case. I’ve since realized I’d found it convincing because I assumed ‘How do you know?’ questions presumed some kind of ‘subject-object ontology’ which was deeply flawed in this or that respect. Now I’m a living counter-example, insofar as my position has no subjects, and yet the question, ‘How do you know?’ remains as accessible to me as it is to Zizek. It’s not wedded to any canonical ‘problematic ontological assumption.’
Besides, it’s a good rule of thumb to suspect chicanery whenever people refuse to answer ‘How do you know?’ type questions, in philosophy as much as the pharmacy. Zizek gives a twist on what is ultimately a pedestrian ‘god of the gaps’ argument.
If you’re new to the thing, it all seems exciting, but tarry awhile… you’ll catch the whiff of the driving ingroup dynamics soon enough.
And the second reason he would give would be the following, which is totally different. Is not mathematics a form of metaphysics? It is a science that doesn’t rely at all on empirical testability for proof. Much has been made about its seemingly impenetrable rigor and yet complete reliance on non-deducible axioms. Now imagine, as a thought experiment, that you, Scott, were a mathematician. Mathematicians at one point were engaged in an all-out civil war. The intuitionists were the more skeptical, distrustful, cautious bunch. They believed that just because you can prove that an object’s nonexistence entails a contradiction, that in no way counts as evidence for its existence. The classical mathematicians were more dogmatic and trusted this form of reasoning by reductio. It turned out that the intuitionists lost because the classical camp was able to make incredible progress. The whole Cantorian set-theory revolution, for instance, would have been ruled out by a strict intuitionist. So the history of mathematics, although an extremely rigorous science, maybe the original paradigm of all the others it spawned, involved a lot of groping in the dark. A lot of “ok, let’s assume x and play out the consequences, see where it takes us.” Many of its greatest theorems were retroactively decided upon in this way. Now quantum physics is heavily reliant on mathematical formalism, more so than the life sciences. And so let’s assume what Zizek claimed before wasn’t true, that his ontology is supported by findings in quantum physics. He puts it on the table for certain insubstantial but understandable reasons (he wants to avoid negative theologies, he feels instinctively skeptical that we fuck up reality with our skewed perception while it remains intact, etc.) and explores the consequences. So to conclude, he could just say: you’re right, I don’t have substantial proof for my claim, but that in no way means we should rule it out.
It may turn out to be quite true and change our entire understanding of physics. So groping in the dark in this way has, historically at least, resulted in some incredible innovations in the sciences. And I would, quite modestly, claim that you don’t acknowledge this enough. But maybe you just have no faith in mathematics and quantum physics as telling us anything definitive or life-changing or whatever. But what about Einstein? Cantor? In a way they were in the same ontological boat as Zizek at one point (not to say that Zizek is in their league lol). What would you have said to them?
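For readers who want the intuitionist/classical dividing line pinned down, the disputed inference is double negation elimination. A small sketch in Lean (theorem names are mine, for illustration only): one direction is constructively unobjectionable, while the converse requires invoking a classical axiom.

```lean
-- Constructively fine: a proof of P refutes ¬P.
theorem p_to_nnp (P : Prop) : P → ¬¬P :=
  fun hp hnp => hnp hp

-- The intuitionists' sticking point: from "¬P entails a contradiction"
-- conclude P. In Lean this needs classical logic.
theorem nnp_to_p (P : Prop) : ¬¬P → P :=
  fun hnn => Classical.byContradiction hnn
```

The first theorem goes through with no axioms at all; the second is exactly the "reasoning by the absurd" the intuitionists refused to count as a construction.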
Is mathematics a science? It seems to me that in order to claim that mathematics is a science one needs a definition of mathematics and a definition of science. You seem to be using an unusual definition of science in that most definitions of science include some idea of empirical testability. It seems to me that if you eliminate that requirement from your definition a lot of things (like philosophy of mind) can be counted as science that one does not ordinarily think of as scientific. Few nowadays would argue that Plato’s theory of ideal forms is a scientific theory, precisely because nothing about those ideal forms is empirically testable. As Scott and others have pointed out elsewhere in this blog, philosophizing is only possible in the absence of empirical data.
I could be wrong, but perhaps you concluded that mathematics is a science because of its intellectual rigor, or because scientists find it useful. If instead we think of mathematics as an art form and/or as a style of philosophizing we at least have something to say to Cantor (Set theory is a brilliant application of the Impressionist spirit to the art of mathematics.) And to Einstein (Modernism in mathematics and modernism in physics go hand in hand.)
I suppose to some extent I’m joshing you, but the difficulty of defining “mathematics” and “science” and similar human activities in ways that command consensus and are useful in thinking about mathematics and science is symptomatic. We can’t say what it is our brains are doing when they do mathematics because we can’t say what it is our brains are doing when they do most of what it is they do. To put the point another way, did Georg Cantor discover set theory or create it?
Interesting – perhaps math is scientific experimentation. Like if you have five objects and take away one, you have four. You can repeat that experiment a million times and get that same result over and over. It’s not so much the ‘five’ or the ‘four’ or the ‘take away’, but the same result over and over that’s the main thing? Which is how scientific testing is done.
I learned to despise Plato as a college freshman, but perhaps after many arithmetic operations on physical objects some prehistoric genius created numbers by abstracting a general rule from the many examples of arithmetic on physical objects that all human beings perform in the course of daily life. Perhaps numbers are the ideals for which collections of physical objects are the gross, disgusting material manifestations. But still one can ask, were numbers created or discovered? One might argue that arithmetic is a scientific theory and the many divisions of chunks of meat, nice, sharp obsidian flakes, pretty shells and so on were the observations and experiments that inspired the theorizing. As a matter of historical inquiry the origins of arithmetic are lost in the shadows of time. I believe some scientific inquiry into how very young children acquire number concepts is ongoing, but there are regular contributors to this blog who are far more qualified than me to address the state of the research. Other than that I think all we have left regarding the origins of arithmetic is empty philosophizing.
But for my empty philosophical two cents’ worth, I think that the step from counting physical objects to counting mental representations of physical objects was probably fairly easy. I think the step from counting mental representations of physical objects to inventing abstract numbers that could represent representations of physical objects, and therefore be used to represent any physical objects, and then to represent non-physical non-objects, was quite difficult. I think numbers in this sense were invented.
I’d agree that obviously we have no historical record. But I’m guessing no ‘rule’ came up. Instead the caveman simply started imagining a number of objects instead of having them in front of him. Just as each letter of the text I write now originated as a depiction of an object or creature, but now (after so many repetitions and simplifications) bears no resemblance to such, so are the ‘rules of math’ an imagining that bears no resemblance to its origin.
There are probably ways of empirically testing this theory.
I think it’s more basic than this. Animals encounter things which they can identify as bearing similarities to past experience, and they can encounter iterations of similar items or features in their current cognitive field. Keeping track of these and managing them requires number. Knowing whether there are ten wolves outside the camp or two, being able to perceive and track this difference, is a huge advantage. All number really does is sortally pick out populations based on some kind of property similarities. And Frege said numbers were objects because this sortal criterion itself applies to numbers (e.g. they are the objects picked out by the recursive procedures of the Peano axioms). Natural number also has extensions beyond this, I think relating to how procedural or discrete operations can be parceled out and tracked. Without number it would be difficult to break cognitive operations down, or to decompose them into more basic steps or operations.
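The "recursive procedures of the Peano axioms" mentioned above can be made concrete: a natural number is just whatever you can reach from zero by finitely many applications of a successor operation. A minimal sketch (all names are mine, purely illustrative):

```python
# Peano-style naturals: a number is a finite chain of successors over zero.

class Nat:
    """A natural number encoded as a chain of predecessors ending at zero."""
    def __init__(self, pred=None):
        self.pred = pred  # None marks zero; otherwise the predecessor

ZERO = Nat()

def succ(n):
    """The successor operation: wrap one more layer around n."""
    return Nat(n)

def to_int(n):
    """Count the successor applications -- the sortal tally."""
    count = 0
    while n.pred is not None:
        n = n.pred
        count += 1
    return count

def add(m, n):
    """Recursive addition: m + 0 = m, m + succ(k) = succ(m + k)."""
    if n.pred is None:
        return m
    return succ(add(m, n.pred))

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # → 5
```

Nothing in the encoding cares what is being counted; the structure itself is what the sortal criterion picks out, which is the Fregean point.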
Somehow my post of Zizek’s responses was placed above this entry. Still learning the ropes around here lol
above the post where you kindly announced why you deleted my initial posts.
http://pirsa.org/05110004/
Stick with Zurek. No transcendentalism, a no-nonsense approach to issues in quantum theory. Zurek pretty much agrees with what Brassier initially sets out in Alien Theory: humans are information systems which are themselves open quantum systems. Quantum theory and the integration of thermodynamic entropy with information-theoretic entropy dissolve more epistemological conundrums than continental theory could hope to. All on the dime of national security! His goal is to show how, out of the massive multiplicity of quantum states in Hilbert space, a stable selection of enduring or robust states able to serve as predictable classical indices emerges in spite of ongoing environmental perturbations. What’s interesting is how the decoherence selectionist approach actually resembles much of what D/G were talking about, completely away from any context related to quantum theory. On academia.edu you can find a paper concerning possible mappings between Deleuze’s metaphysical constructs and quantum theory.
Decoherence is still a very controversial field, last I checked. And continentalism, to my knowledge, hasn’t solved any epistemological problems! 😉 The idea of mapping D/G’s metaphysics across certain interpretations, though certainly interesting, strikes more as a bid for legitimacy than anything else. It certainly doesn’t ‘verify’ their constructions, and it leaves us with the problem that plagues all metaphysics: profound theoretical underdetermination. Why not just chuck D/G and stick with the quantum physics?
I encountered Zurek a ways back. Fascinating character, and interesting to review, given that I’m presently reading the (excellent) Life on the Edge.
ps please don’t hate, that was my first attempt at a blog post. In hindsight I acknowledge it probably came off as really douchey, but honestly it was the product of hysteria
also my apology is sincere, that was literally my first attempt at a blog post. In hindsight I see how it came off as extremely douchey but it was honestly the product of unchecked hysteria and cannabis. Won’t happen again, please don’t hate.
“but it was honestly the product of unchecked hysteria and cannabis”
The engines of God… round these parts anyway.
Rereading that maligned, mawkish, downright dimwitted post on Rilke is by turns fascinating and nauseating for me. Only an overpacked blunt has the power to induce an idiotic defense of a return to the sacred whilst citing a Disney movie as evidence (fucking Pocahontas too, probably the worst of the bunch). Rilkean animism vs. neuroscience lol But I do take solace in the fact that I have added a new contribution to your store of continental fallacies. Behold: “Argumentum ad Marijuana”. It rears its bloodshot eyes more than you’d think so beware.
Phew, thanks for the response. I take your point that ontological commitments are epistemologically expensive. Also, on Brassier: conceded, I have some rethinking to do (which is all I really hoped for anyway). On Zizek, though, not having a response to ‘why? on what grounds?’: he has two mutually conflicting responses which are both, I think, compelling, although one will definitely satisfy you more. 1. Let’s assume Zizek’s ontological account is confirmed by, or at least consistent with, some interpretations of the collapse of the wave function in quantum physics. Now, we can admit that rehabilitations of pre-Kantian metaphysical speculation are dead ends because they cannot ground their claims, etc. I willfully admit I love to dabble in people like Deleuze exclusively for the mental masturbation they afford me. But what is interesting is that quantum physics is undeniably a science, yet the level of abstraction at which it operates makes it the clearest contender for a replacement of old classical metaphysics. It is governed by scientific procedures, empirical data, etc., but it retains a metaphysical, speculative dimension. That’s also why many of the competing theories on the table approach Alice in Wonderland levels of weirdness. Now, you might remain unconvinced that quantum physics can give us anything nearly as conclusive as neuroscience. But if you grant that quantum physics is in fact a science that has in fact made some “reality shattering insights,” then Zizek’s theories are not groundless but actually rely on the findings of an established science. He has a very dense and lengthy chapter on this in Less Than Nothing, which you may end up finding unconvincing when or if you read it. But I don’t think he is pulling the metaphysical rabbit out of the hat. So when you ask him why? how do you know? I don’t think he would be reduced to shamed silence. He would immediately begin to hysterically talk quantum physics, with the intermittent snorts and tics.
Also, hope you get better soon. And my comments are still too long. I’ll work on that.
Not quite three pounds . . .
https://news.osu.edu/news/2015/08/18/human-brain-model/
And some are skeptical.
An Ohio State professor claims to have created most of the brain structures of a 5 week old foetus from stem cells, as I am sure you have all seen.
And now they are going to experiment on it.
Fucking. Intense. Great link, Lyndon. Mucho danke.
I probably just sound like an echo chamber, but dat ‘ethics’ in ‘Such a system will enable ethical and more rapid and accurate testing of experimental drugs’ tho…
Fifty. Bucks.
Hello Scott, I would like to make a request. As I understand it, you will not be announcing the title of the third series in the Second Apocalypse until The Unholy Consult has been out for a while, because the title will be a spoiler. I can’t fathom why you would do this. It won’t hurt me, since I’m already a fan, but what of your fans of the future? Is it OK that the series will be spoiled for them? The only justification I can think of is that you’ve had the title in your head for a long time, and it would seem “wrong” to call it something else. So here is my plea: let it go, change the title.
Thanks for listening, I love the series.
At the risk of greatly and obnoxiously oversimplifying . . .
These last two posts read as: let’s imagine an alien philosophy by anthropomorphizing aliens.
Convergent evolution is all well and good, but we seem way beyond the boundaries of what science has so far told us about convergent evolution’s strength. Especially when applying it to cognition, for which we have a sample size of exactly one (which arguably says something about convergent evolution’s lack of strength in this area). These posts also, I think, assume SJ Gould is wrong about cognition and spandrels.
Great questions, points, though I don’t see how they cut against my thesis, which is abductive. ‘Aliens’ here is a way to use defamiliarization to get readers to step outside their ‘orthodox skin,’ and look at things differently. If I can build a plausible picture of human self-understanding without helping myself to intentional entities, then the question should be, “Why do we think some ‘inexplicable additive’ is required?” I don’t see the significance of spandrels, in this instance, but otherwise I entirely agree that the exercise is one in anthropomorphising aliens; that was the explicit point, which was to build a plausible alien (not a probable one) as a means of seeing the way our biology constrains us. If you think there are more plausible ways to describe the challenges faced by alien metacognition, please share!
Ok, I see I was missing some of your argumentative thrust. I get what you mean to do now, though the setup seems elaborate, so I hope the payoff is worth it. Inductively, I find that arguments by intricate and convoluted hypothetical example tend not to be convincing except to prior believers.
Spandrels was simply meant as shorthand for: if cognition and metacognition are evolutionary fallouts or exaptations of something else, appeals to convergent evolutionary forces that “should have” acted upon cognition or metacognition are ill conceived.
I agree, but no more difficult than any other strategy I’ve attempted! My view requires grasping a conceptual ‘gestalt,’ the same as Kant or Heidegger or Wittgenstein, in order to be understood. Given my outsider status, no one has any institutional incentive to devote the kind of work required to appreciate any of those thinkers on their own terms. All I can do is come up with thumbnails striking enough to convince insiders that I’m onto something, at least, and so encourage a second look. So I just keep chipping away…
I fear I still don’t understand your usage of ‘spandrels’ here, and the moral you draw is even less clear. Are you saying we shouldn’t presume the existence of alien intelligence, that they would cognize their environments, their fellows, themselves and so on?
I am unable to reply to a reply so hopefully this shows up in the right place.
My two cents on argumentative tack:
1) Formulate one or more testable hypotheses for BBT. Better to convince scientists than philosophers. Especially as my layman’s (read: undergrad philosophy major) take is that your argument in effect concludes that philosophy sucks. Philosophers will surely resist tooth and nail.
2) There are lots of us (dozens!) that are in-group for your fiction but out-group for your blog/philosophy. We are unable to cope with the jargon and the need for an underlying baseline of philosophical knowledge required to understand these blog posts. It’s a real turnoff. Is it possible to write a real “BBT for dummies” that would be accessible to your fiction fans? That audience is there for you to tap and convert, and may pay surprising dividends.
Re: spandrels. Forget about it. Not worth the candle. Was making a throwaway comment about the limits of convergent evolution.
It strikes me as a good idea. This is what I wrote in response to a similar question from Grimdark several months back. Worth expanding on maybe?
“More than two thousand years ago, Aristotle argued that stars could not be ‘fiery stones’ as Anaxagoras had claimed because stars behaved in fundamentally different ways. The argument makes entire sense, given the information Aristotle had available. Given a terrestrial vantage, the principles governing the heavens are obviously different from those governing the earth. Thus, the famous dichotomy of heaven and earth.
Aristotle was wrong, of course. The ‘fundamental difference’ between the heavens and the earth turned out to be a trick of perspective: we’re simply too close to the earth and too far from the stars to readily see how the same set of natural laws govern both of them.
The Blind Brain Theory makes the same argument regarding the mind and the brain, the ancient dichotomy between how we understand nature and how we understand ourselves. We’re simply too close to ourselves to comprehend ourselves the way we comprehend the natural world. We lack the proper perspective—and even more importantly, we lack the perspective required to see the parochial limits of this perspective. We’re not simply blind, we’re blind to this blindness as well, so whenever we introspect, ‘contemplate consciousness,’ we think we’re apprehending a fundamentally different order of reality, one possessing freedom, reason, meaning, purpose, and morality.
Now we find ourselves at an absolutely unprecedented moment, historically speaking. So long as the brain remained a ‘black box,’ something too complicated to be scientifically understood, we could indulge our prescientific conceits without fear of contradiction. Now that cognitive science is a multibillion dollar industry, these days are fast drawing to a close. The ‘crisis of meaning’ has come to a head. Either we’re something fundamentally different and things like meaning exist, or we’re simply more nature and meaning is a kind of fantasy.
Think about what makes fantasy, fantasy. Science. What makes gods, magic, spirits and the like fantastic—or especially fictional—is the fact that science has thoroughly expelled them from any rational understanding of the natural world. In this sense, you can look at fantasy fiction as Shrek’s swamp, if you like, the place where discredited traditional entities and posits go to live as shadows of their former, scriptural and folkloric glory.
What I set out to do was to write the first fantasy that self-consciously included meaning with gods, magic, and spirits, to write a fantastic apocalypse that mirrors our ongoing ‘semantic apocalypse’ in photographic negative.”
Yes! I read this in Grimdark Mag and remember thinking that I finally get what you are trying to say with BBT. I truly do think it’s worth expanding in a blog post. In fact, I recommend you make it a linked post up top, “READ ME FIRST” style. That way people coming to the blog can have an entrée that is understandable to an out-group person.
http://gothamist.com/2015/08/20/dismaland_banksy_first_look.php#photo-3
http://www.npr.org/sections/thetwo-way/2015/08/21/433476889/banksys-dismaland-living-up-to-its-name-with-ticket-debacle
Gold… I’ll find a way to work this into the next Disney sequel, or die trying!
Everyday language is highly descriptive and even depends on which word in the sentence you emphasize: I WENT to the store, I went to THE store, etc. Scientific and philosophical language and writing conform to stricter rules. Arithmetic does have strict rules, but may really exist because of perceived order, which is not just outside of us but also inside of us. Just think of your motor system: not just the fingers and toes you count on, but every voluntary muscle has a precise order.
That’s actually a working thesis in neurolinguistics… I forget the guy’s name now. The idea is to see linguistic expressions via the sequencing of motor processing, if I remember correctly.
OK, so I made a first pass at this second part; more will probably be necessary, but in the meantime, I’ve got two questions. The first is essentially the same nagging doubt I also have regarding Dennett’s views (with whom I think you share some common ground—rather than anosognosia, he uses scotomas as his example, but makes much the same point that the pernicious thing is not that you don’t see the things within your blind spots, but that you don’t see that there’s something you don’t see).
That doubt is, essentially, that it at least seems as if the really puzzling parts of the problem are in a sense smuggled into the ostensible solution from the start—namely, when you assert that “the constraints pertaining to Thespian social cognition … also apply to Thespian metacognition”, it seems on first blush that you assume salient factors of intentionality in its explanation. For in order to be capable of social cognition, Thespians certainly need the facility of ascribing motives, thoughts, and intentions to other Thespians—but these are capabilities that only come with intentionality. In then turning this capacity on themselves, Thespians may ascribe intentionality to themselves—but only if they first possess the ability of ascribing properties to things, which seems ineluctably intentional to me.
This is reinforced by your example regarding Anton-Babinski syndrome: patients suffering from this disorder may take themselves to see despite the fact that they don’t because they have the facility to take themselves to be a certain way (that they happen not to be); but in turning this analysis inwards, to some ‘inner eye’, you are essentially saying that they just take themselves to be capable of taking things to be a certain way—i.e. they falsely believe themselves to have a certain capacity, the intentional capacity of taking things to be a certain way; but this false belief is a belief nevertheless, and a belief is a way of taking things to be a certain way, an intentional capacity. But in order to do this, they of course need in fact be capable of taking things to be a certain way.
Now, I’m certain you’ve given this whole thing a lot of thought, and the above seems too obvious to expose a genuine flaw in your arguments; but nevertheless, I couldn’t on first reading find an answer to this worry, hence my first question: how could I believe myself to be intentional when in fact I’m not, when believing things itself is an intentional capacity?
The second question is sort of a takeover from my comments on your previous post: how does ignorance lead to a positive belief in things that don’t happen to be actual? If I don’t know that something is there, how does that lead to my believing something else is, that in fact isn’t?
Dennett lost his nerve, on my account! But then as they say, fools go…
“Thespians certainly need the facility of ascribing motives, thoughts, and intentions to other Thespians—but these are capabilities that only come with intentionality. In then turning this capacity on themselves, Thespians may ascribe intentionality to themselves—but only if they first possess the ability of ascribing properties to things, which seems ineluctably intentional to me.”
I don’t see why Thespians need any such things (any more than humans). What they do need is the capacity to predict/explain/manipulate one another. The ‘explanation’ bit in particular requires posits, explicit communication. We know the Thespians will be blind to one another’s brains, so we know they’ll have scarce data with which to accomplish any of this. This means they’ll have to rely on simple heuristics, and that their posits, rather than cutting the nature of what they are at the joints, will cut the nature of the practical problems they encounter at the joints. Since they have no metacognitive inkling of any discrepancy between these two ways of carving, it seems safe to presume they will be convinced (the same as humans) that their heuristic means of solving one another isn’t heuristic at all, that their posits cut nature itself at the joints.
The burden is actually yours, at this point, since the model posed above is the more epistemically modest one. You need to explain how Thespians could have anything other than ‘fast and frugal heuristics’ to work with, and why we should expect the posits associated with them to enable accurate and useful social cognition, as opposed to merely useful cognition.
To me, pondering this problem brings home the kind of biological miracle that intentionalists are presuming–but I’m open to being wrong. Give it your best shot!
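For readers unfamiliar with the Gigerenzer-style ‘fast and frugal heuristics’ being invoked in this exchange, a toy sketch may help: a ‘take-the-best’ decider that consults cues one at a time, stops at the first cue that discriminates, and ignores all remaining information, causal or otherwise. (The cities, cues, and values below are invented for illustration.)

```python
# Take-the-best: check cues in descending order of validity and decide on
# the first cue that discriminates between the two options.

def take_the_best(a, b, cues):
    """cues: list of (name, lookup) pairs in descending validity.
    lookup maps an object to 1 (cue present), 0 (absent), or None (unknown)."""
    for name, lookup in cues:
        va, vb = lookup(a), lookup(b)
        if va is not None and vb is not None and va != vb:
            # First discriminating cue decides; everything else is ignored.
            return (a, name) if va > vb else (b, name)
    return (None, None)  # no cue discriminates: fall back to guessing

# Invented example: which of two cities is larger?
facts = {
    "Springfield": {"has_airport": 1, "has_university": 1},
    "Shelbyville": {"has_airport": 0, "has_university": 1},
}
cues = [
    ("has_airport",    lambda c: facts[c].get("has_airport")),
    ("has_university", lambda c: facts[c].get("has_university")),
]
winner, used = take_the_best("Springfield", "Shelbyville", cues)
print(winner, used)  # → Springfield has_airport
```

The point of the sketch is only that such a procedure can be reliably useful while consulting almost none of the underlying causal structure, which is the sense of ‘heuristic’ at issue above.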
“What they do need is the capacity to predict/explain/manipulate one another.”
Yes, and I simply don’t see such a capacity working without some form of intentionality/aboutness/representation. Basically, I must be able to make propositions about another being in order to predict its behaviour—I must be able to say things like, ‘Laurence over there is reaching for the bottle because he is thirsty’. This is a sentence about Laurence; it in some way pertains to Laurence, represents him and his actions, etc. It has conditions of satisfaction, direction of fit, and so on. It means something; it’s not a string of idle symbols. How this comes about is in my view exactly what is to be explained.
In order to engage in social cognition, there needs to be some form of representation of one agent accessible to another; in some way, what that agent cognizes must be about, or pertain to, another agent. Thus, starting out with this capacity as given, one might be able to explain how metacognitive representation, etc., comes about, but I think the most difficult part of the job has merely been swept under the rug.
Linguaformalism, as Churchland calls it: the assumption that what’s going on in our heads has to be isomorphic with how we communicate what’s going on in our heads. What’s the evidence for this, Jochen? I know philosophers talk about it a lot, so I appreciate there’s a lot of institutional inertia, but then philosophers can’t even agree on how to formulate their explananda.
Empirically speaking, it’s simply not the case that social cognition requires “some form of representation of one agent accessible to another.” Do ants ‘represent’ one another? Meerkats? Capuchins? Chimps? Personally, the ‘propositions about another being’ is something I’m only aware of doing in the process of communicating something I somehow already understand. Since it’s an empirical fact that my own complexities escape me, doesn’t it simply stand to reason that ‘propositions about another being’ is heuristic? What else could it be? Given the sheer cost of ‘accurate cognition,’ the idea that I understand others on the basis of ‘accurate representations of intentional states,’ as opposed to heuristically, strikes me as… well, wildly implausible. Are you saying that, despite the fact that evolution relies on heuristics to solve complicated problems everywhere in nature, for some strange reason it granted us humans the miraculous capacity to accurately cognize one another? How is this system supposed to work?
And once again, what information could possibly form the basis of such an amazing capacity? We can’t see through skulls, so we pretty clearly only have behaviour to go on. From the standpoint of bounded cognition, this all but makes ‘fast and frugal heuristics’ mandatory. If this is the case (and how could it not be?) then you need to explain how ‘propositions about another being’ can at once be radically heuristic and accurate.
“Empirically speaking, it’s simply not the case that social cognition requires ‘some form of representation of one agent accessible to another.'”
But only those forms of social cognition that do will serve as an underpinning to your model, it seems to me. What is basic to BBT is a form of self-attribution—we believe ourselves to have a certain capacity that (according to you) we in fact don’t have. But this necessitates the capacity to make any attributions at all. One can plausibly argue that we have this capacity, as we exercise it in a social context, and thus can turn it upon ourselves; but this can’t serve as an explanation of just how that capacity works.
Take the ant, which I think we can reasonably assume is ‘mere mechanism’: a certain pheromone, a certain pattern of antennae drumming on its carapace, will engender certain behaviours. It will not engender beliefs of the form that ‘ant Z has found some source of food, and wants me to follow it’ (or at least, it need not—the pheromone acts as a mere switch, simply triggering the appropriate behaviour, as for instance in an expert system). But without such beliefs, there’s nothing to turn inwards; while an ant may autocue itself by producing a pheromone that triggers food-gathering behaviour in itself, this is not accompanied by some belief that it has found food, because it has no way to produce such a belief.
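The ‘mere switch’ picture of the ant can be sketched as a bare stimulus-response table: no representation of the other ant, no belief, just a cue keying a behavioural script. (All cue and behaviour names below are invented for illustration.)

```python
# An 'ant' as pure stimulus-response: the pheromone doesn't mean
# 'food over there'; it merely indexes a behavioural script.

BEHAVIOUR_TABLE = {
    "trail_pheromone": "follow_trail",
    "alarm_pheromone": "scatter",
    "antennal_drumming": "regurgitate_food",
}

def ant_step(stimulus):
    # Nothing here is *about* another ant: the cue triggers the script
    # directly, with no intervening belief or attribution.
    return BEHAVIOUR_TABLE.get(stimulus, "wander")

print(ant_step("trail_pheromone"))  # → follow_trail
print(ant_step("novel_smell"))      # → wander
```

The question dividing the two commenters is whether scaling up and interlinking many such tables ever amounts to ‘taking things to be a certain way,’ or merely to more elaborate switching.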
It simply does not matter here whether these beliefs are accurate or mere heuristics—nobody’s saying that our intentional content is always perfectly accurate, or even tracks the outside world especially well. But there must be some way of producing these beliefs, and in your account, those beliefs stand at the very bottom, because if you take them out—as in the case of the ant—, I simply don’t see how your model produces anything like the intentionality we seem to possess.
“What is basic to BBT is a form of self-attribution—we believe ourselves to have a certain capacity that (according to you) we in fact don’t have. But this necessitates the capacity to make any attributions at all. One can plausibly argue that we have this capacity, as we exercise it in a social context, and thus can turn it upon ourselves; but this can’t serve as an explanation of just how that capacity works.”
There’s no ‘attribution’ in BBT, if by attribution you mean an intrinsic intentional operation. The thing I’m always at pains to remind intentionalists is that although they have tradition and intuition on their side, theirs is actually the greater explanatory burden. Adding ‘attribution’ to an explanation absent any naturalistic explanation as to what attribution consists of simply strands us with the same problem. This is where they typically go ‘God of the gaps’ and try to spin ignorance into something more virtuous, like ‘irreducibility.’ So to begin with, there’s a real sense in which invoking attribution simply amounts to repeating the problem to be solved.
Different information structures, such as ‘human behaviour,’ cue the application of socio-cognitive systems; we know that much for sure. We are downstream components of our environments, like every other living thing. The problem is that we have no way of cognizing ourselves AS SUCH: we are natural in such a way that we cannot cognize ourselves as natural, and so we’ve developed a variety of tools for cognizing ourselves otherwise. So it seems to us that we are always upstream, that we are where efficacy resides, and being cued by information structures (sociocognitively or metacognitively, it makes no difference) suddenly becomes an inexplicable, automatic act, ‘attributing,’ ‘taking-as,’ or what have you. These heuristic assumptions allow us to carry out quite a bit of work in practical contexts, but since they are ‘work-arounds,’ they leave us baffled whenever we attempt to use them to solve theoretical problems.
“But there must be some way of producing these beliefs, and in your account, those beliefs stand at the very bottom, because if you take them out—as in the case of the ant—, I simply don’t see how your model produces anything like the intentionality we seem to possess.”
Again, it serves to remember just how profoundly ‘belief’ has snarled cognitive science in controversy. The bottom line is that we don’t possess the intentionality we are inclined to think we possess, that it is every bit as impossible as it seems. Ants rely on simple sets of cues to generate a fascinating array of behaviours. Now say you had a million different species of ants, each with distinct systems for generating adaptive behaviours on the basis of simple cues. Now say you found a way to integrate and link all these species into a ‘super-organism,’ combining cue sets and behavioural scripts to generate an apparently endless array of behaviours on the basis of an apparently endless array of cues. Where should we go looking for intentionality in such an organism? Should we bother?
“The problem is that we have no way of cognizing ourselves AS SUCH: we are natural in such a way that we cannot cognize ourselves as natural, and so we’ve developed a variety of tools for cognizing ourselves otherwise.”
This right here is where my problem lies: our ability to cognize ourselves any which way is what is to be explained; that’s the mysterious thing. I’m not incurring any explanatory burden in pointing this out, because whether or not a phenomenon exists does not depend on whether we can explain it; so if you say, but how could it possibly be that way?, I can only answer—no idea. But just because none of us can conceive of a way for things to be the way they appear to be, or even to merely appear that way, does not buy us the luxury of concluding that well, then they can’t possibly be that way.
We’ve taken our first stumbling toddler’s steps towards explaining anything at all. For something like 96% of the universe, we’ve got no real clue what it’s made of. That a lot of things are still mysterious to our barely-developed and hardly-exercised cognitive skills is no great wonder at all; in fact, the wonder is how much we already understand, given our minute capacities and the timescales over which we’ve been applying them. Give it some ten thousand more years, and maybe then only 90% of everything remains utterly mysterious.
“Again, it serves to remember simply how profoundly ‘belief’ has snarled cognitive science in controversy. The bottom line is that we don’t possess the intentionality we are inclined to think we possess, that it is every bit as impossible as it seems.”
And again again, just because the term ‘belief’ is difficult to analyze, doesn’t mean that there’s nothing there—there are many unproblematic ways in which we take ourselves to believe things, such as that the sun shines now. But figuring out how it can conceivably be the case that we have such a belief—that anything physical can pertain to a completely different thing—, that’s the tricky part, and as far as I can see doesn’t receive an answer on your proposal.
Social cognition is too high a level to start from, everything that’s difficult is basically already built in—tell me how, on your model, a creature incapable of social cognition the way we have it, that is, incapable of forming beliefs about other creatures, of attributing motives to them, of predicting, modelling, or in some other way representing them, can come to ‘cognize itself’ in some way; that’s the question that I think is interesting. To me, you’ve got it exactly backwards: social cognition presupposes cognition, presupposes being able to form beliefs about, thoughts pertaining to other members of some social group, but that’s what we’d like to explain.
I can form the thought ‘Scott Bakker believes BBT solves the problem of intentionality’, which is an instance of social cognition, and to me, it clearly seems to refer to a well-defined state of affairs. Now, in that, I may be mistaken and deceived; but any explanation of intentionality must take into account how I can come to be thus deceived. In fact, that’s where I usually part ways with eliminativist accounts: it seems quite plain to me that in order to be deceived into possessing intentionality, I must first be capable of being deceived; but being deceived is itself intentional—it entails representing to oneself a state of affairs that happens not to obtain.
The explanatory burden comes with the perennially mysterious intentional posits. I’m telling you those posits, on any plausible picture of the human brain, have to be heuristic. If they are heuristic, then they don’t cut nature at the joints.
“I can form the thought ‘Scott Bakker believes BBT solves the problem of intentionality’, which is an instance of social cognition, and to me, it clearly seems to refer to a well-defined state of affairs. Now, in that, I may be mistaken and deceived; but any explanation of intentionality must take into account how I can come to be thus deceived.”
But why is ‘intentionality’ the explanandum here, and not deception (which is easily understood in terms of decorrelations between our cognitive systems and our environments)? We know, thanks to cognitive neuroscience, that what’s actually going on when you form your thought is completely neglected by the thought. So we know your thought is (extraordinarily) heuristic, that it somehow involves a way of making sense of an astronomically complex set of systems absent high-dimensional information relevant to those systems. As a component of a radically heuristic system, we know it’s incapable of cutting nature at the joints.
The problem is that we have no metacognitive inkling of this information insufficiency, so we suppose sufficiency (and why not? given that the metacognitive resources exapted for the solution of theoretical problem-solving almost certainly did not evolve a magical ‘theoretical adequacy detector’ just in case we turned out to become philosophers). We think that thoughts, desires, beliefs, and so on, rather than being posits that help us cope with radical information scarcity, must be real in some kind of high dimensional sense.
Since you think they are real in some kind of high dimensional sense, you’re on the hook for how this could possibly be, given the fact that the systems subserving them are–as a matter of empirical fact–heuristic. How do systems adapted to solve in conditions of abject information scarcity manage to get a hold of ‘reality’?
“The explanatory burden comes with the perennially mysterious intentional posits.”
Except I’m not making any posits, I’m merely reporting data—that is actually how things seem to me; and any theory purporting to account for intentionality must account for these data. I’m not bringing forward any explanatory hypothesis, merely pointing towards the phenomena in need of explanation; hence, I do not incur any explanatory burden. I’m just sitting here, all like hypotheses non fingo, merely pointing to the fact that the thing to be explained is how we can cognize anything as anything, before we can use this to explain how we mis-cognize ourselves as something we’re not.
“But why is ‘intentionality’ the explanandum here, and not deception (which is easily understood in terms of decorrelations between our cognitive systems and our environments)?”
Intentionality is that which you have to explain first, before you can explain deception, since being deceived is being deceived about something—an intentional state. Correlations (or decorrelations) don’t suffice, they’re purely syntactical. Consider two colored cards in two sealed envelopes, which can be either red or green, but we know that they only occur in pairs of opposite colours—i.e. the colours are correlated. Hence, if you look into your envelope, you can infer the colour of the card in my envelope; but it’s not the case that the card in your envelope is about that inside mine, rather, it’s simply you, an intentional being, who uses your card as a representation of mine. The mere correlation between both cards means that one can be used as standing for the other, but this usage—like all symbol usage—necessitates an already-intentional being, and can’t hence be used to account for intentionality.
In the same way, decorrelation is not deception; it only becomes deception once some already-intentional being uses something as a representation for something else, when in fact the correlation that would make this possible is not present. Only once you suppose that there is a correlation between the cards in the envelopes, while in fact, they’re anticorrelated or not correlated at all, and hence, wrongly use your card as representing mine, are you deceived about the colour of my card.
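The envelope example above turns on a simple logical point: a real correlation licenses inference from one card to the other, while an *assumed* correlation that fails to match the real pairing produces systematically wrong inferences. A minimal sketch of that point (in Python, with hypothetical function names chosen purely for illustration):

```python
import random

def deal_anticorrelated_pair():
    """Deal two sealed cards whose colours always come in opposite pairs."""
    mine = random.choice(["red", "green"])
    yours = "green" if mine == "red" else "red"
    return mine, yours

def infer_other(seen, assumed_rule):
    """Infer the unseen card from the seen one, under an assumed correlation rule."""
    if assumed_rule == "opposite":
        return "green" if seen == "red" else "red"
    if assumed_rule == "same":
        return seen
    raise ValueError("unknown rule")

# When the assumed rule matches the real pairing, inference always succeeds.
mine, yours = deal_anticorrelated_pair()
assert infer_other(mine, "opposite") == yours

# The 'deception' case: the cards are really anticorrelated, but the
# inferrer wrongly assumes they match -- every inference then fails.
assert infer_other(mine, "same") != yours
```

Nothing in the code is ‘about’ anything, of course; the disagreement above is precisely over whether the *user* of such a rule must already be an intentional being, or whether the rule-following itself exhausts what there is to explain.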
“The problem is that we have no metacognitive inkling of this information insufficiency, so we suppose sufficiency”
And again: how can we suppose anything, when we are not intentional beings, since supposing something entails having a belief about something? Whenever you follow the origin of the apparently intentional links your BBT accounts for by supposing that they are generated via an imperfect, heuristic pattern matching and prediction system turned upon itself upstream, you will find that you end up with links that presuppose an already-intentional system, links where something somehow pertains to something else. It’s these links you need to account for, and which it seems to me you have a fish-and-water problem of missing precisely because they are so ubiquitous.
“I’m not bringing forward any explanatory hypothesis, merely pointing towards the phenomena in need of explanation; hence, I do not incur any explanatory burden.”
Why so evasive? I’m not asking you to prove why you SEEM to somehow intuit intentional phenomena (I agree with you on that much), I’m asking you to evidence the fact that you DO intuit intentional phenomena. I have an account of the seems, blind brain theory, one that turns on the picture of human cognition arising out of science. If you have no satisfactory account of the do, that’s okay, but it means you have far and away the weaker theory.
These are some pretty straightforward questions, Jochen. If you think you see something genuinely inexplicable, then the rational thing to do, it seems to me, is to begin by ruling out possible ways you might have been tricked. If it applies to ghost-hunters, then it applies to you!
So to restate the question: Given that social cognition is heuristic, how could it possibly cut nature at the joints?
“And again: how can we suppose anything, when we are not intentional beings, since supposing something entails having a belief about something?”
If you presuppose the reality of intentionality, sure. But surely you don’t want to presuppose your conclusion in support of your conclusion. Blind brain theory actually provides a good way to understand the short-circuit you’ve run afoul of (as well as why the human race has wasted thousands of years without making heads or tails of intentionality). High dimensionally speaking, humans suppose nothing at all, ever, nor do they believe or attribute or desire. This is why none of these things can be found in nature: they simply do not exist. ‘Suppose,’ ‘belief,’ ‘desire,’ all the idiomatic apparatus of intentional cognition, belong to heuristic systems adapted to solving human affairs using as little data as possible. But since the low-dimensional, heuristic nature of these systems is invisible to the system as a whole, instances of using these systems do not register as heuristic or low-dimensional at all, so we apply them and we apply them out of school, perpetually baffled by the way they always fail to solve the kinds of theoretical problems we set them.
You’re the only one presupposing intentionality, Jochen. But because you can’t see the impasse (one that 2500 years of theoretical futility has been shouting), you assume there is no impasse, and that therefore I must, of necessity, be forced down the same blind alleys as you.
“Why so evasive? I’m not asking you to prove why you SEEM to somehow intuit intentional phenomena (I agree with you on that much), I’m asking to evidence the fact that you DO intuit intentional phenomena.”
Well, see, I’m not sure that I do intuit intentional phenomena apart from it merely seeming to me that I do—but that’s merely because I don’t see the difference. The thing is that if you agree that it seems to me as if I had genuine intentionality, then you agree that things can seem a particular way to me, which of course means that I possess intentionality. So that’s my evidence there.
What would you say to a thought-eliminativist trying to convince you that you don’t actually think, you merely think you do? Because to me, that’s what you’re doing: claiming that we don’t possess intentionality, but merely believe we do. That things don’t seem a certain way to us, but that it only seems to us as if things seemed a certain way.
“So to restate the question: Given that social cognition is heuristic, how could it possibly cut nature at the joints?”
I don’t think that I’m committed to thinking it does—I’m well aware that almost everything within my awareness is simply a series of just-so stories, fudges, and cut corners designed not to provide me with an accurate picture of the world, but merely to enable me to get around in it in some way. But the thing is, these are still objects within my experience, intentional objects, no matter how well they track the actual world, or don’t.
“High dimensionally speaking, humans suppose nothing at all, ever, nor do they believe or attribute or desire.”
That may very well be the case, but the claim then obliges you to explain how it comes to pass that we take ourselves to suppose, believe, and desire, given that taking ourselves to be some way is just the sort of thing you claim does not exist.
Whenever you give a sketch of your explanation, you inevitably use vocabulary that I only know how to make sense of within an intentional framework. Take this passage:
“But since the low-dimensional, heuristic nature of these systems is invisible to the system as a whole, instances of using these systems do not register as heuristic or low-dimensional at all, so we apply them and we apply them out of school,”
You talk about the nature of these systems being ‘invisible’ to the system as a whole, implying that the system can ‘see’—that is, can have certain beliefs about the systems it uses to try to make sense of its own nature; you speak of using and applying these systems, which implies using them as something, i.e. them being a certain kind of thing to the user; you speak of these usages ‘registering’ a certain way, and so on.
You keep talking about us being deceived about our nature, about taking ourselves to be other than we are, about being unable to see through all the high-level abstractions down to all the messy low-level details—all of which, I think, is probably right to some degree: we indeed are deceived about our true nature, consider us to be different from the way we really are, etc.
But in order to do so, we must first be able to be deceived, to conceive of ourselves in some particular way, and so on. Without this, you simply could not say that we are wrong about what we are, because we wouldn’t have the capacity to be either right or wrong about anything, because there simply would not be any aboutness to our cogitations.
“Well, see, I’m not sure that I do intuit intentional phenomena apart from it merely seeming to me that I do—but that’s merely because I don’t see the difference. The thing is that if you agree that it seems to me as if I had genuine intentionality, then you agree that things can seem a particular way to me, which of course means that I possess intentionality. So that’s my evidence there.”
So you’re right because you feel you’re right? I was worried you would say something like this. You should have told me your view was irrefutable to begin with… you would have spared me the effort. You have managed to confirm the view of several on TPB who maintain that intentionalism is primarily a religious position, tho!
Small wonder you guys can’t agree on how to formulate your explananda, what with everyone being right about what they seem to intuit because they intuit it! But I hope you’ll excuse me for not considering it a credible view.
“So you’re right because you feel you’re right? I was worried you would say something like this.”
Yeah, that’s not even remotely close to what I’m saying. I’m merely pointing out that if you say that you agree that things SEEM a certain way to me, you agree that I possess intentionality—because that’s what things seeming some way to someone or something is. But if believing I’m just some closed-minded quasi-religious mystic helps you dismiss my comments, then while I think it’s a somewhat frustrating conclusion to our discussion, I suppose I can understand the need.
I apologize for the religious comment. It gets exhausting, sometimes.
So are you saying that ‘seeming’ is irreducibly intentional (as opposed to merely heuristic)? How is this not simply begging the question? Even if it isn’t (and I can’t see how it can’t be), how is it supposed to count as ‘evidence’?
For that matter, how can ‘appearance/reality’ be anything but an incredibly schematic way to look at complex systems cognizing complex environments? Useful yes, but proof of intentionality? Not at all.
“I apologize for the religious comment. It gets exhausting, sometimes.”
No worries, I’ve been on the internet too long to be that easily offended (although I do tend to snap back, and for that, my apologies).
“So are you saying that ‘seeming’ is irreducibly intentional (as opposed to merely heuristic)?”
Well, it might, of course, be ‘merely heuristic’, but I still fail to see what that buys you. If something seems a certain way to me, then I have a thought that is about something else, and that takes that something else to be a particular way—that is, I have a thought with representational content, with aboutness, with (in other words) intentionality. To me, this doesn’t amount to inferring or concluding intentionality, it’s definitional—if I have such thoughts with such contents, then I have intentionality. If things can seem a certain way to me, then I have intentionality, because that’s all there is to it. And certainly, that is what we would like to have explained: how I can have thoughts that are about something else, how, in the broadest sense, my state as a physical object can come to be about, to refer to, another object (or to appear as if it does so refer—again, there is no significant difference between the two, because in appearing a certain way to itself, my mind does in fact refer, to that which it appears as).
This might well be a mere heuristic, but then, you still owe an explanation about how heuristics can generate the appearance of something being about something else—and even here, in this formulation, you see the problem I’ve been highlighting: if heuristics generate some appearance, then that appearance is an appearance of something, it is something appearing a certain way—that is, it is intentional, about something.
And no matter how far I try to follow your model, it seems I always come back to something seeming a certain way, something appearing, something being deceived about something else—that is, I always end up on formulations that presuppose that one physical system can be about another in some way. But how that works is precisely the question to be answered. So, to me at least—bounded by my degree of (mis-)understanding of your model—it seems like the only intentionality you get out is that which you put in, in assumptions like organisms being able to perform social cognition, etc.—which is certainly cognition about something, some other member(s) of some particular social group, that is, cognition which possesses an intentional object.
And then you say, ah, but that’s only a splotchy series of cut corners and glitches that only appears to us to be a smooth, truth-tracking sequence of intentional cogitations because we’re blind to the gaps, a story merely confabulated from incomplete, sketchy, and often outright false data; and I’m inclined to agree. But in order to say this, you must still admit that things can appear to us in a certain manner; in order to mis-cogitate about something, we must first be able to cogitate about something. What the story is about, how it appears to us, is of no consequence; the remarkable fact is that it appears to us a certain way. And it’s that faculty that needs to be explained. Not the way the content of our minds appears to us, but the (apparent!) fact that it has content that appears to us in some way at all—for surely the idea that ‘I am an intentional being’ is cognitive content, even if it is false. It refers to something, an ‘I’, which may or may not exist; it asserts something about this ‘I’, which may or may not be true. But certainly, there’s something it is about, simply by appearing to be about something, because in so appearing, it is about that which it appears as—whether that something has any correlate such that the proposition expressed comes out true is of no importance here.
To put it bluntly, how does something seem some way to me without there being some mental content that is about that which seems this way? How could the sky seem blue to me without my thoughts being about the sky? How could my mind seem intentional to me, without my thoughts being about my mind—thereby implying that it is, in fact, intentional? If I have the power of anything seeming a certain way to me, then, it seems to me, I have intentionality—because that’s just what intentionality is: things seeming a certain way to me.
So in denying that we possess intentionality, and then claiming that it only seems like we have intentionality to us, you’re saying that things don’t actually seem like anything to us, it just seems as if they do.
The ‘original intentionality’ intuition arises because deliberative metacognition has no way of cognizing the heuristic limits of its objects, nor its own heuristic limits. So we assume intentional posits cut the world (as opposed to a specific set of problems) at the joints, then, in our attempt to characterize this real thing, we assume the sufficiency of our subsequent intuitions (even though they are heuristic in extreme, as they have to be on any plausible account of metacognitive function). Since intentional posits amount to ways to cognize absent information regarding what’s going on, absent causal information, they seem to possess all sorts of impossible properties.
It’s a cognitive illusion, turning on the same heuristic structure as visual illusions. The difference is, the information comprising our ‘cognitive field’ is far, far more sparse than our visual field–we lack the ready contrast to identify it as such. What you keep doing, on my account, is repeat the same misapplication again and again, then say, ‘See, there’s no way out!’ This is what makes arguing this so frustrating–it actually gets creepy to me, sometimes, the mechanical regularity with which intentionalists repeat this pattern.
And the fact is, dialectical stalemate suits me fine, because abductively and empirically, I have the far more powerful position. Aside from institutional inertia, all the intentionalist has is intuition, and a track record of thousands of years of futility. You just have this ‘a-ha’ move (which I can explain) that makes you think embracing irresolvable confusion is inescapable.
If you don’t want to understand how my position works, Jochen, then keep repeating this move. If you do want to understand–at least ask why my way of looking at things works for me and others–think through the description I gave above. Metacognitive neglect is a fact. The heuristic nature of human cognition is a fact. Is it really just a coincidence that intentionalism bears all the hallmarks of heuristic confusion?
I’m offering a way out here.
“If you do want to understand–at least ask why my way of looking at things works for me and others”
The thing I will ask you instead is: how can I have intuitions if I have no intentionality? How can I have metacognition, which is, after all, cognition about cognition—and hence, intentional? How can I assume things? How can I be subject to an illusion?
If you say things like ‘metacognitive neglect is a fact’, then—on every way I know how to parse these words–you’re saying ‘intentionality is a fact’, since metacognition is by its very nature about something (cognition), and hence, is intentional. This is not intuition, this is mere analysis of the words you use.
It’s not the particular intuitions that I have from which I start—it’s the fact that I have intuitions at all. This needs explanation, before it can be used in an explanation, as you want to do.
“The thing I will ask you instead is: how can I have intuitions if I have no intentionality? How can I have metacognition, which is, after all, cognition about cognition—and hence, intentional? How can I assume things? How can I be subject to an illusion?”
You’re doing it again. Rather than provide any evidence for intentionality, you simply assume the very thing you are purporting to argue. Don’t you think this curious? Doesn’t it trouble you that question-begging is the only weapon in your arsenal? From an outsider’s perspective it is really quite remarkable to watch. You remind your interlocutor that the question of whether original intentionality is required to explain aboutness is the question to be solved, and then they reply (over and over and over) that aboutness makes no sense without original intentionality. Personally, I have a hard time seeing how it differs from the notion that what warrants assuming the divinity of revelations from God is the fact that God sends them. How is it you see your argument working?
Cognition is never ‘about’ anything. About is an artifact of one way we cognize cognition, given the fact that we have no access to the facts of cognition. Aside from your occult experiences, what does a heuristic account get wrong?
“Rather than provide any evidence for intentionality, you simply assume the very thing you are purporting to argue. Don’t you think this curious? Doesn’t it trouble you to have that question-begging is the only weapon in your arsenal?”
See, that’s part of why I proposed the switcheroo above: because if you think that this is what I’m doing, then you fundamentally misunderstand the argument I’m making. So, before I try to explain myself again, I’d like to ask you, in the way I’ve tried regarding your position, to sum up what you take mine to be (and correct me about yours); perhaps then we can make some progress.
I take this to be the clearest summary of your argument:
You are convinced that intentionality is a transcendental condition of cognition. But this isn’t an argument, Jochen. It’s a declaration of faith. You do see that? At any rate, it makes your position irrefutable, insofar as you take any instance of cognition to evidence intentionality. I take every instance of certain kinds of cognition to evidence certain kinds of heuristics, but you don’t see me saying to you, ‘But my argument isn’t that cognition is heuristic, it’s that cognition must be heuristic for us to have any cognition at all.’
If I did opt for that strategy, where would that leave us?
OK, I guess this discussion has just about run its course; neither of us seems to be moving towards the other a great deal. But rather than just leave with hardened battle lines, I’d like to try something different for once. Just so as to get clear whether we’ve at least managed to convey our respective viewpoints to one another, even though neither was swayed by it, I’ll just try and give a brief rundown of your point of view, for you to evaluate, and I’d like to invite you to do the same. That way, at the very least, we’ll know whether there’s been some successful communication between the two of us, or if everything was just lost to the noise.
So your story, in brief, seems to me to be the following. We’re natural creatures in a natural world; within this world, information processing, in some sense, occurs, some of which is performed by us in order to facilitate our survival, as accounted for by evolutionary theory. We are, in a term introduced by Murray Gell-Mann and James Hartle, IGUSes—Information Gathering and Utilizing Systems.
Now, the facilities we use to do this information processing are shaped by evolutionary necessity, and hence, not adapted towards tasks like philosophy or any other form of apprehending the world as it is, because that’s quite simply not something necessary for our survival.
Some of this information processing includes what you call social cognition—that is, modelling/predicting other members of our social group. That’s again a capacity simply suited for survival, as beings able to perform social functions efficiently incur a herd benefit towards successfully spreading their genes, to put it somewhat crudely. Moreover, this is a capacity which is, in principle, perfectly non-mysterious: a sufficiently capable intellect possessing the right concepts could see how it is a perfectly natural capacity working entirely within the natural world.
Trouble is, evolution did not shape us into such a capable intellect, and did not provide us with the right cognitive tools to perform this analysis. And well, why would it? It’s in the business of ensuring successful gene transfer, not in the business of creating beings capable of self-reflection. To suppose that the two goals aligned just by chance beggars belief. It creates survivors, not navel-gazers.
Thus, when we turn the facilities evolution has provided us with upon ourselves, they are simply insufficient to provide the right picture; instead of the naturally flowing information, what we see is a fragmented, incomplete picture that doesn’t tell us the whole story. And from our inability to see this whole story, we conclude, assuming the sufficiency of our cognitive tools, that there is no such whole story; instead of perfectly natural cognitive capacities that are of one piece with the rest of the natural world, we see a mysteriously divisive story, in which we can’t square the cognition we seem to possess with the processes we see occurring in nature, leading to the impression that they are somehow other, of a fundamentally different kind. We simply don’t possess the tools necessary to see that our capacities and the things the natural world consists of are fundamentally of one piece, that they’re the same kind of thing, and thus, it appears that we possess some inscrutable capacities that natural processes don’t account for.
This is of course very condensed and leaves out a few things, but it’s the gist I so far got of your theory. Does this seem about right, or did I grossly misunderstand some aspect of it?
“You are convinced that intentionality is a transcendental condition of cognition. But this isn’t an argument, Jochen. It’s a declaration of faith.”
It’s not; it’s merely analytic. On any way in which I (or anybody) can understand the words you’re using—illusion, deception, assumption, appearance, etc.—they include an element that pertains to something else. And whenever your terms involve such an aboutness, they involve intentionality, just as whenever your terms involve unmarried men, they involve bachelors. This is just a question of word meanings.
Now, I do believe I understand that you intend to say that in principle, from the outside, there is a way to analyze these terms that does not refer to aboutness or any such thing. But the thing is, your theory also entails that we are constitutionally incapable of accessing these ‘outside meanings’; we’re doomed to use the terms of the intentional idiom, and understand them in the way I’ve sketched above. Thus, your theory is inevitably formulated in terms that, on your theory, have no meaning, or at least, no meaning accessible to us; but then, what am I to make of your theory? On the level of understanding accessible to us, it’s false, since it’s circular; and if it can’t be formulated on that level, it just doesn’t have any content accessible to any of us.
Or, if that still isn’t any clearer, you may also take me to be making an empirical statement: every time I analyze my own cognition, I find it to be about something, to possess intentionality. Indeed, analyzing our own cognition implies cogitating about it. Now, I understand (or again, I think I do; you’ve neglected to comment on how appropriate you find my sketch of your theory) that on your position, I’m ultimately wrong about this—but nevertheless, it’s something that your theory must account for. Only, it’s constitutionally incapable of doing so: it’s itself formulated in terms that appear intentional to us, while denying the reality of what those terms refer to; hence, it can’t be used to analyze these terms, as it depends on them. You wish to doubt intentionality, but doubting itself is intentional.
You assume the conclusion that intentionality can be naturalized; then conclude from there that, since this is fundamentally at odds with our experience, we must be systematically deluded about our experience; but then, you cannot use terms derived from our experience to formulate a theory of it, since you’ve just denied their applicability. You’re proposing a model of the world on which our original model of the world is necessarily false; but that original model of the world is an essential ingredient in formulating yours.
A separate worry, by the way, is that if we’re really that fundamentally deceived about the functioning of our mind (although that again is a sentence I don’t know how to parse on your theory, since I don’t know—and can’t know—the meaning of ‘being deceived’ anymore), then there are no grounds on which to trust its conclusions; so it seems to me that, if I believed your theory, I really would have no grounds to do so, since the chain of reasoning leading up to it—couched, again, in vocabulary that ultimately doesn’t refer—is itself suspect, indeed wrong on your theory. There’d simply be no reliable way to tell whether it’s right, because it undermines the very faculties we rely on to arbitrate such things.
‘Analytic’? ‘All aboutness involves intrinsic intentionality,’ is a contentious theoretical claim, Jochen.
But again, two can play this game: ‘All aboutness involves heuristic cognition’ is an analytic statement. Every time you use the word ‘aboutness’ you are evidencing the heuristic nature of meaning, and the absence of original intentionality. So I know you’re wrong because you have no way of describing your position without using these terms, all of which evidence the heuristic nature of intentional idioms.
If this is the best intentionalism has to offer, then it’s in deep, deep trouble.
“‘All aboutness involves intrinsic intentionality,’ is a contentious theoretical claim, Jochen.”
Basically any definition of intentionality will be along the lines of ‘the power of the human mind to be about or to represent’ something, so I’d like to see some substantiation of the claim that it’s contentious; to me, it’s simply definitional—‘intentional’ is just a name we use for phenomena involving aboutness.
But again, I think you’re still missing the issue I’m raising. So let me try to condense this somewhat. Your theory (or at least a part of it) can be roughly encapsulated as ‘the evolutionary capacity of social cognition, if turned upon itself, yields the illusory impression that there are aboutness-phenomena, such as social cognition’.
But how am I to understand this sentence? It calls into question the very concept it begins with, and moreover, asserts that we can’t access the true nature of what we heuristically consider ‘social cognition’ to be, which is cognition about members of our social group. On your account, that can’t be what it is, since we only believe such things to exist due to our cognitive deficiencies at seeing what those things really are. But then, the formulation of your theory starts with a term whose meaning it calls into question, and thus, that formulation can’t be the actual formulation of your theory—since on your theory, such formulations are meaningless. It’s like saying, ‘I’ve discovered a fundamental truth about us: we can’t discover fundamental truths about us!’. That’s simply not a sentence that has any cognitive content.
Anyway, my aim here was simply to try and understand what your theory’s claims are; barring your protest, I’ll just consider my earlier summary to be reasonably accurate, so I can at least check that off my list.
Anyone can claim anything is analytic, so claiming analyticity (or aprioricity) doesn’t evidence anything, and raises a new batch of inscrutable mysteries, such as What is analyticity? or What is aprioricity? that no one can answer.
Your thumbnail was pretty good, save that it overlooks the problem the heuristic nature of metacognition poses to any attempt to describe the nature of cognition, the fact that it generates ‘sufficiency illusions,’ such as those likely underwriting your curious attempts to warrant intrinsic intentionality. I’m saying social cognition does not involve ‘aboutness,’ though applications of social cognition often involve the term ‘about.’ All you’re doing is defining the very thing you need to explain into ‘social cognition’—about as clear an example of begging the question as I can think of…
“Anyone can claim anything is analytic, so claiming analyticity (or aprioricity) doesn’t evidence anything, and raises a new batch of inscrutable mysteries, such as What is analyticity? or What is aprioricity? that no one can answer.”
Well, I disagree, somewhat wildly—I mean, there’s certainly more sense in saying ‘all bachelors are unmarried men’ is analytic than there is in saying ‘all bachelors are little black spots of mould on the shower curtain’ is analytic; but it’s also beside the point, really. While every definition of intentionality anywhere ever relates it to aboutness, if it’s such a red flag to you, let’s just let that term slide and talk about aboutness (ha!) directly instead. (I also disagree with the sentiment that’s been shining through a lot of your posts now that just because nobody knows a clear answer to something (yet), it must in some sense just be a useless pseudoproblem—as I said, give it another twenty or thirty thousand years, maybe then we can start formulating the difficult problems.)
Certainly, to my mind, and from a perusal of the literature, to many others’, as well, it seems that aboutness is a bit of a difficult thing to explain—whether or not you want to call it intentionality, or whatever word you want to use; the difference is merely verbal: the problem remains to explain how one thing can be about another. To most, this is the problem of intentionality, and saying that ‘intentionality is aboutness’ is merely definitional, but I’ll try to avoid that word from now on.
So, let’s try again. When you say, ‘social cognition turned inwards yields metacognition’, I can understand this, because of my understanding of what social cognition is. In social cognition, we attempt to formulate behaviour heuristics for other beings in order to predict and interpret their reaction to certain environmental stimuli. Turned upon ourselves, this then leads to formulating heuristics about our own behaviour—quick and dirty schemes that allow us to make sense of certain stimulus-response patterns, but which have no special obligation to conform to anything in the real world.
Of course, to me, and, as I think is your main point, somewhat inevitably to anybody, this seems a somewhat mysterious thing: social cognition involves aboutness; but an account of how one physical system can come to be about, or refer to, another is lacking, and moreover seems categorically impossible. And of course, understood on this level, your proposal does not yield any explanation: social cognition is itself already profoundly mysterious, and thus, assuming it as a building block, as if it were completely transparent, does not make any headway on explaining the apparent mysteries we find when cogitating about our cognition.
But then you say, well, that’s actually something we’re misled about exactly because this mechanism of backreflected social cognition does not yield a faithful picture of ourselves—in actual fact, there is an account of social cognition that’s perfectly nonmysterious, that does not involve any problematic aboutness or the like, but that is a perfectly natural occurrence in a natural world, as natural as a rock falling down under the influence of gravity. It’s just that this concept, because of our own necessarily flawed capacities of self-analysis, is not accessible to us—hence, we see mysteries where there, in fact, are none. So this is then the story: this perfectly natural social cognition, turned inwards, generates imperfect and heuristic metacognition, whose apparent characteristics—reference, aboutness—do not actually correspond to any real property; they are merely confused attempts at self-explanation. Problem solved!
However, recall that the first part of the story, the sentence ‘social cognition turned inwards yields metacognition’ is understood, by me, under the flawed conception of social cognition that makes it seem as if that capacity contained some intrinsic aboutness—only with this aboutness does it become clear to me how, if directed at myself, it leads to some form of metacognition. But then, in final consequence, you say that this understanding of social cognition is wrong, that it is not actually something that includes things like aboutness, and that the correct conception of social cognition is not accessible to me.
But then what about the sentence ‘social cognition turned inwards yields metacognition’? If my concept of social cognition is misguided, and I’ve got nothing to replace it with, then in what way am I to understand it? The consequence of your analysis of this sentence is that the very starting point of this analysis must be wrong—but then, this brings the whole analysis down. You’ve pulled out the epistemological rug from under yourself.
The only way to substantiate your theory would then be to produce an analysis of the term ‘social cognition’ that does not include reference or aboutness, that I could substitute in my understanding of the original sentence, and hence, that could be used to formulate a concept of metacognition—but of course, this is just equivalent to solving the problem of aboutness, i.e. finding a naturalized account of it.
Else, I could just go ahead and believe that such an account exists (which I, as a matter of fact, do—it’s just quite tricky); but then, I have no need for BBT anymore, since it yields no additional grounds for this belief. So either way, you’re in a dialectical bind: your analysis of the concept of ‘social cognition’ calls that concept as it is ordinarily understood into question, without substituting a new understanding; but this concept is what your analysis rests upon.
Social cognition implies imperfect metacognition implies us being wrong about aboutness implies there not being social cognition (of the sort which we have used to ground metacognition).
I think the essence, Jochen, of your disagreement with Scott is:
“Take the ant, which I think we can reasonably assume is ‘mere mechanism’: a certain pheromone, a certain pattern of antennae drumming on its carapace, will engender certain behaviours. It will not engender beliefs of the form that ‘ant Z has found some source of food, and wants me to follow it’ (or at least, it need not—the pheromone acts as a mere switch, simply triggering the appropriate behaviour, as for instance in an expert system). But without such beliefs, there’s nothing to turn inwards; while an ant may autocue itself by producing a pheromone that triggers food-gathering behaviour in itself, this is not accompanied by some belief that it has found food, because it has no way to produce such a belief.”
I think you’re arguing that some difference-in-kind exists between the process you describe for the ant and the analogous process in humans (pattern of behaviors in person A engenders certain behaviors in person B). Scott is asking what that difference-in-kind is. I think we all agree that humans are capable of a wider variety of behaviors than ants, but it seems to me that they are the same sorts of behaviors, and that the kinds of explanations that suffice for ant behaviors suffice for human behaviors. If the ant “has no way of producing such a belief” regarding food, but the human does, what is the mechanism by which humans produce such a belief?
It could simply be that intentionality is the way the brain encodes percepts about its own neurological states, analogously to how color is the way the brain encodes percepts about the frequencies of light striking the retina. If that is the case intentionality as a set of neurological states can be real without being transcendental, and the process of understanding intentionality can be merely scientific instead of being philosophical or theological.
“I think you’re arguing that some difference-in-kind exists between the process you describe for the ant and the analogous process in humans (pattern of behaviors in person A engenders certain behaviors in person B).”
No, I’m not; in fact, I don’t believe this to be the case. In that passage, I’m pointing out that without the aboutness of social cognition, it does not suffice to subserve metacognition of the sort we take ourselves to have; but since Scott’s analysis leads to the understanding of social cognition which includes some form of aboutness being wrong, then we can’t justifiably use the concept of social cognition in order to ground metacognition, since we don’t know how social cognition works anymore.
Take first social cognition the way it seems to us that we have it: it includes, say, propositions of the form ‘Jochen wants to open the fridge, because he is hungry’. Such propositions are intentional; they are about something, they refer to something, etc. Hence, social cognition has aboutness, it is intentional (although for some reason Scott seems intent on trying to argue the two are different, by all definitions I’m familiar with, they’re basically one and the same thing). Due to it being about something, we can use it in a story to explain metacognition (call this the ‘sc -> mc story’ for reference): we use our capacity for making up stories explaining the behaviour of other human beings on ourselves, making up similar heuristic stories about ourselves. We understand this story, because we have an understanding of all the terms involved in it, in particular of aboutness, and hence, we can see that this story is sound.
But now, Scott points out that the metacognition we get this way is under no obligation to track real features of ourselves; instead, it need merely be useful. Hence, the stories we tell ourselves about ourselves are in general elaborate fantasies, able to facilitate our survival in an environment in which we have no way to track all the relevant information, but not capable of giving a clear picture of what actually goes into the making of these stories. Thus, we seem quite puzzling to ourselves—we appear to have certain capacities, such as aboutness, that resist our metacognition; that is, we are constitutionally incapable of giving an analysis of things like aboutness because really, deep down, there are no such things; in fact, they don’t even make sense.
Now, on first blush, this seems very promising: all the puzzling features about ourselves are really just outcrops of our faulty metacognition, which is faulty because it is just an evolved, heuristic capacity of social cognition turned inwards. Thus, we can conclude that actually, the puzzling features are not puzzling at all; they merely seem that way to us.
But this is too quick. The reason is that, in a sense, the start of the whole story (the sc -> mc story) rests upon taking these puzzling features for being genuine; that is, we can only understand the sc -> mc story if we understand it in terms like aboutness. But given that these terms do not actually refer, that there is not actually something like ‘aboutness’ in the world, we lose this understanding—we no longer have any grounds on which to say that the sc -> mc story is sound, because it rests on terms our analysis has just shown to be meaningless. But this story is where the whole enterprise started; hence, if we follow this thinking to its conclusion, the conclusion it yields is that the starting point we took was never justified in the first place (or rather, that we have no way of assessing whether it was). Thus, in particular, we are no longer warranted in concluding that our puzzlement about aboutness is just due to imperfect metacognitive capacities, because we no longer have any justification for these capacities; indeed, we don’t even know what ‘metacognitive capacities’ could conceivably be. And it just goes round and round like that.
It’s really just analogous to saying, ‘I’ve discovered a fundamental truth about us: we can’t discover fundamental truths about ourselves’—if it is really true, it can’t have been discovered; if it has been discovered, it can’t be true. If sc -> mc is right, then there’s no aboutness (as it seems to us); but without aboutness, we have no way of telling if sc -> mc is right.
(It also won’t work to consider some alternative foundation: one could, for instance, just posit that we have imperfect metacognitive capacities for whatever reason, and then argue from there; but still, the conclusion would simply be that we don’t know anymore what ‘metacognitive capacities’ are and how they lead to false self-assessment.)
I think you can make a convincing argument either way. The social cognition mechanisms that were exapted for science were previously used for religious storytelling; after all, science is a more advanced form of storytelling based in observation. Science still finds itself at odds with religious creation stories. Better understanding these mechanisms, which are universal (as are the religious myths), may go a long way. The neural correlates are low-hanging fruit and yield minimal insight while phenomenality, glial cells, microcolumns, etc., are still hanging high in the tree.
We’re blind to the brains of other people, in the sense that we can’t directly observe the kind of neurological activity described here:
but that’s okay, because nothing that happens inside your head matters to anybody else until it gets expressed in behavior. The things that matter to other people regarding your neurological activity are the things that can be translated into motor activity. If you can perceive the motor activity of other people you can in principle infer everything about their neurological activity that is worth knowing. We can perceive much more about our own motor activity than we can about most other people’s motor activity. If social cognition is merely inference regarding the perceived motor activity of other people and self-cognition is merely inference regarding our own perceived motor activity we should expect self-cognition to be more accurate than social cognition.
Regarding the video linked above, it describes the neurological process that goes into making a choice and explains how that process can yield suboptimal results if it receives too many inputs. One of the things that struck me about it is that choice is a black box. It receives a sensory input such as an apple, an orange and an instruction to choose between them. It outputs a choice. That output has to go to some location elsewhere in the brain that controls the motor apparatus needed to reach for or ask for the chosen fruit. Intention is the signaling by which the choosing module outputs instructions to the motor apparatus by which that choice is exercised in the world. Intentions in the sense of neurological activities that link sensory inputs to motor outputs are real things, but not transcendental. If the act of grabbing an apple is about that apple then it seems reasonable to say the neurological activity that makes grabbing an apple possible is also about that apple. We become aware of our intention to grab the apple the same way someone else would, by seeing ourselves reach for the apple. Intentionality is real, but in a straightforwardly natural way. The inferences we make from perceived sensorimotor activity to hypothesized neurological activity are heuristic in Scott’s sense of the term.
It would be monstrous but interesting to trace the pathways between the choosing apparatus described in the video and the parts of the brain that control activity and sever them to see if subjects lose the subjective sense of having intentions.
“Intentionality is real, but in a straightforwardly natural way.”
If that were true, you wouldn’t need BBT, since it concludes that we can’t give a naturalized account of intentionality because the concepts we use to describe it—aboutness, reference etc.—are fundamentally misguided.
But it’s also easy to see that things aren’t quite that simple as you make them out to be. For instance, what is 00101101011001 about? What is ‘der Hase auf dem Hügel’ about? What is this sentence about?
The answer is, in each case: nothing, not intrinsically. The final sentence seems to be about something because you understand English; if you understand German, the one before also seems to be about something to you; and well, if you interpret it as a binary number, then maybe even the digit sequence is ‘about’ that number to you, in some sense. But in each of these cases, it is in fact you that lends your intentionality to them—you understand them as something, and understanding-as is an intentional act. To somebody decoding the digit sequence differently, it will be about something else; someone speaking a language outwardly identical to German, but with different conventional word meanings, will interpret the second example as something else; and so on. The intentionality does not lie with the symbols, but with the way they are interpreted—only a creature that is already intentional can confer derived intentionality on them.
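The interpretation-dependence described above can be made concrete with a minimal sketch (my own illustration, not from the discussion; the two-bit alphabet mapping is entirely arbitrary, which is exactly the point): the same digit sequence yields different ‘contents’ under different decoding conventions, and the symbols themselves settle nothing.

```python
# The same bit string, read under two different (arbitrary) conventions.
bits = "00101101011001"

# Interpretation 1: the string as a binary numeral.
as_number = int(bits, 2)  # 2905

# Interpretation 2: each pair of bits as an index into an arbitrary alphabet.
alphabet = "abcd"
as_letters = "".join(
    alphabet[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)
)  # "acdbbcb"

print(as_number, as_letters)
```

Neither reading is ‘in’ the bits; each is supplied by the decoding scheme the interpreter brings to them.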
But now consider the fact that our mental states seem, to us, to be intentional, that is, to be about something. You wish to analyze our intentionality in terms of the object-directedness of our actions. So imagine a robot, grasping for an apple. It has the kind of object-directedness you propose to account for intentionality; but its state of mind can be entirely accounted for in terms of, e.g., symbol patterns, such as binary digits. It’s just a long sequence of 0s and 1s.
Moreover, it’s not tied to apple-grasping in any special way. The same pattern could be part of a computer program tasked to do your taxes, for instance—its interpretation is arbitrary. So what is it that could account for this pattern to be about apple grasping to the robot? You might be tempted to say that it’s the robot’s action of grasping the apple; but keep in mind that the robot only has access to its action by means of its mental state, that is, it is presented with the apple-grasping action only in terms of another pattern of 0s and 1s. So it might well associate the bit pattern that induces its performing the apple-grasping motion with the bit pattern that results upon observing itself grasping the apple; but the result would merely be a larger bit pattern, whose interpretation again is arbitrary.
Now you, an already-intentional being, may imagine linking the bit pattern describing the robot’s state of mind with its action; but this is only possible because, to you, your state of mind is about that bit pattern, is about the grasping action. You substitute your own intentionality for the robot’s, and since it’s basically impossible to think in non-intentional terms, you don’t notice doing this—again, it’s a fish-in-water problem: intentionality is such a ubiquitous feature of our thought that we hardly ever realize that everything we cognize, we cognize from an intentional perspective, by taking it as the content of our minds. Thus, in your story, you suppose that the robot knows it’s grasping an apple, because you know that, and it seems inconceivable to you to imagine a perspective on which there is not this knowledge, simply because that wouldn’t be a ‘perspective’ in any sense of the word and we imagine the world in perspectival terms; but supposing the robot knows it’s grasping an apple is supposing that the robot has intentionality.
Now, Scott, I believe, sees this problem, but thinks it’s a pseudo-problem: this kind of aboutness I keep yapping on about, the reference, things like understanding-as or perspectives are merely artifacts of a miscognition of ourselves and the world we’re in; those concepts just don’t refer, and that’s the only reason they appear mysterious to us. There’s no story capable of naturalizing them, but that’s just because they fail to apply to the world, and to us. On first blush, this seems indeed capable of addressing the problems I outlined above—there’s no intentionality in the robot grasping the apple; but there’s likewise no intentionality (of the sort it seems to us we have) within us, and whatever is there is capable of being made sense of in naturalistic terms that we, however, don’t have access to.
It’s a seductive story in that it allows us to brush away the problems without having to come up with a solution; and indeed, one might think that the failure of coming up with a solution after this much time spent thinking about the issues is quite neatly accounted for in this way.
But the problem is that the story is only intelligible if we know what things like ‘miscognize’ and the like mean; but BBT explicitly denies this. So, effectively, it denies the terms in which it is itself formulated, and thus, if it were true, renders its own formulation meaningless, and hence, impossible to believe. So either there is aboutness in the sense we appear to have it, and there exists a natural account of it—then the story of BBT is intelligible, but false. Or, there is no such aboutness, the term is confused—then, the story of BBT is simply unintelligible, even though we might ‘believe’ we understand it.
First, thank you for taking the time to reply at such length. I consider myself honored.
Second, Scott has written on numerous occasions in this blog that intentionality provides a useful, indeed an indispensable set of heuristics for managing day-to-day human life. I think what he has said about intentionality is that it does not exist in the supernatural way some philosophers and theologians have claimed, and that therefore attempts to use intentionality to explain human minds are doomed to failure. It is not “aboutness, reference etc.” that are fundamentally misguided. It is the belief that they have explanatory (rather than merely descriptive) power with regard to human mental life that is misguided.
Regarding “der Hase auf dem Hügel” and “00101101011001” I agree that the ink squiggles, patterns of compression and rarefaction in the air or patterns of lighter and darker on a monitor do not have intrinsic efficacy. Firstly, those squiggles and patterns have to be seen and heard, that is to say we have to generate action potentials in our optic and auditory nerves that correspond to the squiggles and patterns. Linguistic acts begin with sensory percepts.
The squiggles and patterns are objects, and perceptions are object-directed toward the squiggles and patterns in the same way that the human or robot hand is object-directed toward the apple as it grasps it. You argue that the state of the robot’s mind can be accounted for by the software running on its CPU and controlling the servo-motors that enable it to grasp the apple. I agree. At least in so far as this particular instance of apple-grasping is concerned, I agree for the human as well. One difference between humans and robots is the fact that humans have much better CPUs, able to manage a complex sensorimotor task such as grasping an apple while simultaneously performing a complex perceptual task such as checking the apple for blemishes. Because human brains are not unitary, parts of the human brain can have object-directedness toward other parts of the human brain. This capacity allows us to simultaneously dredge up our best guesses on the test and keep track of how confident we are in the various answers we have given. It also allows us to simultaneously grasp the apple, inspect it for blemishes and wish it was a pear. I will have to grant you that while roboticists have done well at teaching machines to grasp apples and inspect them for blemishes they have made very little progress on machine wishing.
Regarding “the problem is that the story is only intelligible if we know what things like ‘miscognize’ and the like mean; but BBT explicitly denies this” I don’t read Scott as claiming that intentional language is meaningless. I read him as claiming that intentionalism is heuristic, a fast, computationally cheap way of solving social interaction problems without being able to perceive or comprehend the neurological activity that underlies social interaction. One of the reasons I posted the Waterloo Brain Day lecture is to demonstrate what the alternative (slow, computationally expensive, able to perceive and comprehend the neurological activity) looks like. I think human beings do have the capacity for intentionality in the merely neurological sense described above. I don’t think this merely neurological intentionality is incompatible with BBT’s claims that no supernatural intentionality exists and that therefore intentionalist language cannot explain the mind.
“The squiggles and patterns are objects, and perceptions are object-directed toward the squiggles and patterns”
But perceptions are themselves merely squiggles and patterns (of neuronal activity, say). How does their object-directedness come about? A neuronal firing pattern is not in any way more intrinsically about something than a pattern of brush strokes on a piece of paper, so appealing to the aboutness of the former to explain that of the latter merely kicks the problem up the ladder one rung.
“It is not “aboutness, reference etc.” that are fundamentally misguided. It is the belief that they have explanatory (rather than merely descriptive) power with regard to human mental life that is misguided.”
But aboutness is used as a term with explanatory impact in the formulation of BBT, since it appeals to social cognition and metacognition, which are cognition about other agents and cognition about cognition, respectively. BBT aims to break us out of our intentionalistic delusions by noting that social cognition leads to faulty metacognition, that is, that the object of metacognition is not as it appears to us; but to do so, it must first appeal to that metacognitive object, and hence, to aboutness. So if aboutness doesn’t have any explanatory power, then this story doesn’t get off the ground.
“I don’t read Scott as claiming that intentional language is meaningless. I read him as claiming that intentionalism is heuristic, a fast, computationally cheap way of solving social interaction problems”
On such a reading, however, the story is circular: BBT aims to provide an account of the mysterious powers of aboutness by reducing it to the product of faulty metacognition; but if faulty metacognition needs aboutness in any form, even as a heuristic, to work, then nothing is being explained. And besides, Scott has at several occasions explicitly denied the reality of anything like aboutness:
“Cognition is never ‘about’ anything. About is an artifact of one way we cognize cognition, given the fact that we have no access to the facts of cognition.”
This, to me, straightforwardly implies that metacognition is not about cognition, since it does not possess any aboutness at all; but the story of BBT is only intelligible if we understand metacognition as being about cognition—otherwise, there simply is no account of how we acquire our faulty beliefs about our own cognition. Even this very quote first denies aboutness, but then nevertheless talks about ‘cognizing cognition’.
My claim was that the act of (for example) grasping an apple created an object-directedness relationship between the apple and the grasping hand. The object-directedness of the neurological precursors to the grasping is derived from the object-directedness of the physical relationship between hand and apple. If one denies the object-directedness of the primary physical relationship or denies that object-directedness can be transitive in this manner one is left wondering how mental activity can be about the world outside one’s head. One can conclude that mental activity is not actually about the world, or one can conclude that aboutness operates through natural mechanisms yet to be explicated, or one can argue that aboutness operates through supernatural mechanisms yet to be explicated.
If we accept the evidence of our senses to the effect that human action does affect the physical world I don’t think we can consistently hold that the mental activity that makes that physical activity possible does not affect the world. I think we have to accept that thoughts and actions can be about the world unless we are willing to claim that the world outside our minds does not exist.
The idea that phenomena operate through natural mechanisms yet to be explicated has a long and honorable scientific history. Newton’s description of gravity allowed a great deal of work to be done in areas from ballistics to celestial mechanics without any knowledge of the mechanism whereby massive bodies were attracted to each other. Similarly, biology has made a great deal of progress without an understanding of the origin of life. This suggests that neuroscience should continue to study the operations of the brain while others pursue the mechanism whereby the mind interfaces with the world on a parallel track.
The idea that phenomena operate through supernatural mechanisms also has a long and honorable history, although that history is not scientific. I believe that one must accept the legitimacy of supernatural explanations before one can determine how to choose between them. Because I do not, supernatural explanation is a subject about which I ought best to be silent.
If we provisionally grant that Blind Brain Theory is incoherent because it claims that intentionality does not exist but states that claim in intentional language we still can ask whether intentionality exists and if so what is its nature. If one determines for reasons outside of Blind Brain Theory that intentionality does not exist the charge of incoherence loses much of its sting. If Blind Brain Theory’s central claim that intentionality does not exist is true then the incoherence charge can be dismissed as merely stylistic and the problem of constructing a non-intentional formulation of Blind Brain Theory can be pursued on the sort of parallel track mentioned above. If intentionality does exist the inability (thus far) of philosophers to offer a plausible non-supernatural mechanism for its operation is at least as big an intellectual liability as Blind Brain Theory’s purported incoherence.
My own sense regarding intentionality is that it exists in the simple mechanical manner I described in my previous comments. If human actions can affect the world those actions can be said to be about the world. If those actions can be about the world the thoughts (neurological activity) that make those actions possible can be said to be about the world. In this I suspect I disagree with Scott. I agree with Scott that no metaphysical force (analogous to the strong, weak, electromagnetic and gravitational forces) is needed to make the aboutness relationship between mind and world possible. I believe that no such force exists and that the aboutness relationship described above need not and cannot be further explained.
So I’d written a long reply to your comments, but the machine elfs ate it… Oh well. Let’s try again.
The sort of story you tell re the object-directedness of grasping the apple is compelling to you, I think, simply because you tell it from an already-intentional viewpoint: the apple, the grasping hand, even the neuronal state of the agent initiating the action are intentional objects of your own mind; hence, you infer their correlation, and attribute them with the directedness you perceive.
But put yourself in the robot’s shoes. The robot knows not apple, knows not hand—all it knows is ones and zeros: a pattern of them, served up by the environment; a pattern produced in response. What occurs in the world as a result is entirely opaque: it could be grasping for an apple; it could be grasping for a banana; it could be fleeing from a predator. You, utilising the perceived transparency of your own mental state, can construct the grasping-the-apple story, and correlate robotic neuron firing with action in the real world, to lend the former directedness; but it is not you who must be capable of this feat, but instead, the robot.
You’ll note that this is essentially a variant of Searle’s Chinese Room, and perhaps prepare to voice your objections to this story. Fair enough; I don’t think it’s sufficient myself (in particular, I think the systems reply is perfectly apt). But Searle’s aim was to show that no algorithm at all can produce semantics from syntax; and it’s the strength of this conclusion in which he overreaches. I, however, merely want to show that your particular proposal doesn’t work: the intentionality is not simply inherited from physical, ultimately causal, relations, because the objects standing in these relations are not present to the robot’s mind. This, I think, is a valid conclusion to draw, because these objects can be (and are) replaced by arbitrary tokens; but it is not tokens that the mind is about (i.e., our thoughts are not about neuron firings).
There are other problems with such causal theories of reference. For instance, they lead to an embarrassing proliferation of attributions of intentionality, even where such seems absurd: consider that the same attribution of object-directedness could be made, with equal justification, if the apple simply pressed down a lever that mechanically instantiated a grasping reaction. But does this mean that there is intentionality in such a system? And if so, is there then intentionality in any causal relationship? Is the falling tree about the axe that felled it? Is the electron’s swerve about the proton’s electromagnetic field?
Furthermore, there is a persistent problem with misrepresentation in such accounts. Apple-grasping is about an apple because it is the apple that causally initiated the grasping; so aboutness is directed at its causal origin. But the aboutness of our thoughts can be misplaced: we can believe we are reaching for an apple, while in fact, we are reaching for an oddly-shaped pear. But if it was in fact the pear that initiated the grasping, then it would be pear-grasping, not apple-grasping. It doesn’t help to note that it was merely the pear that falsely initiated a mechanism usually used for apple-grasping, because in that case, the mechanism is not triggered in the presence of apples alone, but in that of apples-or-oddly-shaped-pears; and hence, apples-or-oddly-shaped-pears are causally responsible for its activation; but then, such grasping is not apple-grasping, but apple-or-oddly-shaped-pear-grasping. But the mental state of an agent grasping a pear she takes to be an apple is not about an apple-or-oddly-shaped-pear, but about an apple that simply fails to be there.
Or for that matter, what about my belief that ‘Sherlock Holmes is the world’s greatest amateur detective’? How could it be directed at Sherlock Holmes, if there is no Sherlock Holmes, and hence, if Sherlock Holmes has no causal powers? You might want to say that it’s instead the concept, the fiction of Sherlock Holmes that I’m thinking of; or else, that my thoughts about Sherlock Holmes are merely triggered by the sign ‘Sherlock Holmes’ as a series of brushstrokes. But I’m not talking about a concept, or a sequence of signs—neither concepts nor signs are the sorts of things that can be amateur detectives. No, my mind is directed at Sherlock Holmes, who’s a human being, a man, who smokes and plays the fiddle, and who just happens not to exist—and hence, fails to be the sort of thing that could be ‘graspable’ in any physical sense.
So that’s why I don’t think your notion of aboutness does what we need it to. But that’s not to say that I don’t believe a naturalized account of intentionality is possible: I just think it’s hard. Now, both you and Scott seem to think that there is some kind of maximum hardness to problems, before one best considers them pseudoproblems; that if we haven’t cracked it by now, we never will, and that thus our failure to do so is an embarrassing failure to direct our attention towards the proper matters. I don’t think that’s the case. It’s taken us a couple of thousand years to even really get started on getting a grip on the fundamental nature of matter; if at any earlier point, people had taken your reasoning on board, we never would have gotten this far. And I think that’s one of the really easy problems, concerning the simplest sort of systems out there—so who knows how long it might take to get a grip on some less trivial matters?
Every age so far has taken it for granted that the full and complete knowledge about the world is just around the corner. Every age up to ours has been wrong. I see no reason why we should fare any better. So I say, let’s give it a couple of hundred thousand years more before we throw in the towel.
So now let’s get back to Blind Brain Theory. I’m happy to see that I’ve managed to make my argument intelligible; in my discussion with Scott, I’d become quite disheartened at my failure to communicate, and the resulting mutual missing of points. But I don’t think that just disbelieving in intentionality from the outset is of any help: then, you simply can’t formulate the theory in the first place, since it’s simply not clear what things like ‘metacognition’ or ‘faulty beliefs about ourselves’ might mean in the absence of intentionality. And of course, you would have no need for the conclusions of BBT, either.
On the other hand, if one were to find a formulation of BBT in nonintentional terms, then one would have found a way to explain how we can come to have beliefs about ourselves in nonintentional terms; which simply means that one would have solved the problem of intentionality. Again, then, one would have no need for BBT anymore. So of course, if that’s the outcome of working on BBT, that it becomes the Wittgensteinian ladder we must throw away in order to solve the problem of intentionality, then I’m all in favour of pursuing it; but personally, I think that there are more promising alternatives.
I just don’t understand what evidential grounds you have for believing in intentionality aside from ‘that’s how it feels,’ Jochen. All the ‘apriori’ arguments you offered begged the question. What we know about metacognition disqualifies ‘how it feels’ arguments (though some analytic philosophers, like Uriah Kriegel, continue to argue for intentionality on the basis of minimal introspective access).
Meanwhile, the question between us, the question of whether alleged intentional phenomena actually possess apparently impossible properties or only seem to given our metacognitive limits decidedly breaks in my favour. Yours is far and away the more extravagant ontology, one that can explain nothing. My more parsimonious ontology, meanwhile, can actually explain the failure of your ontology, as well as why it strikes so many as ‘necessary.’ It can actually map ‘mind’ across the brain.
That leaves you two thousand years of confusion in your favour, and only the promise of more to come.
The problems you raised with semantic externalism are problems for intentionalism, not for BBT, where things like the disjunction problem or swampman simply do not come up. Meanwhile the problems I’ve raised for you (in addition to all the problems your view inherits) still remain unanswered. You accept that intentional cognition is heuristic cognition, that it solves by neglecting what’s going on, yet you insist that only intentional cognition can tell us what’s going on with intentional cognition. I’m saying this explains the millennia of futility.
All I see are theoretical liabilities and vague promissory notes in your position.
Scott, I’m sorry, but I’m really not sure how to explain myself further to you. Michael above seems to have grasped my argument perfectly, so I have at least one data point that the failure to communicate isn’t entirely my own.
But anyway, let’s try again. I think you take me to make some kind of metaphysical argument towards the reality of intentionality; at least, that’s the only way I can understand how you could get the impression that my account was question-begging (and even more so when you claim that my ontology is less parsimonious than yours—I was quite shocked upon learning that I had proposed one!). (Also note that parsimony isn’t everything: ‘nothing exists’ is maximally parsimonious, but also wrong.)
But that’s not my argument, at all. At its core, my argument really is quite simple: the story of BBT can only be understood on the basis of aboutness; but this aboutness is what it denies. Hence, it is fundamentally incoherent in its formulation. Roughly, if the brain is subject to distrust, then so is reason; thus, trying to reason yourself into a position of distrust for the brain is at the very least problematic, because reaching the desired target means the road you took there can’t have existed. It’s worse than Wittgenstein’s ladder, because once you’ve climbed it, you learn that there was no ladder—but then, what is it you’ve climbed?
This is not an empirical argument; hence, asking me for evidence is fundamentally misguided. It also doesn’t rely on a preexisting belief in intentionality—it merely suffices to note that such is inherent in BBT, as it contains formulations pertinent to thinking about thinking, or having mistaken beliefs about our cognition, and so on. But this is already where the mystery lies—how do we think about thinking? It’s simply this aboutness that is in the foundations of BBT’s formulation that I want to have explained, absent of any prior familiarity with concepts of intentionality and so on.
So picture me in the role of the total ignoramus (you might not have to strain too much), to whom you try to explain your theory. You start out by talking about social cognition—something with which I’m unfamiliar, so I ask what that’s all about. And you say, wait a second, I’ll get to that. Then you talk about how this social cognition yields (faulty) metacognition; and again, I ask how that works—how it is that we can come to think about our thinking. How anything can come to be about anything. And then you come with the punchline: due to this derived metacognition being ill-suited to its task, the concepts with which we cognize our own thinking are fundamentally misguided; my earlier questions thus don’t have an answer, since this ‘aboutness’ thing I referred to (you allege) doesn’t actually exist.
But the problem is, it was not I who first utilized the concept of aboutness, it was you—you used this concept to formulate your theory, and then concluded that the concept doesn’t refer! Your story is intelligible only if metacognition is thinking about thinking; but from there, you go on to conclude that there is no such thing as thinking about thinking. That is, your premises hinge upon the reality of aboutness, which your conclusion then denies. Given A(boutness), you infer not-A.
What you would have to do in order to make your reasoning plausible would be to give an account of BBT in non-intentional terms—i.e. solve the problem of intentionality, since BBT invariably and ineluctably depends upon intentional terms. Failing that, you’d have to assume that there is some way of making sense of things like ‘thinking about thinking’ that isn’t actually mysterious, that, even though you can’t provide it, there is some naturalized account of intentionality merely inaccessible to us (at the moment, or in principle). But since that’s your conclusion, it would then be your account that is question-begging.
“At its core, my argument really is quite simple: the story of BBT can only be understood on the basis of aboutness; but this aboutness is what it denies. Hence, it is fundamentally incoherent in its formulation.”
This is precisely the argument I take you to be making, the same tu quoque argument that intentionalists always make. And it begs the question. Clearly so. I’m saying ‘aboutness’ is a heuristic, a way of simplifying complex relations, and not a mysterious property requiring a fundamental rethink of our understanding of the physical universe to comprehend. The fact that BBT can be more efficiently understood via this heuristic, simply evidences the explanatory comprehensiveness of BBT, does it not? How on earth does it evidence your position, and so lapse into incoherence?
Given that the question of whether intentionality is a heuristic or an inexplicable property is the issue between us, how does the tu quoque do anything more than beg the question? I can parrot the exact argument back to you: “At its core, my argument really is quite simple: the story of original intentionality can only be understood on the basis of aboutness; but this aboutness is what it denies. Hence, it is fundamentally incoherent in its formulation.” In making this argument I’m begging the question because I’m supposing that my heuristic interpretation of aboutness–the very issue to be decided–is true, to settle the question of whether my heuristic interpretation is true. On my account, every time you deploy the term ‘aboutness’ you are deploying a heuristic mode of problem-solving. So if you need to use aboutness to deny this heuristic account, you are using the very heuristic you say doesn’t exist to make your case, are you not? Ergo, your view is incoherent.
But I don’t make this argument because it’s vacuous. And the thing is, I’ve made all these points already.
You managed to get Michael to buy into a couple faulty intuitions, to take the traditional philosophical problem of knowledge as his starting point, rather than, as I do, actual examples of scientific knowledge. The problem of other minds, or the external world… these trouble science not at all. Why take them seriously? Especially once we know how prone reflection is to get things wrong. Far better to regard them as reductios of their starting premises, I think.
“I’m saying ‘aboutness’ is a heuristic, a way of simplifying complex relations, and not a mysterious property requiring a fundamental rethink of our understanding of the physical universe to comprehend.”
And as I’ve pointed out, for that to work, you need to have an account of ‘thinking about thinking’ in terms of heuristics, or whatever else you think actually underwrites the apparent aboutness; otherwise, you’re merely saying ‘suppose an answer to the problem of intentionality exists; then, an answer to the problem of intentionality exists’. So, how am I to understand ‘thinking about thinking’ if not in terms of aboutness? Or, in other words, what is the solution to the problem of intentionality?
“I can parrot the exact argument back to you: “At its core, my argument really is quite simple: the story of original intentionality can only be understood on the basis of aboutness; but this aboutness is what it denies. Hence, it is fundamentally incoherent in its formulation.””
This is incoherent. I’m not denying anything; in fact, I do believe that there is a naturalized account of intentionality. But assuming there is from the get-go doesn’t get us anywhere.
I don’t have to provide an account to dispute your account, any more than I have to believe in evolution to disbelieve in God. I can deny original intentionality on a great number of grounds (like those used by Rosenberg, say), such as the failure to get anywhere after thousands of years, the way it contradicts the second law, and so on.
Having an account simply helps my case, which is why I’ve given it to you several times now, depending on the ‘term du jour.’ Thinking about thinking amounts to cognizing cognition, which amounts to metacognition, as described in blind brain theory. Jochen, you just keep repeating the same invalid move over and over again, assuming your interpretation as a condition of possibility of any interpretation, which is to say, assuming your conclusion to evidence your conclusion. I say aboutness is this. You say, A ha! You have to think about aboutness to think that! I say, thinking about aboutness is this. You say, ‘A-ha! You have to think about thinking about aboutness to think that!’ This is all you have done. In each case you simply reapply the heuristic to the previous application of the heuristic. And all I’ve done is try to get you to see this.
In the meantime you have offered nothing in the way of the evidence for original intentionality I had asked for. If this is your evidence, then you simply have none, only a failed tradition based on a myopic metacognitive capacity.
You need an account of original intentionality as well. You keep trying to reframe things to put BBT in an explanatory hole but how can it be the case that BBT has a higher bar to jump than your particular theory of original intentionality? How do you explain ‘thinking about thinking’ without presupposing a heuristic account of ‘aboutness’?
“This is incoherent. I’m not denying anything; in fact, I do believe that there is a naturalized account of intentionality. But assuming there is from the get-go doesn’t get us anywhere.”
You’re denying my heuristic account of aboutness, are you not? Since you need to use ‘aboutness’ to make that denial, you are clearly (given the argumentative form you’re foisting on me) presupposing the truth of the very thing you’re seeking to deny, are you not? I can just as easily claim that your usages presuppose my interpretation of aboutness as you can claim that my usages automatically presuppose your interpretation. In other words, the incoherence is the point. How can you not see this?
“I don’t have to provide an account to dispute your account, any more than I have to believe in evolution to disbelieve in God.”
Yes you do, to the extent that you want to be able to use intentional terms intelligibly. Otherwise, if you simply say that there is no aboutness, but talk about thinking about thinking (or any of its synonyms), you’re using meaningless words.
“In the meantime you have offered nothing in the way of the evidence for original intentionality I had asked for.”
I have not anywhere in any way argued for the existence of original intentionality, so I neither need evidence for it, nor an account of it. As I’ve tried to clarify in my last post, I don’t even believe it exists; but this doesn’t change anything about the incoherence of your account.
What I have done is to point out that the formulations you use include reference to aboutness; and this must be explained in order to make your formulations intelligible. I’m very much not saying that you have to think about aboutness, or presuppose it, or stipulate it as a transcendental condition, or anything like that: I’m simply pointing out that you do rely on aboutness in order to tell your story.
“You’re denying my heuristic account of aboutness, are you not?”
You haven’t given any account of aboutness; you’ve claimed that what appears intentional to us merely does so because of heuristics, but that in itself of course does not tell me how, for instance, thinking comes to be about thinking (which is, again, what you stipulate in order to make your account work). It’s nothing but saying that ‘some account exists’. It’s like, if I ask you how the TV works, you reply, via a mechanism.
So I ask again: how do I understand the phrase ‘thinking about thinking’ (or its equivalents) if there is no aboutness? How do your heuristics do the work they need to do?
“Yes you do, to the extent that you want to be able to use intentional terms intelligibly. Otherwise, if you simply say that there is no aboutness, but talk about thinking about thinking (or any of its synonyms), you’re using meaningless words.”
I don’t get it. So if I disagree with your interpretation of the word ‘life’ as “The instant of conception,” say, there’s no way I can intelligibly use the word ‘life’?
Does it mean children who use the term ‘about’ are not using the term intelligibly? Does it mean that physicists who think ‘time’ is illusory are using the term unintelligibly because they can’t explain the illusion, only insist that the equations governing their physics can be run forward or backward?
These ‘intelligibility arguments’ of yours simply hold no water. I’ve been hit with many of them over the years, Jochen, some quite subtle, but these just don’t float, my friend. The only way I can be contradicting my own meaning (unintelligible) is if I’m implicitly relying on some other meaning. The only evidence you’ve offered of this is the fact that I’ve used the term ‘about.’ And I’ve replied, yes, ‘about’ is a powerful heuristic. Why shouldn’t I use it? And you say, again, that I can’t because I’m presupposing this other meaning you have in mind. I reply, well, I don’t think so. What evidence do you have? And you stomp your feet once again.
“I have not anywhere in any way argued for the existence of original intentionality, so I neither need evidence for it, nor an account of it. As I’ve tried to clarify in my last post, I don’t even believe it exists; but this doesn’t change anything about the incoherence of your account.”
I’ve noticed you backing away from ‘intentionality’ and focussing on ‘aboutness,’ but your initial counters to me contained no such qualification, so I’m sure you can understand why I assumed you thought ‘aboutness’ referred to a real feature of the natural world. Are you now saying that you agree ‘aboutness’ is a feature of our problem-solving, not of our nature? If you think aboutness does exist in nature, has to belong to nature, then your position is consonant with those professing ‘original intentionality.’ If you want to delineate your particular view (because there are thousands) then we could stipulate ‘original aboutness’ if you like, or ‘j-intentionality.’ Or whatever term you please – we just need to agree. Otherwise the suspicion is that you’re simply moving the goal posts every time the ball moves toward your end-zone.
“So I ask again: how do I understand the phrase ‘thinking about thinking’ (or its equivalents) if there is no aboutness? How do your heuristics do the work they need to do?”
I already told you: thinking about thinking amounts to cognizing cognition via metacognitive heuristics. We have no way of cognizing the mechanics of cognition, so we go about solving those mechanics heuristically, via mechanisms that ignore the mechanics of cognition (there’s no other way for the brain to do this). That’s the function idiomatic usages of ‘about’ discharge. The mechanical specifics of these heuristic systems are something cognitive neuroscience will unravel in due course–without the need for any miraculous twist in our understanding of the natural world, any kind of spooky emergence.
That’s my explanation (which, once again, I don’t need in order to intelligibly dispute ‘original aboutness’ or ‘j-intentionality’). What’s yours?
“So if I disagree with your interpretation of the word ‘life’ as “The instant of conception,” say, there’s no way I can intelligibly use the word ‘life’?”
Well, first of all, I’m merely using the dictionary definition. Second, if you don’t agree with that definition, then you have to provide an alternative—otherwise, you’re just Humpty-Dumptying, and there’s no way to ever reach mutual understanding.
“The only way I can be contradicting my own meaning (unintelligible) is if I’m implicitly relying on some other meaning.”
And that’s exactly what you are doing: you rely on the usual definition of aboutness in order to set up your account, but then claim that this definition is wrong. Let’s break your story down into two parts: first, you introduce your social-cognition derived metacognition. This, in particular, includes a claim that there is something like metacognition, that is, thinking about thinking. Now, I understand that the ordinary way, as thinking that refers to thinking, or that has thinking as its object. Nothing more—in particular, this is not a theory-laden concept: I’m not saying that this aboutness, reference or object-directedness is irreducible; but likewise, I’m also not saying that it is to be explained by teleosemantics, causal theories, or anything like that. I’m simply mute on the issue. (It’s true that I’ve backed away from the use of intentionality, because it caused you to attribute a position to me that I don’t hold; but frankly, I’m running out of ways to try to express myself in such a way that you don’t attribute positions to me.)
The problem is simply that the ordinary understanding of aboutness has issues: it is entirely unclear how something can be about another thing. But this is not something that enters the discussion at this point—I can understand your claim that there is thinking about thinking without reference to any particular theory of how this aboutness comes about.
But now, you want, in part two, to cash in on your premises, and conclude that due to the mistaken nature of metacognition, there is actually no aboutness—no thinking that has thinking as its object, in other words. But if you are claiming that, then you’re saying that my earlier understanding of the first part was mistaken—the metacognition you appeal to can’t be thinking about thinking, because there’s no such thing. But then, I can’t follow your reasoning anymore: your further claim that this metacognition can be mis-directed rests on the earlier claim that it is directed; but now you’re saying that this claim isn’t correct anymore.
So that’s why, in order to make sense of your account, another explanation of the ‘thinking about thinking’ in the first part is necessary, just so that your reasoning has a firm grounding. It’s at this point that we need an explanation of how thinking can be about thinking, has thinking as its object, etc., since on your theory, there’s no such thing; but you need something of that sort in order to justify your account in part one. Again, this is not a claim that there necessarily must be some original aboutness; this is just pointing out that you’ve argued that the ordinary meaning of these words is mistaken, and hence, if you want to use them, must propose a new one.
When I ask you about this new meaning, you come back with:
“I already told you: thinking about thinking amounts to cognizing cognition via metacognitive heuristics.”
So thinking about thinking is cognizing cognition (i.e. thinking about thinking). It uses metacognitive (i.e. pertaining to thinking about thinking) heuristics. This definition, once again, needs an agreed-upon understanding of the term aboutness in order to get off the ground—otherwise, how am I to understand ‘cognizing cognition’ in a way that doesn’t refer to aboutness? Or ‘metacognition’?
Putting this ‘explanation’ of thinking about thinking into part one of your formulation doesn’t change anything—the problem remains, because you’re still appealing to aboutness in order to get your theory off the ground, and then conclude that there is no aboutness.
So, to sum up, your story, part one, depends on the notion of thinking about thinking to be coherent; then, part two, you conclude that the notion of thinking about thinking is incoherent. This is not proposing any theory of aboutness, not proposing that it is somehow part of nature, etc.; it’s just the dictionary meaning of the words you use, and hence, how I understand them. A theory is only needed once you propose that there is no such thing as aboutness as I understand it (or as it is commonly understood), because then you need something to replace it with—else, the first part of your story becomes unintelligible, since you use words you claim have no meaning. So far, however, your proposed alternative again depends on the notion of aboutness.
I have no idea what to make of the ‘dictionary definition’ stuff.
Do you at least understand the mistake you consistently make by my lights? Over and over, you keep trying to find ways of insisting I’m begging some occult definition of aboutness other than my own, suggesting that applications of the heuristic somehow logically necessitate commitment to a nonheuristic understanding of aboutness. Why does applying the ‘aboutness heuristic’ render the aboutness heuristic unintelligible?
“I have no idea what to make of the ‘dictionary definition’ stuff.”
I’m merely pointing out that in order to communicate, words must be used according to their agreed-upon meaning by the language community. So I interpret ‘aboutness’ to mean ‘directed at, pertaining to, having as its object’, in the same way I interpret ‘house’ to mean ‘building within which somebody lives’, or something like that.
You keep trying to insinuate that I am substituting some sort of theory of aboutness to which you don’t hold; but I’m not, not anymore than I’m introducing some theory of houseness into my understanding of ‘house’. I can make the same objection to your theory whether I hold aboutness to be intrinsic, to be explained by teleosemantics, to be merely due to attributions of intentionality, and so on—it’s completely independent of my beliefs about aboutness.
So this:
“Over and over, you keep trying to find ways of insisting I’m begging some occult definition of aboutness other than my own, suggesting that applications of the heuristic somehow logically necessitates commitment to a nonheuristic understanding of aboutness.”
Is just a misunderstanding. I’m not suggesting that your formulation entails a commitment to a nonheuristic understanding of aboutness; I’m merely observing that your formulation uses the notion of aboutness (in whatever way you want to understand it), to then conclude that the notion of things being about other things is as a whole mistaken. And that’s just a logical flaw in the structure of your theory; it has nothing to do with whatever my beliefs about aboutness might be.
Your exposition of metacognition depends on it being cognition directed at or referring to cognition, or having cognition as its object; then later on, you say that nothing ever is really directed at or refers to something, or has something else as its object. But you reached that conclusion on the basis of assuming that metacognition does exactly that. Otherwise, you could not conclude that it provides us with faulty self-explanations.
“Is just a misunderstanding. I’m not suggesting that your formulation entails a commitment to a nonheuristic understanding of aboutness; I’m merely observing that your formulation uses the notion of aboutness (in whatever way you want to understand it), to then conclude that the notion of things being about other things is as a whole mistaken. And that’s just a logical flaw in the structure of your theory; it has nothing to do with whatever my beliefs about aboutness might be.”
This is getting surreal. Where’s the logical flaw, Jochen, outside your insistence on begging the question? My exposition of metacognition depends on it heuristically processing available data. Thanks to this heuristic processing, we can talk of metacognition as ‘thinking about thinking,’ without falling into the trap of thinking, as you do, that ‘aboutness’ is an occult property of the universe. I do not require that metacognition possess this occult faculty to give the above explanation of metacognition. I only require that it be heuristic. Why is this so difficult for you to grasp? Why do you always suppose that I have to be presupposing aboutness as an occult property, and not as a heuristic? And why don’t you see this as an obvious example of begging the question?
Do you have anything other than this interminable tu quoque chicanery? This is all you have, isn’t it? Is this why you insist on pouring so much energy into endless adjusting and tweaking your formulations?
Doesn’t that trouble you? Personally, I bailed from intentionalism precisely because I realized that arguing that people had to be presupposing my ontological interpretations (which like you, I tried to dress up as something not my own at all, but just ‘what X means’) ultimately collapsed into question-begging. Don’t be trapped by this crap. Seriously.
Think about it. Why should I be remotely convinced by any argument that begs the question against me?
“Where’s the logical flaw, Jochen…?”
Well, exactly where I’ve pointed it out for, I don’t know how many times now: you use the concept of metacognition, cognition about cognition, cognition having cognition as its object, to arrive at a conclusion that there is no aboutness, nothing having anything as its object, and so on.
“My exposition of metacognition depends on it heuristically processing available data.”
No. You say that there is some account of metacognition in terms of heuristics; but you have not proposed any way in which heuristics can be used to, e.g., come to conclusions about our cognitive faculties. But without such a mechanism, you’re merely making the bald assertion that some such account exists; and this does not help you at all.
Now I know you’re getting ready to claim that I’m once again merely introducing my own occult prejudices when I talk of ‘conclusions about our cognitive faculties’ and ask for an account thereof; but it’s not me who’s dragging this kind of stuff into the discussion, it’s you, with your talk of metacognition. Whatever it is, it does something, right? Like coming to conclusions about our cognition. That it does that is an integral part of your theory, not something I project into it. That’s how you use the term ‘metacognition’. And your theory has to be able to account for the fact that it does so; but instead, it concludes that something like that isn’t possible.
“I do not require that metacognition possess this occult faculty to give the above explanation of metacognition.”
And again, I’m not ascribing any occult faculty to metacognition, I’m merely saying that metacognition is cognition having cognition as its object. Do you agree with this, or not? If not, then how does metacognition produce conclusions about our cognitive faculties (even faulty ones)? And of course, you’re going to mumble something about ‘heuristics’ again; but this doesn’t answer the question, since it does not tell me how heuristics do what you require them to do, or even if they are able to—your account alone gives me zero reason to believe they are.
“Is this why you insist on pouring so much energy into endless adjusting and tweaking your formulations?”
No, it’s merely to try and finally find a way to circumvent your cognitive blocks, which force you to saddle me with positions I have nowhere defended, or even articulated. You still, after I have pointed out a half-dozen times that it’s false, assume that I am arguing for something like original intentionality. But, still, that’s false.
Anosognosia. No matter how many times I define it, explain it, that definition or explanation contradicts your intuitive understanding of ‘about,’ which you assume, on the basis of divine revelation perhaps, simply MUST belong to any interpretation of ‘about.’ And so you beg the question again and again.
What this means, Jochen, is that you have some block preventing you from understanding heuristic cognition, one that will lock you out of what will very shortly, I guarantee you, revolutionize all our thinking about all these things. You’ll be stuck stamping your foot, saying, ‘Incoherent! Incoherent! The only coherent definition of about is the one I see in my heart! The one you have to use! You have to, because I can’t say why, only that I know you have to!’
The thing is, Jochen, I once had your belief. I understand full well the (faulty) intuitions your attempts to argue turn on. But you can’t even see how anyone could find mine intelligible.
Well, good luck to you, sir. Maybe another thousand years is all it’ll take to figure out your absolutely-necessary-because-something-tells-me-so-even-though-physics-says-it’s-impossible version of about.
Me? My fingers have had enough.
“No matter how many times I define it, explain it, that definition or explanation contradicts your intuitive understanding of ‘about,’ which you assume, on the basis of divine revelation perhaps, simply MUST belong to any interpretation of ‘about.’”
No, I’m still not assuming anything; you’re still attributing a position to me that I don’t hold, which I’ve pointed out to excess by now. Yet it seems that the only way you have to engage with me is to set up strawmen to knock over with great bravado.
All I’m doing is pointing out that you say things like these:
(S1) “The problem is that we have no way of cognizing ourselves AS SUCH: we are natural in such a way that we cannot cognize ourselves as natural, and so we’ve developed a variety of tools for cognizing ourselves otherwise.”
“The ‘original intentionality’ intuition arises because deliberative metacognition has no way of cognizing the heuristic limits of its objects, nor its own heuristic limits.”
“Thinking about thinking amounts to cognizing cognition, which amounts to metacognition”
And these:
(S2) “humans suppose nothing at all, ever, nor do they believe or attribute or desire.”
“Cognition is never ‘about’ anything. About is an artifact of one way we cognize cognition, given the fact that we have no access to the facts of cognition.”
“social cognition does not involve ‘aboutness,’”
So then I say—and that’s all I’m saying:
(J1) Those ain’t consistent.
Which they clearly aren’t: the first group clearly hinges on the existence of thinking about thinking, and the second group clearly denies it. Yet, you claim that the second group is derived from the first.
This argument does not in any way hinge on my ‘divine revelations’ about intentionality, or my belief in original aboutness, or anything like that, no matter how much you would like to tar me with that brush.
But anyway, I guess I’ll just wait around for when your blog posts revolutionize all our thinking.
Do you agree that humans solve complicated problems by positing simples possessing intrinsic properties?
I don’t think so. Personally, I tend to solve complex problems for instance by breaking them up into smaller ones, that can be solved individually.
Could you give an example of where you think we do that?
So for instance, do people generally understand the ‘value of money’ as the product of vast differential processes, or as something intrinsic to money?
That depends on the context. I suppose one could talk that way when merely exchanging money for goods and services; but I don’t think anybody would really say that money has any ‘intrinsic value’. The value that it does have is constituted by the background processes, for which it is essentially a token.
So people are born understanding the value of money in differential terms? For that matter, how long did it take to overcome inherent notions of value in economic theory?
This isn’t rocket science here, Jochen. People have to learn how to look at systems in complex differential terms, do they not?
Is this going to go anywhere?
I’m sorry I snapped at you there. It was late, and I was tired and frustrated; and to be honest, I’m not sure I appreciate being cast as the mark in some sort of mock-Socratic dialogue. But I’m still curious as to where this goes, so what I should have said is:
Money has value, since it can be exchanged for goods and services. Any theory of money must account for this fact. Possible theories that spring to mind are:
–having value is just a property of money that admits of no further analysis (intrinsic value);
–money has its value due to standing for something else that has value, e.g. precious metals (derived value);
–the value of money is due to our belief that money has value (the evaluatory stance); and
–money’s value is due to complex, partially opaque processes in the background (naturalized value).
The point, Jochen, is that human beings, as a matter of empirical fact, automatically attribute ‘inherent powers’ to things to better manage those things. This is why essentialism is the default, why we begin, as small children, cognizing the world around us not as the vast, differential machine that science has revealed, but in terms of entities intrinsically invested with powers. You don’t dispute this, do you?
Well, I had hoped that my example regarding the value of money might help nudge you in the right direction… The point is not what we attribute to things (or don’t)—the point is that money has value qua being something that can be exchanged for goods and services; the nature or origin of that value is a secondary concern, and not what the debate is about.
In analogy, metacognition does what it does by having cognition as an object—that’s simply what metacognition is, and what you require it to be, and repeatedly have claimed it is. Pointing this out does not mean I subscribe to some inherent notion of intentionality any more than pointing out that money has value since it can be exchanged for goods and services commits me to holding to the idea that this value must be inherent.
One is the phenomenon: money can be exchanged for goods and services, hence has value, in that sense only; metacognition has cognition as its object, hence has aboutness, in that sense only. The other is the theory of how this phenomenon comes about: it’s intrinsic, it’s derived, it’s due to mere attribution, etc. About this, I am, and have been, silent.
Now, you might want to say that I’m deceived about the existence of the phenomenon—that there is no aboutness, even in this sense. Likewise, you might claim that money can’t actually be exchanged for goods and services, that things merely conspire to make it appear that way. And that’s fair enough; it’s entirely possible that this is how things are.
But what you can’t do—and what you nevertheless are doing—is then to start out by presuming that metacognition does have cognition as its object, then conclude from there that it doesn’t. That is, you can’t base your conclusion that the appearance of aboutness—in the simple sense of having something as its object—is misleading upon a premise that rests on the existence of just that kind of aboutness. You can’t say that ‘thinking about thinking … amounts to metacognition’, and then claim that ‘cognition is never ‘about’ anything’.
Do you honestly not see the conflict there?
Are you denying the empirical research on essentialism? I don’t understand.
Otherwise, do words need to refer to have efficacy?
“Are you denying the empirical research on essentialism?”
No, I’ve tried to explain, as clearly as I’m able to, why it’s not relevant to the point I’m making.
“Otherwise, do words need to refer to have efficacy?”
They certainly need to have meaning in order to yield meaningful propositions.
So you agree that humans posit intrinsic efficacies when solving otherwise computationally intractable complex systems?
And you agree that ‘about’ doesn’t require a referent to possess efficacy?
Nope. Not in such a blanket form, at least.
Seriously, though, make your argument, if you’re not going to respond to mine. Socratic dialogues never work in the real world.
So then how do humans cognize each other and their environments, if not heuristically?
Depends on what you mean. Certainly, we often use heuristics to model the behaviour of the objects of our cognition; but that doesn’t mean that the act of cognition itself is heuristic. And that cognitive act is prior to our use of heuristics as applied to its objects, and it’s that act that needs explaining.
Do you deny that in order to apply heuristics to something, that something first needs to be an object of our cognition?
In other words, you don’t know, but you’ll be damned if you admit to anything that might commit you to denying intrinsic intentionality (or whatever you want to call it)!
As for your question, not at all. But I do deny that saying ‘object of cognition’ commits me to intrinsic intentionality. I’m just using a heuristic.
Do you deny that the concept ‘object of cognition’ is a heuristic?
Please, stop putting words in my mouth. And I still don’t believe in intrinsic intentionality; the problem is entirely elsewhere, not concerned with the functioning of aboutness at all, but with the fact that you presuppose it, and then deny it. You still haven’t told me, by the way, how you can say ‘thinking about thinking … amounts to metacognition’, and then claim that ‘cognition is never ‘about’ anything’, without that being a flat contradiction.
“Do you deny that the concept ‘object of cognition’ is a heuristic?”
I’m not sure how even to make sense of this; heuristics can only be applied to things that one thinks about, that is, that are objects of cognition. If I am a heuristics-applying system, I need to presuppose that I have objects of cognition; otherwise, I simply don’t know the meaning of ‘applying a heuristic’. So what’s the object of cognition that I apply the heuristic notion of ‘object of cognition’ to? Or what do I apply a heuristic to, if not to an object of cognition? (And no, none of this implies a belief in intrinsic intentionality; everything I say can be equally well read under any other consistent theory of aboutness.)
Like I said, whatever it is you mean by this thing you keep insisting is the condition of possibility for heuristics. What do you want to call it?
Look, this has been a farce from my perspective for quite some time. All you have is the same tu quoque gimmick of insisting the application of the heuristic cannot account for intentionality because there has to be something the heuristic can be applied to, and that has to be some nonheuristic intentionality (or as everyone calls it, intrinsic or original intentionality). The point of applying the heuristic is that the biomechanical story of our actual causal relations to our environments is too complex/inaccessible to be accessed any other way. You’re caught within that heuristic loop, but since you have no metacognitive access to the loop, it seems to you that you’re walking in God’s own reasonable footsteps, that ‘intentionality just is,’ indeed, that it has to be for a heuristic account to even get off the ground. You’ve retreated from maintaining any specific commitments to avoid the charge of question-begging, so you just keep repeating ‘object of cognition’ as if you’re invoking something necessarily more than the application of a mere heuristic. But why? Because there has to be an object? Of course there has to be an object, and of course it has to be causally related to human cognition. But for some incomprehensible reason you think it has to be more than that, on evidence that you cannot produce, apart from foot-stomping.
I get it. I’m not really interested in wasting any more time. Besides, you’ve already demonstrated my point enough. You can keep going if you want…
‘There is no about.’ And if you are a heuristics-applying mechanism, you need to apply those heuristics to cognize any object as an ‘object of cognition,’ do you not?
“Like I said, whatever it is you mean by this thing you keep insisting is the condition of possibility for heuristics.”
It’s not me who posits something as a precondition for the applicability of heuristics; it’s that your own account doesn’t get off the ground, because without it, there are simply no grounds on which you can argue that we draw faulty conclusions about ourselves—because evidently, this necessitates first and foremost being able to draw conclusions about ourselves. How this capacity works is of no importance, but without it, your story has no beginning.
I’ve given you ample chance to try and formulate a consistent account, to try and explain how to apply heuristics without applying them to anything, to draw conclusions without drawing them about anything; but all you have managed to produce in return is sound and fury, saddling me with strawmen, putting words in my mouth, and dodging or ignoring the simple questions I’ve posed that should be trivially answered by your account, if it in fact amounted to anything. That you’ve felt it necessary to resort to such tactics, despite my repeated, polite insistence that the points you were arguing against weren’t the ones I made, really suffices to tell the whole story.
You keep applying the heuristic, then insisting you’re not applying it, that something you would rather not call intrinsic intentionality can only be explicated intrinsically. Over and over and over. I think I’ve had the patience of Job! I appreciate that you don’t see it, that every time you run through it, there it is, as plain as the nose on your face: the inconsistency of a view that requires intrinsic intentionality (or whatever it is that you insist cannot be explained extrinsically) in order to insist that there is no intrinsic intentionality. It all hangs on your unexplained insistence that your intrinsic X (or non-extrinsic X, or whatever it is you take yourself to be talking about) must come first because it has to be ‘about something’ first, for real, rather than causally connected to environments in a way that can only be heuristically cognized. It’s like arguing with a skipping record. I’m saying ‘about’ is a heuristic way to cognize the crazy complexities of cognition, and you’re saying, but cognition has to be about first! continually applying the heuristic in order to ‘demonstrate’ that it isn’t a heuristic.
The thing is Jochen, I’m anything but an ideologue. I’ve bitten many, many bullets in my day, and even still regularly acknowledge the force of different alternatives to my view. The reason I hate BBT is simply that I spent more than a decade arguing against its like. I literally gave up all my pre-existing commitments in coming to this view, and if I could find an honest way out, I would do the same in a heartbeat. You think you’re demonstrating something obvious, and in point of fact you are, but it has nothing to do with the ‘force’ of your (non)argument. Even when I began aping your tactic, insisting that you had to beg a heuristic account to even say things like ‘object of cognition,’ you still didn’t get it. You just don’t understand how your view holds probative force for only those who already agree with you (beg the question with you). It’s clear you never will. My guess is that you’re not the conceding kind.
Unfortunately, politeness possesses no logical force. And at least we established that you in fact have no view. By all means continue begging the question against my view by insisting it’s incoherent because it presupposes something that can’t be heuristic, or however you want to colour whatever it is you think your insight is. By all means continue applying the heuristic to prove that it can’t possibly be true.
You’re just demonstrating my anosognosia claim at this point, bud.
“You keep applying the heuristic, then insisting you’re not applying it, that something you would rather not call intrinsic intentionality can only be explicated intrinsically.”
See, that last part is just flat out wrong: everything I said could have just as well been said by, e.g., a teleosemanticist. What do you think is inconsistent with any naturalized account of intentionality? There isn’t any such thing, because I do believe such an account exists.
All I’m doing is pointing out that your account rests on the notion of ‘cognizing cognition’. This I take as my definition of aboutness: aboutness is cognizing something. And then you bizarrely claim that this commits me to an intrinsic account of intentionality—to which you, then, should equally well be committed.
Let’s go back to the example of money. Your account, in this analogy, starts with ‘money can be exchanged for goods and services’, which I take as my definition of value (hell, you might even say that money can be exchanged for money, to make the analogy closer). You then later claim to be able to draw the conclusion that nothing, in fact, can be exchanged for goods and services, just as you start from cognizing cognition to then conclude that cognition is never about anything. But my pointing out that this isn’t consistent doesn’t commit me, in any way, to the idea that value is intrinsic to money—in fact, it’s entirely silent about the origin of value, it’s merely pointing out that you use the concept of exchanging stuff for goods and services to arrive at the position that such isn’t possible. The question of where money’s power to be so exchanged comes from doesn’t enter at any point. Likewise, you start with cognizing cognition, to arrive at the point that nothing is cognized, because cognition has no object.
Now obviously, my impression that cognizing cognition must mean that cognition can have things as its object may be wrong; I may be deluded about that, just as my impression that money may be exchanged for goods and services may be wrong; but it’s not my impression that I’m starting from, it’s the fact that you use these concepts in that way—you use cognition as if it could be about something, to then conclude that such use has no basis. And again, it may be true that such use has no basis—but you can’t stipulate that on the basis of this use!
There are just two possibilities. When you say ‘cognizing cognition’, it could be the case that cognition literally has cognition as its object; that if this is so, your argument doesn’t hold water should be clear (shouldn’t it be?). Or, ‘cognizing cognition’ doesn’t mean that cognition has cognition as its object—it’s just a heuristic way of talking, a language game, whatever. But then, any conclusion based on the construction ‘cognizing cognition’ does not follow.
You keep claiming that when I say ‘cognizing cognition’ (which is, again, all I’m saying when I say ‘about’), I’m applying a heuristic. But then, so are you (well, if you’re not claiming some special powers of reasoning fundamentally inaccessible to me); and in particular, this means that you have no license to claim for any conclusions that they are logically sound, because a heuristic isn’t necessarily truthful, it just needs to be useful (as you point out so eloquently). The point is merely, that if you are right and cognition can’t have cognition as its object, then your inference that cognition misleads us about cognition doesn’t follow. Your inference is ‘cognizing cognition –> not(cognizing cognition)’.
So, this should suffice to make it clear that I’m not claiming that aboutness is just what I feel it is, that there must be some special sauce that your account leaves out; I’m merely working with the words as you use them in your account, nothing else. I’m not bringing anything to the table here. There’s no unexplained insistence on some intrinsic stuff—I mean, of course, there’s no explanation for that, but there also never was any insistence. Otherwise, point out, in the above, what you think saddles me with a commitment to intrinsic intentionality, or whatever the hell it is you claim I just can’t let go of. There’s no such thing: all I need is your phrase ‘cognizing cognition’.
“The reason I hate BBT is simply that I spent more than a decade arguing against its like.”
That’s a very romantic notion, isn’t it? The prophet, in agony, forced to tell the truth, tearing at his ragged clothes.
But I don’t really care what your prior views were, I don’t care whether you don’t like your own theory but feel forced by powers greater than you to proclaim it; I merely care that the theory you’re proposing can have no consistent account of its own formulation.
“My guess is that you’re not the conceding kind.”
Well, since you seem to be so interested in the opinions we held historically, let me at least disabuse you of this notion: not too long ago, I held a view very close to yours, and Dennett’s, in particular regarding the notion that the impression of the problematic aspects of conscious experience merely originates in a systematic inability to cognize our own inner workings, although the analogy I used is that of a blind spot (with Dennett). But I got better. A (very rough) outline of this view can still be found on my old blog:
http://ratioc1nat0r.blogspot.de/2011/07/consciousness-explained-anyway.html
“You’re just demonstrating my anosognosia claim at this point, bud.”
Isn’t it terribly convenient if one can just believe that all opposition to one’s views merely comes from cognitive defects of one’s interlocutors? I mean, I wish I had such a trump card, but unfortunately, I’m still bound to just using reasoned argumentation in disagreements.
AIIIIEEEEEEEEEEEEE!!!!
Yeah, that’s about what I thought. Anytime you’re faced with actual questions, you just get evasive (though usually less overtly so). Anyway, to finalize this: you have, and had, always the option to end this by simply answering my questions. All you need to tell me is: how does one cognize cognition heuristically?
But of course, you’ve got no answer to that—since it’s just the good, old fashioned problem of intentionality. And yet, your theory depends on the notion of cognizing cognition. So it goes.
Life on the Moebius strip, eh, Jochen? I’ve laid out the magic trick in a number of different ways, and each time you keep insisting that I’m just explaining a trick, not the magic. I have every right to scream, especially given the fact that I’ve been through this very same dance several times with several different people now. I say, ‘Here’s the trick,’ and they say, ‘Yes, but what about the magic! There’s no trick without the magic!’ Explain to me how your argument doesn’t fall into this pattern, and I’d be FASCINATED, truly. As it stands, there’s nothing you’ve brought up that I haven’t encountered (and in far more subtle forms, no less).
No answer then, unsurprisingly. And I don’t doubt that you’ve come across this argument in a more subtle manner; but since that didn’t take, I’ve been trying to be as explicit as possible. Everything that can be said, can be said clearly. Unlike, it seems, how you propose heuristic cognizing of cognition works.
The only magic, so far, is your use of the word ‘heuristic’, which, everywhere you use it, could just as well be replaced by ‘magic’: somehow, you believe that it’ll just do what you need it to; but of course, you don’t actually have any account of that.
Funnily enough, by the way, I just noticed that I even appealed to Anton-Babinski syndrome in my old blog post… I guess that’s a road everybody needs to go down once regarding this sort of thing. Some move on, some don’t.
“If we accept the evidence of our senses to the effect that human action does affect the physical world I don’t think we can consistently hold that the mental activity that makes that physical activity possible does not affect the world. I think we have to accept that thoughts and actions can be about the world unless we are willing to claim that the world outside our minds does not exist.”
This only follows if you take human experience as intentionally conceived to be our theoretical starting point. This entails artificially disconnecting experience from environments, assuming the former possesses some kind of essential, epistemic priority. But this is the consequence of philosophical decisions, not any essential starting point. So the question is simply one of why anyone should accept those philosophical decisions. I don’t. I start with the assumptions of the biological sciences instead, where you have organisms and environments. Trusting philosophical intuitions over these basic, enormously powerful assumptions is something that needs to be warranted, evidenced, even more so now that we understand how treacherous these intuitions can be.
Hello, Jochen and Scott.
My point to Jochen is that the mental activities we gather together under the term ‘intentional’ are neurological activities we don’t understand well enough to describe in neurological terms. If you think that intentionality is something other than or in addition to the neurological, I think Scott (and I, for that matter) would like to know what this other or additional thing is and what evidence you have for its existence. If you agree that intentionality is merely neurological, then I think you and Scott have more in common than not. Your claim that he can’t describe intentional phenomena using purely non-intentional (meaning neurological) language is correct, but I think that his inability to do so merely indicates the fledgling state of neuroscience. I’m not any sort of expert, but I’d guess that neuroscience is about where astrophysics was the day Galileo died. Even if you believe that Blind Brain Theory as Scott formulated it is incoherent, the questions ‘is intentionality something other than or in addition to the neurological?’ and ‘if yes, what is this other or additional thing and what evidence do you have for its existence?’ are still on the table.
My point to Scott is that “the mental activities we gather together under the term ‘intentional’ are neurological activities we don’t understand well enough to describe in neurological terms.” When I say “I’m thinking about my dog” I think what’s really going on is that prior to the current moment I have petted the dog, washed her, played fetch with her and so on. All the sensory percepts associated with these dog-centric activities are stored in my brain in some fashion. When I “think about my dog” some part of my brain is interacting neurologically with the part of my brain that stores the (somehow rendered into memory) dog-centric sensory percepts. Even though our understanding of how sensory percepts are encoded into and retrieved from memory is still rudimentary I think that “thinking about my dog” has to be one part of your brain interacting with another part of your brain. That interaction is what aboutness is. It’s a neurological rather than a metaphysical activity, but it’s not nothing.
If the two of you agree that intentionality is merely neurological then your other disagreements are merely rhetorical and can be deferred pending further scientific developments. If you disagree about whether intentionality is merely neurological you have a disagreement about the fundamental nature of reality which no evidence or argument can settle. You’ll have to fight to the death. (Insert ‘pistols at dawn’ smiley-face here.)
Hi Michael, glad you’re still with us, I had been afraid that the sheer volume of our exchange had overwhelmed you.
As for intentionality, no, it’s nothing beyond neuronal activities, or at least I’d need to see some pretty substantial evidence to be convinced that it is. But I do think that there is aboutness, simply because it’s the simplest explanation for why our mental states appear to possess aboutness. This aboutness, however, is not some irreducible property of the world; ultimately, it will yield to scientific analysis, and we’ll know the story of how it emerges from nonintentional neuronal activities, the same way we now know how life emerges from nonliving constituents. I just allow for the possibility that we might be a long way away from that explanation.
There’s also a positive case to be made for the hope that we’ll get to know the story of how the tiny salty squirts that occur in our brain can come to be about something, or at least can come to seem to be about something, and that’s the fact that we’re universal symbol manipulating agents. Basically, we possess what one might call ‘universal general intelligence’: we’re capable of emulating a universal Turing machine, at least in the limit. But that means that nothing that occurs in a computable way is beyond us: we are in general capable of discovering how any process occurs, as long as it occurs according to computable rules. So if the Church-Turing thesis holds, then I think nothing is ultimately beyond our reach, even if some things may be quite difficult (which is the same difficulty that Turing machines geared to some specific task face in emulating others: one merely of resources, not of quality).
Beyond the threshold of Turing completeness, nothing is fundamentally beyond an evolved intelligence, contra BBT.
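Jochen’s universality point can be illustrated with a toy interpreter: one fixed program that can emulate any Turing machine handed to it as data, the only limit being resources. The encoding and the example machine below are purely illustrative assumptions, not anything from the discussion itself.

```python
# A minimal sketch of universality: the fixed interpreter run_tm() can
# emulate any Turing machine, given only its transition table as data.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """Interpret an arbitrary machine given as a rule table.
    rules: {(state, symbol): (new_symbol, move, new_state)}"""
    tape = dict(enumerate(tape))          # sparse tape; blank cells read '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, "_")
        new_sym, move, state = rules[(state, sym)]
        tape[head] = new_sym
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# An example machine, expressed as data rather than code:
# flip every bit, then halt on reaching the blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "1011"))  # -> 0100
```

The interpreter never changes; only the rule table does, which is the sense in which nothing computable is in principle beyond a universal symbol manipulator.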
“Your claim that he can’t describe intentional phenomena using purely non-intentional (meaning neurological) language is correct, but I think that his inability to do so merely indicates the fledgling state of neuroscience.”
Which intentional phenomena? I gave him a number of explanations, and each time he insisted something (which became ‘x’ in the course of the exchange) was left out. I had simply assumed the problem wasn’t that my accounts lacked sufficient neuroscientific detail, because that criticism (if he made it) would amount to confusing a debate about how best to theorize intentionality with a debate about specific neurophysiological structures, a how-possibly debate with a how-actually debate.
“When I “think about my dog” some part of my brain is interacting neurologically with the part of my brain that stores the (somehow rendered into memory) dog-centric sensory percepts. Even though our understanding of how sensory percepts are encoded into and retrieved from memory is still rudimentary I think that “thinking about my dog” has to be one part of your brain interacting with another part of your brain. That interaction is what aboutness is. It’s a neurological rather than a metaphysical activity, but it’s not nothing.”
‘About’ is not a something, but it’s not a nothing either. Unpacking this quasi-Wittgensteinian claim is central to BBT: explaining how there can be no such thing as intentionality in the natural world, while explaining how it is that intentional idioms do the massive amount of lifting they do. Every single time Jochen tried to reduce BBT to a contradiction, insisting that the elimination of ‘aboutness’ depended on ‘aboutness’ (and he changed his tactics as he went on), it turned on a strawman characterization of the view. There is no such thing as aboutness as a feature of the universe, but as a feature of our cognitive predicament, ‘about’ is clearly a powerful tool.
Otherwise, I’m just not sure what’s to be gained by conceptualizing aboutness in intraneural terms, aside from resituating a naturalistically inexplicable relation between ourselves and our environments into an inexplicable relation between subpersonal processes. It’s all causal, within and without. There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations.
Jochen, of course, is convinced that the only way ‘There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations’ is possible is if this claim is ABOUT something. I say, ‘Sure, easily understanding the relation between the claim ‘There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations’ and the world requires the application of the very heuristic I’m talking about.’ And Jochen says, ‘A ha! To say “Sure, easily understanding the relation between the claim ‘There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations’ and the world requires the application of the very heuristic I’m talking about” is only possible if the claim is ABOUT something!’ To which I reply, ‘Yes. You have just applied the heuristic again.’ To which he replies, ‘A ha!’
And so on, ad nauseam. Like I say, I explain the trick, and he keeps pointing to the instant of apparent magic. The recursive nature of the trick dupes him into thinking that the magic hasn’t been explained: the fact that explaining the trick involves repeating the trick somehow convinces him that something magical remains unexplained.
“I gave him a number of explanations, and each time he insisted something (which became ‘x’ in the course of the exchange) was left out.”
I haven’t said that even once. The problem with your theory is not one of failing to explain something, it’s one of failing to be consistent: it’s formulated in intentional terms (whether they refer or not, are heuristic or intrinsic, doesn’t matter; it’s formulated in those terms), but it denies their applicability (again, whether they are applicable to anything in the world doesn’t matter).
“Every single time Jochen tried to reduce BBT to a contradiction, insisting that the elimination of ‘aboutness’ depended on ‘aboutness’ (and he changed his tactics as he went on), it turned on a strawman characterization of the view.”
And yet, you thought my characterization of BBT was on point when I gave it.
“Jochen, of course, is convinced that the only way ‘There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations’ is possible is if this claim is ABOUT something.”
That’s completely wrong. I’m saying that you can’t say ‘There are no relations possessing the property ‘aboutness’ outside certain philosophical imaginations’ and simultaneously build a theory depending on relations possessing the property ‘aboutness’.
It may be true that whenever I believe something to be about something else, I am merely engaging in heuristic cognition; but you can’t say that in your formulation of BBT, ‘cognizing cognition’ is something merely heuristic and then expect to derive something from there and have it come out true, because heuristics generally aren’t truth-tracking. The claim ‘social cognition turned inwards leads to faulty metacognition’ is true if there is a relation of aboutness characterizing cognition in either case; but if there isn’t, if such is merely heuristic talk (which it may be), then we have no reason to believe that this claim is true—it may or may not be.
You have a theory whose first premise is ‘money can be exchanged for goods and services’ (‘social cognition has other members of our species as its objects’), and produce the conclusion ‘nothing can be exchanged for goods and services’ (‘cognition never has anything as its object’). Now, that conclusion might be true, or it might not be. It may indeed be the case that nothing can be exchanged for goods and services, and that every time that seems to be happening, it’s merely apparent; it may indeed be the case that there are no relations of aboutness (and at a neuronal level, there almost certainly aren’t), and that aboutness is just a high-level heuristic.
But you can’t argue for these conclusions based on premises that require their opposite to be true. And I don’t mean ‘require to be true because there is some inherent value/intrinsic aboutness’; I merely mean that the premises themselves state these things to be true, by saying that there is something like ‘exchanging for goods and services’ and ‘cognizing cognition’, respectively.
Of course, you still might just answer my question and tell me how to heuristically cognize cognition (thereby solving the problem of intentionality and proving BBT false). But without such an account, there is just no reason to believe that ‘cognizing cognition’ yields the outcome you say it does, if BBT is right and ‘cognizing cognition’ is merely heuristic.
“Otherwise, I’m just not sure what’s to be gained by conceptualizing aboutness in intraneural terms, aside from resituating a naturalistically inexplicable relation between ourselves and our environments into an inexplicable relation between subpersonal processes. It’s all causal, within and without.”
I don’t think the relation between subpersonal processes is inexplicable. I think the mechanical causality is the explanation. I suppose what I’m really claiming is that the relations known as ‘aboutness’ are actually causal/mechanical relations between physical objects which are misconstrued because the objects and their interactions can’t be directly perceived.
“As for intentionality, no, it’s nothing beyond neuronal activities, or at least I’d need to see some pretty substantial evidence to be convinced that it is. But I do think that there is aboutness, simply because it’s the simplest explanation for why our mental states appear to possess aboutness. This aboutness, however, is not some irreducible property of the world; ultimately, it will yield to scientific analysis, and we’ll know the story of how it emerges from nonintentional neuronal activities…”
I think that if you agree that intentionality/aboutness is neurological, and you agree that we can’t directly perceive that neurological activity, then you have to agree that something like Blind Brain Theory is true. After all, BBT says we can’t directly perceive the neurological activity of our own or other people’s brains, so we use placeholders like ‘about’ and ‘intention’ pending scientific understanding of the underlying neurological activity.
That’s why I say the differences between Scott’s position and yours are superficial. Some people do think that aboutness is an “irreducible property of the world.” The people who don’t think so are all on the same side.
I’ve given up, Michael. I realize that Jochen is neither insincere nor unintelligent, but on this one matter he cannot see his way clear of the belly of the beast. The recursive moment continually trips him up: he cannot understand how the magic cannot come first, and so, as a result, is bound to see any explaining away of intentionality as a contradiction. He needs to see the sterility of his position for himself.
“I think that if you agree that intentionality/aboutness is neurological and you agree that we can’t directly perceive that neurological activity then you have to agree that something like Blind Brain Theory is true.”
Well, I do agree with Scott in the trivial sense that every materialist must: if there is no intentionality among the fundamental properties of the world, then an account must exist of intentionality in terms of non-intentional properties; since we only perceive the intentionality, not the sub-intentional constituents, we are, in this sense, ‘blind’ to our own brain’s workings. In the same sense, we are blind to the underlying cellular composition of our arms, say.
If this were all BBT claimed, I would have no issue with it (but it would be a rather trivial thing to make so much fuss about). However, Scott further concludes that the problem of intentionality is ultimately ill-posed, that there is no such thing, even in some ’emergent’ sense. We’re so fundamentally misled about the workings of our minds that the terms in which we conceive of it simply don’t refer; hence, intentionality is not something that can be analyzed in terms of non-intentional properties, because it’s ultimately not a meaningful concept at all.
This is where he gets inconsistent, because this conclusion implies that the reasoning he used to derive it was unsound. For one concrete example, he believes that one can move from social cognition, which has as its object other individuals, to metacognition, which has as its object the individual itself. This is well and good as long as we can conceive of cognition, in general, having an object; if this is the case, then the above follows. But if cognition ultimately doesn’t have an object, doesn’t refer to anything, isn’t about anything, then the reasoning becomes specious: we can draw the conclusion only within our own mental concepts; but Scott says that those concepts are misled.
So in particular, there is no reason to assume as true that social cognition can lead to metacognition in the described way just because our concept of cognition tells us so, since that concept is at bottom mistaken. Without the idea of cognition being about something, we lose the only way we had to evaluate the truth of claims made about cognition, and hence, reasoning using the concept of cognition immediately becomes questionable. Scott tries to reason himself into distrust of the brain, without noticing that this distrust implies that the reasoning process itself can no longer be trusted.
Scott, of course, has invested too much into his theory to be swayed by this observation; he then throws up the term ‘heuristic’ and somehow seems to believe that this would fix anything. But one can’t heuristically conclude anything, as heuristics, contrary to logical inferences, are not guaranteed to preserve truth, at least not absent an explanation of their functioning. So in order to make his claims believable, Scott would have to, at the very least, come up with an account of how one heuristically cognizes cognition, in order to demonstrate whether on this account it is justified to simply substitute the self for the other in moving from social to metacognition. But of course, this account is just what he claims can’t exist, namely an analysis of intentionality in non-intentional terms. So he’s stuck in a theoretical bind he can’t escape, and is left attributing strawman viewpoints and cognitive deficiencies to those who disagree with him.
I’m quite familiar with this problem, unfortunately, and so I also know that one can only get oneself out of it, if one ever does; one believes oneself to have made such a breakthrough that others simply fail to think within the new paradigm, and thus are deceived from the outset. I was in the same situation, with a very similar theory, not too long ago (see the blog post I linked to earlier), and I don’t think anybody but myself could have gotten me out of that. So I never did really have any great hope of getting Scott to see the tangle he’s reasoned himself into, and I wouldn’t have kept replying if not for his persistent misrepresentation of my views. It’s one thing to disagree, and quite another to immediately label any opponent foolish and accuse them of being deceived.
Not that I’m not used to it: in discussions with those holding to dualism, original intentionalism and the like, I’m generally derided as being deceived by scientism and reductionism. In a way, pissing off partisans on both sides is the best indication that I’m doing something right, at least. 😉
Perhaps my take on Blind Brain Theory is slightly different than Scott’s. You ascribed to him:
Scott further concludes that the problem of intentionality is ultimately ill-posed, that there is no such thing, even in some ’emergent’ sense. We’re so fundamentally misled about the workings of our minds that the terms in which we conceive of it simply don’t refer; hence, intentionality is not something that can be analyzed in terms of non-intentional properties, because it’s ultimately not a meaningful concept at all.
My take on intentionality is this: When I “think about my dog” what is really happening is that the executive center of the brain queries the information stored in the brain’s memory. Even without knowing much detail about the brain’s executive center and how it does what it does we can be confident that such a brain component exists. Similarly, we can be confident we have stored memories which are logically and physically separated within the brain from the executive. We can be reasonably confident of those two features of the brain because we have had the experience of trying to remember something. The thing I want to emphasize about this is that “thinking about my dog” is actually one part of my brain interacting neurologically with another part of my brain. It’s a perfectly natural sort of neurological activity and scientists are working out the details.
But for most of human history we have not thought of it that way. Most human beings through most of history who have written about this sort of thing have thought that each human being is a single, unitary mind rather than a brain whose components are logically and physically separate within the skull, have different evolutionary histories, and work together only fairly well, except when they don’t (Alzheimer’s, strokes, schizophrenia, NFL football). One of the unfortunate consequences of this unitary mind notion is that since we can’t see, hear, feel, smell or taste the mind, we believe it to be immaterial, but we’re sure it exists. If it’s real but immaterial, then one might reasonably assume that material actions don’t affect it. The death of the body is a material action, so death should not affect it. It’s immortal. Belief in the immortality of the mind (or call it a soul, it’s the same idea) has led to many misunderstandings.
Another unfortunate consequence is the belief that since the mind is unitary, it does not have components, so it does not have inter-component communications. If we believe in the unitary mind the explanation for “thinking about my dog” in terms of intra-brain communications that I offered above is not available to us. Instead we believe that “thinking about my dog” creates an actual connection between our unitary mind and our dog. We give this connection names like “aboutness” and “intentionality.” In effect, we have mistaken a real, but natural neurological connection between parts of our brains for a supernatural connection between our minds and the world. It is this intentionality, this supernatural connection between mind and world, which Scott (rightly in my opinion) says does not exist. The reason BBT seems non-trivial to me is that it’s the first attempt I know of to explain the nature of the misconceptions that make the idea of supernatural intentionality seem plausible.
I don’t know if Scott would agree with my approach to BBT. I’m trying to explain it to myself more than to you or anybody else, and I’m trying to use it to work through some of my own questions, which are more about the nature and origin of religious beliefs than about philosophy of mind. Thank you for your time and attention. I will probably comment here again, because I find that writing clarifies my thinking.
…Or consider language. The previously hypothesized executive center of my brain sends instructions to the muscles that generate speech, or written words, or what have you. These patterns of compression and rarefaction, or squiggles on paper, or electronic offs and ons travel by purely natural mechanical means through a purely natural mechanical medium. The squiggles etc. are perceived by you. That is to say, your optic nerve sends action potentials corresponding to the squiggles to other areas of your brain. A number of other intra-brain communications take place, of the sort I mentioned previously. When I communicate with you using language I’m making a complicated but perfectly natural mechanical connection between your brain and mine. It’s the same kind of thing as “thinking about my dog,” except it’s one part of my brain interacting neurologically, acoustically and electronically with one part of your brain. I said that in “thinking about my dog” we mistake a neurological connection between different parts of our brains for a supernatural connection between our minds and the world, and call that supernatural connection “aboutness” or “intentionality.” We make the same mistake with speech. The natural (neurological/acoustic/neurological) connection between one brain and another brain is mistaken for a supernatural connection between our minds and the world. In this case that supernatural connection is called “semantics.”
If I understand BBT correctly, this mistaking of natural brain/medium/brain connections for supernatural mind/world connections is common among human beings and is the mistake that lies at the heart of the constellation of thought errors that BBT attempts to diagnose.
“Even without knowing much detail about the brain’s executive center and how it does what it does we can be confident that such a brain component exists.”
Well, there are a lot of people who would presumably contest such an assumption. Notably, Daniel Dennett in Consciousness Explained makes a quite forceful case that ultimately, no assumptions of this kind—executive centers, finish lines beyond which content enters into conscious cognition, etc.—can stand.
But that’s a bit beside the point. My problem is the question: what is it that makes the connection between two different parts of the brain—be they executive center and memory, or whatever—appear to be about anything? It can’t be, for instance, that the memory is about the dog—that would simply be circular. And again, nothing in the pattern saved in memory is in any way intrinsically dog-like, dog-appropriate, etc. A different brain could use just the same memory configurations, and have them refer to a cat, e.g.; it’s just a question of encoding.
So what happens in order to make the interaction of one part of your brain with another be about your dog?
I am of course speculating far beyond the tiny bit of neuroscientific knowledge I acquired as an English Lit major several decades ago in college, but do you remember the television show Batman? Commissioner Gordon had a telephone set in his office that rang directly to Wayne Manor when he picked the handset up. It was a point to point circuit, so whenever Commissioner Gordon picked up that phone Bruce Wayne answered. Neither the telephone set in Commissioner Gordon’s office, nor the copper cable pair between the office and Wayne Manor, nor the telephone set in Wayne Manor are intrinsically about Batman, but I can see how Commissioner Gordon can come to associate that telephone with Batman. If our brain has a commissioner it’s possible that whenever that commissioner sees the word Batman or the Batman logo it activates the neural circuits that lead to his stored memories of Batman. Because those neural circuits always lead to Batman memories the neural circuits come to seem to be about Batman.
I agree that a different person could use the memory configuration that represents Batman to me for Green Hornet. However, once I have established that memory configuration as Batman I can’t change it in my own brain to represent Green Hornet. When I learn about Batman I form memories of Batman and I acquire the ability to recall those memories. While I suppose that it is in principle possible to replace my Batman memories with Green Hornet memories and use the process whereby I once recalled Batman memories to recall Green Hornet memories, I personally have never succeeded in deliberately forgetting anything. If I repeatedly use a particular neural pathway to access a particular set of memories, the pathway can seem to be about the memories in the same way, more or less, that a particular telephone can seem to be about the person to whom it connects me every time I pick it up.
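Michael’s ‘dedicated line’ picture of association can be sketched as a simple data structure. All names here are illustrative assumptions, not anything proposed in the thread: the idea is just that a cue which repeatedly activates the same stored traces comes to function as a stand-in for whatever those traces concern.

```python
# A toy sketch of the 'dedicated line' picture of association: cues that
# repeatedly lead to the same stored memories come to seem 'about' them.

from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(set)   # cue -> stored memory traces

    def learn(self, cue, trace):
        """Repeated co-activation wires a cue to a memory trace."""
        self.links[cue].add(trace)

    def recall(self, cue):
        """Activating the cue retrieves whatever it has been wired to."""
        return sorted(self.links[cue])

gordon = AssociativeMemory()
gordon.learn("red phone", "Batman answered last time")
gordon.learn("red phone", "Batman answered the time before")
gordon.learn("bat signal", "Batman appeared on the roof")

# The phone isn't intrinsically 'about' Batman; it just always leads there.
print(gordon.recall("red phone"))
```

Nothing in the mapping itself is Batman-like; the ‘aboutness’ is only the reliability with which one cue leads to one set of traces, which is Michael’s point, and exactly what Jochen’s reply below presses on.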
Although your English prose is quite fine it occurs to me that you might not be American (meaning United States) and so might not be familiar with Batman or the Green Hornet. If that is the case please feel free to substitute the cartoon superheroes of your choice.
“When I learn about Batman I form memories of Batman and I acquire the ability to recall those memories.”
The problem is, however, how that ‘learning about Batman’ thing is supposed to work in the first place. To stay in your example, Commissioner Gordon forms an association between the phone line and Batman by virtue of his concept of Batman; that is, when he first uses the phone, he’s thinking about Batman, and it is this thought about Batman that lends its aboutness to the connection, such that the phone (or maybe its ringing) can afterwards be used as a ‘stand-in’, a kind of symbol, for Batman.
This is sometimes called ‘derived intentionality’, and it’s the kind of intentionality that a sentence, or a word or a phrase, possesses: a sentence is not intrinsically about something, but only once it is read by a person with the necessary understanding. To somebody with a different understanding, the sentence might have a different meaning, and hence, be about something different. So, its aboutness does not inhere in the sentence, but rather, within the understanding of the person reading it: it is brought into, rather than extracted from, that sentence. The problem is that whenever you follow this kind of derivation of aboutness, you end at some entity that already must possess it, not at nonintentional processes.
In fact, you’re already aware of this: as you rightly surmised, I am indeed not a native English speaker; but the concepts of ‘Batman’ and ‘Green Hornet’ have culturally diffused far enough for me to be sufficiently (or, as perhaps my wife might say, excessively…) aware of them to understand what you wrote. Had I not already possessed this understanding, however, your words would have been meaningless to me; likewise, had Commissioner Gordon not already had a concept of Batman, thoughts about Batman, then there would have been no way for him to associate the red phone with Batman. So this sort of association process always relies on pre-existing concepts; hence, such kind of explanation can’t suffice to answer the question of where concepts come from in the first place.
Any sort of representation needs to be interpreted. A representation in terms of English is meaningful to me, but meaningless to a non-English speaker; a representation in terms of Chinese is a set of weird squiggles to me, but meaningful to a Chinese speaker. But if we try to explain our understanding in terms of ‘internal’ representations, we find we hit an infinite regress: if we have an internal representation in such a way that some external representation—say, an English sentence—is translated into an internal representation, then we immediately face the question of how that further representation is understood; and if that again only works in terms of creating a representation of it in order to be understood, we have an infinite nested chain of homunculi interpreting representations of representations of representations, etc. The whole thing never bottoms out.
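Jochen’s regress of interpreters can be caricatured in code (purely illustrative, not anyone’s actual proposal): if ‘understanding’ a representation only ever means producing a further internal representation that itself needs understanding, the process never bottoms out.

```python
# A caricature of the homunculus regress: each 'understanding' step just
# wraps the representation in another internal representation that itself
# still needs interpreting.

def understand(representation, depth=0, max_depth=5):
    if depth == max_depth:
        # We must cut the recursion off by fiat; nothing in the scheme
        # itself ever supplies a non-representational bottom.
        return f"still uninterpreted after {depth} translations: {representation}"
    internal = f"internal-rep({representation})"
    return understand(internal, depth + 1, max_depth)

print(understand("'my dog'"))
```

The `max_depth` cutoff is doing all the work, which is the objection: an account of interpretation in terms of further representations has no principled place to stop.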
So if what makes the phone line be about Batman is some connection between centers of the brain, what makes that connection be about Batman? If it is some further connection, then we only end up iterating the problem, never coming closer to solving it. But if it’s something else, then why did we need that connection in the first place? Whichever option obtains, the connection doesn’t do any explanatory work.
“The problem is, however, how that ‘learning about Batman’ thing is supposed to work in the first place.”
So how did Commissioner Gordon learn about Batman? Perhaps the previous commissioner told him, or perhaps Batman himself did. Perhaps Commissioner Gordon suggested the Batphone to Batman. However it happened, I feel confident that it happened through some sort of purely natural connection. I agree with you that any human being must possess some knowledge or skill in order to acquire more knowledge. I believe that human beings are born with certain abilities, such as the ability to perceive using their senses and the ability to remember these sensory percepts, compare them to previous percepts, and create rudimentary classification schemes for them. In other words I believe human beings are born with the ability to learn. I agree that we don’t yet understand the neuro-mechanics that make learning possible, but the fact that newborn children learn indicates that some such neuro-mechanics must exist. That innate capacity to learn is where the chain of associations bottoms out, so to speak. Questions regarding the nature of the intellectual capabilities with which children are born are, to my mind, questions for neuroscience and evolutionary biology, not for philosophy.
“So if what makes the phone line be about Batman is some connection between centers of the brain, what makes that connection be about Batman?”
The connection seems to be about Batman because the connection leads to Commissioner Gordon’s Batman-related memories, concepts etc. To my mind the next logical question to ask is ‘how did Commissioner Gordon acquire Batman-related memories, concepts etc?’ My answer to that question is ‘he learned them.’ One might then ask ‘how did he learn them?’ or more generally ‘how do human beings learn?’ I admit to lacking a scientific theory regarding how human beings learn, but I don’t think that any scientific theory of human learning will require aboutness or intentionality in the sense of a supernatural mind-world connection.
That having been said, aboutness as a heuristic is useful for, as an example, teaching normally endowed children to read. A teacher can show a child a picture of a sheep and the word “sheep” and the child will add the word “sheep” to his pre-existing sheep-concept. The teacher can explain the concept to the child as if it is a connection between the word “sheep” and actual sheep, but it is not. Learning the word “sheep” does not create a supernatural connection between the child’s mind and actual sheep. The fact that this method of teaching relies on brain-brain connections and not mind-world connections becomes clear if instead of teaching a normally endowed child you are teaching a child who suffers from an aphasia.
I think this better explains the “learning about Batman” thing than I did above:
http://www.ncbi.nlm.nih.gov/pubmed/22221820