The Philosopher, the Drunk, and the Lamppost
by rsbakker
A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417
As well as the degree to which we should accept the deliverances of philosophical reflection.
Philosophical reflection is a cultural achievement, an exaptation of pre-existing cognitive capacities. It is entirely possible that, as such an exaptation, philosophical reflection suffers any number of cognitive short-circuits. And this could very well explain why philosophy suffers the perennial problems it does.
In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.
The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we are essentially can be found nowhere else, because that illuminated order exhausts everything there is. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.
Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within its traditional domains, we should expect it to be progressively delegitimized the way all prescientific discourses have been delegitimized.
To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.
As Keith Frankish and Jonathan Evans write:
The idea that we have ‘two minds’, only one of which corresponds to personal, volitional cognition, also has wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning. “The Duality of Mind: An Historical Perspective,” In Two Minds: Dual Processes and Beyond, 25
We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has just hit the streets of the capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ but not for very long.
The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).
As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.
Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!
In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.
The fact is the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of), so whatever functions they posit—say, functions realized in the scorekeeping attitudes of communities—have to track brain function somehow. This responsibility to cognitive scientific findings regarding their object is matched by a responsibility to cognitive scientific findings regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.
Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by other swarms of other nonconscious processes. As Dehaene writes:
The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105
Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (see Tim Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.
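To make this architecture concrete, here is a deliberately crude toy sketch of a global-workspace cycle. This is my own illustration, not Dehaene’s model: the processors, salience scores, and broadcast loop are all invented for the example. The point is only to show the bottleneck in action: many processes run in parallel, but each cycle promotes exactly one result for broadcast, and the losers leave no trace.

```python
import random

# Toy global-workspace cycle (illustration only, not Dehaene's actual model):
# parallel nonconscious processors compete, one winner is stabilized and
# broadcast, then the cycle repeats on the broadcast content.

def workspace_cycle(processors, content):
    # Every processor works in parallel on the current broadcast content,
    # each proposal arriving with a (here, random) salience score.
    proposals = [(p(content), random.random()) for p in processors]
    # Promotion to the workspace is all or nothing: one winner per cycle.
    winner, _ = max(proposals, key=lambda pair: pair[1])
    # Only the winner is broadcast; the losing proposals simply vanish.
    return winner

processors = [
    lambda c: c + " -> visual reading",
    lambda c: c + " -> semantic reading",
    lambda c: c + " -> affective reading",
]

content = "stimulus"
for _ in range(3):  # the serial stage: one broadcast at a time
    content = workspace_cycle(processors, content)
    print(content)
```

Note what conscious report would have access to in such a system: the serial chain of winners, and nothing whatsoever of the parallel competitions that produced them.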
One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., “A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period,” NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]
This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires the philosopher use the very apparatus they’re attempting to solve. In a sense they’re not only wagering that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw, one that appears as wide as the sky for neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.
But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:
A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC. Italics added, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417
As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [see Badre, et al., “Frontal cortex and the discovery of abstract action rules,” Neuron (2010) 66:315–326], though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience.] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication with other brain regions vary as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011), 1–12.]
Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but the philosophical moral of the research should be clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of, say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.
The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour). There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.
Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part), the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability not simply to use a cognitive tool, but to reliably determine what that cognitive tool is.
Even granting the intentionalist their spooky functional order, are we to suppose, given everything considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) versus the philosophy of mathematics (exaptation gone wrong). The oh so familiar criticisms of philosophy, that it is impractical, disconnected from reality, incapable of arbitrating its controversies—in short, that it does not decisively solve—are precisely the kinds of problems we might expect, were philosophical reflection an artifact of an exaptation gone wrong.
On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be even more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?
The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’
It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?
As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount of pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
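The peak-end pattern is simple enough to state as arithmetic. Here is a minimal sketch, using invented per-minute pain ratings rather than Kahneman’s actual data:

```python
# Peak-end rule (Kahneman): remembered pain tracks the average of the worst
# moment and the final moment; duration is largely neglected.
# The per-minute pain ratings below are invented for illustration.

def remembered_pain(ratings):
    return (max(ratings) + ratings[-1]) / 2  # average of peak and coda

def experienced_pain(ratings):
    return sum(ratings)  # total pain actually lived through

short_procedure = [7, 8, 7]              # ends abruptly, near its worst
long_procedure = [7, 8, 7, 5, 4, 2, 1]   # same start, tapers off slowly

print(experienced_pain(short_procedure), experienced_pain(long_procedure))  # 22 vs 34
print(remembered_pain(short_procedure), remembered_pain(long_procedure))    # 7.5 vs 4.5
```

The longer procedure contains more total suffering, yet the remembering self rates it as milder, which is exactly why the slowly removed band-aid wins in retrospect.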
Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the physician in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the physician serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:
Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381
Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? So in the case of duration neglect, information regarding the duration of suffering makes no difference in the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:
An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85
Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.
Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader of Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:
You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201
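A caricature in code of why knowing less makes for a more confident story. Again, this is my own toy illustration, not Kahneman’s: confidence here is scored purely by the internal coherence of whatever items happen to be retrieved, with no penalty at all for the evidence that never arrives.

```python
# WYSIATI caricature: judgment is built only from retrieved items; missing
# evidence exerts no influence because it is, for the system, nonexistent.

def coherence(story):
    # Toy measure: the fraction of retrieved items agreeing with the first.
    matching = sum(1 for item in story if item == story[0])
    return matching / len(story)

# Confidence tracks coherence and ease, not completeness.
full_evidence = ["guilty", "guilty", "innocent", "innocent", "innocent"]
what_made_the_cut = full_evidence[:2]  # only what reached conscious access

print(coherence(what_made_the_cut))  # 1.0: a perfectly coherent little story
print(coherence(full_evidence))      # 0.4: the same case, facts included
```

Fewer pieces, better fit: the truncated evidence yields maximal confidence precisely because the disconfirming facts make no difference at all.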
You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side-to-side simply do not exist for the captives.
As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88
Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200), simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger, nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’
Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.
Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See, Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front Hum Neurosci. 2012; 6: 88. doi: 10.3389/fnhum.2012.00088, for a GNW related review] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?
We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.
Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.
Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.
All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!), but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it provides a very parsimonious means of dispensing with a great number of ancient and new conundrums…
At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition while neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.
And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation, you find a drastic plunge in the dimensionality, and a corresponding spike in the specialization, of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.
This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. So consider:
First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.
And compare to:
First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.
“Philosophical reflection” – a flytrap for metaphilosophers; swarm dreams from the future reengineering thought as philosophical humbug.
‘Swarm Dreams’ is going into my list of fictional band names…
haha…
Reblogged this on alien ecologies and commented:
R. Scott Bakker making his point hit home:
“To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.” Read him and weep… or, better yet, laugh that philosophy can now pursue other things than its tail…
The ‘philosopher module’ sounds more like the Neurosemiotic Gatekeeper: a sort of information marketplace where neuroswarm vies for the highest bid; and the conscious mind takes the leftovers…
There’s no such thing, of course, but rather some bricolage of mechanisms harnessed for employment by the state. The idea is to draw attention to the subpersonal information consumers involved to give people a sense of just how sketchy the enterprise of philosophical reflection is… whatever it amounts to, neurophysiologically speaking. It makes it harder to counter (as normativists do, say) that it is irrelevant to the ‘function’ of philosophical reflection.
Yea, the first time I read Kant I thought: “Damn, philosophy is abstract fiction, and concepts are the new illusions. Make your own concepts, have the poets believe it; you’re off and running…”.
Dolan? *snicker*
Is this “confusing their memory of experience with experience itself” not the temporal perplexity writ large? Memory and experience as a timespace problem: neurotime and realtime events coordinate neuropatterns as sliced timefeeds of extensible visual aids.
This is something I would love to tackle sometime, the way the kinds of chrono-cognitive illusions BBT describes might feed into physics debates. It actually explains what observer effects consist in, and what makes them so difficult to cognize around. It would be quite interesting if it could relieve physics of some controversy by isolating certain problematic intuitions in different debates. There’s no end to the controversy it’s caused in philosophy!
Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? – rehashing Nietzsche’s insights into fictive truth as necessary?
In my more megalomaniacal moments I’m convinced I’m petting the tiger that Nietzsche circled his entire life. Roy to his Siegfried…
🙂
You say: “As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!) but it is a growing one.” Why? Is it the framework or the way the framework frames the questions that escapes the current neuroscientific community?
I think it just has to do with neglect.
One of the things I really enjoyed in Deacon’s Incomplete Nature was his discussion of the discovery of zero: it really is amazing how far mathematics developed without it, around it, before Indian mathematicians began using it. What made zero so difficult? It’s like Kahneman says: the human brain has difficulty dealing with nonentities. We have a hard time ‘zeroing in’ on difference makers that make no difference simply because they make no difference. We need to glimpse their shadow first.
Yea, I caught your point of the battle between math and philosophy of math: pataphysical fictions for the mathematician: math as the end game for imaginary solutions…
So, the parental adage of removing the bandaid slowly was not a good analysis of the phenomenology of pain, but instead was just a rationalization so parents would not have to experience (watch and thus feel) the continual slow pain of their child?
There is probably a more generous explanation . . . just follow Kahneman and say that they were idiots, you know, screwed by that poor self-analyzing device.
Also, if you haven’t seen it, there is an enjoyable little 10 minute segment at spacetimemind with Brown and Mandik arguing over representation/information versus what-its-likeness, with Joseph LeDoux sitting in the wing, doing his best to stay out of the fray.
“Self-analyzing Device” is going into my fictional band name list.
I’ll definitely give it a looksee. Those guys are great!
Their records will need a warning label:
Parental Advisory
Implicit Lyrics
I wonder if it’s merely a coincidence that neglect seems to scale from the individual to the social and political. Just as individual memories are more constructed than veridical, our history is constructed to provide us with coherent, flattering narratives. Just as our brains flood us with happy chemicals when we figure out how to spin a fact in a way that flatters our self-concept, we flood ourselves with reward when a politician or a sitcom validates our prejudices, and we reward the politician with votes and the sitcom with ratings. It seems that as individuals and as a society we love ignorance, because the more facts you have the harder it is to fit them all into the coherent, flattering narrative that gives us the warm and fuzzy. We seem to have evolved to love certainty and confidence. As Ben Cain might think it, the alphas love their own certainty, the betas worship the certainty of the alphas, and the omegas heroically endure the miseries of epistemic humility.
And was it Siegfried or Roy who got his face bitten off?
I was trying to forget about the Tiger thing! The whole thing strikes me as ridiculous, but for some reason that story rattled me.
It almost certainly does scale up–that’s what I try to do with Neuroscience as Cognitive Pollution, but I just have the feeling of the bottom dropping out beneath my feet when I push that way. Who the hell knows how the dynamics transform at that level. Of course it’s the one piece that’s gone viral!
When it comes to tigers I think William Blake had the right of it:
Tyger! Tyger! burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?
Artificial Intelligence tygers, nanobot tygers, brain-machine interface tygers… We might all be about to get our faces bitten off.
“As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. ”
A toast to the memory of the thousands of men who might still be alive if George W. Bush understood this.
Bit Narnian with the title, aye? Nice 🙂
But these posts – to go with the booze theme, don’t these posts simply pass the alcoholic a series of shot glasses – shot glasses shaped in the form of letters? I mean you can try and pass the cup of water or cup of tea (when the scientists are feeling a bit frisky) from the teetotaling sciences, but sadly here in the transfer the philosopher basically turns water into wine in the hand over! It just feeds the problem!
Somewhat like AA, the first step is admitting the problem. It’s not intricate discussion of the chemical constitution of booze or its history! It’s that it causes a problem – or hurt. Or so I guess – people do go to AA and the ones who do, I’m guessing it’s mostly from recognising that something is being hurt (even if it’s just their own interests (losing a job, for example)).
I mean with AA, it’s not reason that gets someone over the line to admitting a problem. Is it? I might be way off, but it doesn’t seem so – it’s a matter of either they go to the meeting or they don’t, no long chatty in between they can stretch out forever.
So is it about talking repeatedly on neglect and metacognitio-stuff, multisyllable words that philosophers love to knock back (and regurgitate up again in new patterns to be semi-consumed back and forth in a way that’d make a Roman orgy’s participants feel sick)? Even David Wallace went to AA and had to go along with cliche, fairly non-intellectual statements like “One day at a time/do what’s in front of you” in order to engage with AA. So why aren’t you crying ‘scientism!’ and ‘performative contradiction!’ anymore? No, not the reasonings. Not the specs sheet. Why? Okay, bit much of a random internet poster to ask, but the main point is I’m just wondering if the whole technical read-out effort simply feeds the problem? It all becomes wine when they receiveth the cup.
As opposed to?
An appeal to the heart. Do you have some amount of personal connection to the people this is aimed toward? Do they have some amount of personal connection to you? Or is it all like a Call of Duty match and people are just using others to increase their K/D ratio, not to actually interact with other people? I mean what would be the point of just having all your world view twist and sway purely on observing blueprints? That’s kind of what Neal Cassady made himself into, wasn’t it? Pure blueprint contact. Let the tiger bite his (and everyone else’s with it) faces off.
Further, people who are or have been teachers sometimes seem to fall into habits that require an authority/submissive relationship to work out. And it just grates on anyone who has not submitted themselves to being a student. As part of a teaching course the post would probably do very well. But otherwise, obviously you only get to work with the relationship others will grant you. I guess I’d end on speculating whether the education system tends to teach students how to, in time, ostracise each other, for teaching them only to speak with students and not really with each other when they become teachers.
Dudes dudes dudes dudes dudes DUDES
Sorry to spam and go hugely off-topic, but… I just found a new book on Transhumanism came out in March this year, called Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution by Ted Chu. I’m 30 pages in, it’s wondrous and hugely scary. Dr Ted Chu’s been at it for 15 years and this book seems to be on the cutting-edge of Transhumanistic thought and debate. Somehow though there’s the Hero/Messiah archetype in there too, just to pump us readers for the future. It’s a call to arms of sorts and written perhaps to invade the mass public’s ears/eyes/brains. Also there are some passages that make me think the guy’s talking about Kellhus… Anybody read this book and paint their pants brown yet?
Apologies for the off-topic, but had to give my enthusiastic and scared two cents.
It’s glib of me, but I’d wonder what his response would be if he was asked whether a transhuman (of any variety) would buy his book? Or pay his wages?
I think it’s implicitly touched upon in the book, which is an interesting read. It offers an enthusiastic Grand Narrative for humanity or its successors. ‘Cause there was a line that read: “Still, some human populations may be preserved—there will be debates over how to keep humans “natural” while minimizing their suffering from things like diseases and psychological agony over moral issues—but much of the Earth could be allowed to revert to a prehuman state.” Chu’s Grand Narrative is a cosmic one, and the transhuman is the Cosmic Being, CoBe for short. Just don’t imagine it dribbling the future alternative of a basketball, it’s the wrong image.
Has he considered any future predictions from any value sets other than his own? Such a ‘put dad in an old folks home/planet’ seems to come from thinking certain values will be carried down to transhumans. Or worse, he either thinks such values are just part of the universe and so have to be transhuman values as well – or even worse, he doesn’t even think whether they are part of the universe, just acts from that principle of them being so.
Sometimes I humour the idea there is a strong predictor mind involved – but it’s lumped only being able to express itself through a positivism spectrum. So it does the best it can. Zombie movies might be one example, where perhaps the predictor has to work through the positivism of there not being zombies (ugh, ugly term – I’ll say the predictor has to deal with how much hate the positivism condones as well). I wonder if stuff like that is hidden in the book?
Any suggestions for errors in thought are welcome. I am trying to see if I understand. What I see going on is that the brain is blind to the ‘dreaded unknown unknowns’. The question is what are they (for me?); you posit that some of those unknown unknowns are the brain itself. We are guessing, and more often than not our guesses are wrong. Can anyone elaborate further…
Medial Neglect: the Order of Things we try to outwit is the Brain’s on compass of which we know nothing, not even that we know… nothing.
What is the on compass. Sorry I’m not a philosopher.
it was a figure of speech… compass as being the coordinating mapping system, etc. not an actual compass in the literal sense… 🙂 I figured with your Absurdist appellation you’d sense that 🙂
Ok, yeah 🙂 I’ve been reading your passage I guess the words I don’t get are in the order of things. And I’m guessing medial neglect is what you are trying explain about what I said. Sorry I used to be smarter or perhaps I’m not as ignorant of all my ignorances as much.
Oh, no problem… Order of things is just a figure of thought for our current accepted scientific knowledge base… nothing extravagant… just a metaphor for human knowledge.
Ok, thanks so what we don’t get is the limitations on our ability to get things. Our compass or in other words why we do what we do.
It’s like a cabby that likes to drive backwards. Why? Because he likes to see who he’s gabbing with in the mirror. Problem is he’s blind as a bat, so has to honk the horn to move and dodge the traffic.
The brain’s a one-way traffic cop: loading the signals while churning the complexity around him; we get the final product (i.e., move this way or that), but we’ll never get the bullshit he went through to get it to begin with (i.e., the brain’s too busy processing trillions of pieces of info at any single microsecond, so all you get are a few minimal signals: stop, go, turn left, turn right).
How much simpler can we make it? Ding… ding!
Yeah that makes sense; thanks
John Searle’s Chinese Room Argument (summarized here (http://plato.stanford.edu/entries/chinese-room/)) purports to prove that human thought is not merely computational and that therefore machines other than human brains cannot think. Ask yourself what information you would need to answer the question ‘is human thought merely computational, something other than computational, or something in addition to computational?’ with the same level of intellectual rigor one would expect, for instance, of a scientist reconstructing the geological history of the Great Lakes. Then ask yourself whether you could acquire that information by philosophical reflection alone.
I don’t really buy that. I would argue understanding isn’t anything more than knowing the algorithm. Couldn’t it be said though that we don’t know enough of ourselves to clearly answer this question. I ‘intuit’ that our understanding is just manipulation, not necessarily good or bad motives; but I don’t know.
I don’t buy Searle’s argument either. My point, and Scott’s point so far as I understand it, is that the information about how brains work that one would need in order to answer this sort of question in an intellectually rigorous way is not really available to philosophical introspection. It used to be the case that the information was not available anywhere else, so while philosophizing wasn’t doing any good it wasn’t doing any harm.
William James once described philosophy as “an unusually stubborn attempt to think clearly.” As questions about how the brain stores, retrieves and processes information, and related questions about whether and if so how the brain has semantics over and above syntax, become subject to empirical verification, philosophical stubbornness comes to be counterproductive. Sometimes gathering data is a better use of your time than theorizing, especially if your theory tells you there is no data to be gathered.
My own take on the Chinese Room Argument is that God’s eye view arguments of this type are mostly useless. From the perspective of the people outside the room feeding questions into the room the available data are inadequate. Human language-related behaviors include far more than conversation, and a valid assessment of whether a given entity understood a natural language would require interaction with that entity across the entire range of human language-related behaviors. If the behavior of the entity in question was indistinguishable from the behavior of a natural human across the entire range of language-related behaviors I think you would have no intellectually defensible choice other than to conclude that the entity possesses equivalent natural language understanding. If we are assessing language understanding via behavior (and that’s how human beings have done it for virtually the whole of human history) the semantics versus syntax issue is not relevant.
That having been said, God’s eye view arguments are useful when you can actually take a God’s eye view. For most of human history it was not possible to look inside the Chinese Room of another person’s head to determine how they were processing language. As computers become more powerful and as our empirical understanding of the brain grows, the structure and function of the human brain will inform how we design non-human brains. That’s not to say that it necessarily makes sense to design non-human brains like human brains. We learned a lot about aerodynamics by examining birds, but airplane wings don’t flap.
Ok yeah thanks. That makes sense to me, and I dig it. Though for my own part I’m betting human understanding isn’t anything more than the ability to manipulate our world.
http://www.iarpa.gov/index.php/research-programs/microns/microns-baa
It used to be the case that the information was not available anywhere else
Good point! What else was there to trust? And on a matter that to various degrees must be tied to survival!
so while philosophizing wasn’t doing any good it wasn’t doing any harm.
I don’t know, doesn’t Plato’s cave open up questions that might have made people think twice at some point? I guess it’d be interesting to run a science experiment on that, something like: people are given a talk about Plato’s cave, then without them knowing it’s staged, put into some situation with a choice involving something they might think they know about themselves. Okay, vague experiment so far. And I’m remembering the Good Samaritan experiment that was similar – perhaps philosophy might fare better than religion? Let’s race ’em!
You say: “For most of human history it was not possible to look inside the Chinese Room of another person’s head to determine how they were processing language.” True, but even now we do not have direct access to the brain, we are still dependent on interfaces, apparatuses, etc. that translate these processes into images, data, etc. that we can then interpret etc.
One of the problems I still face is that language itself did not exist in a vacuum, it was bound to at least one other person or community of persons. Language is social, therefore, as Hofstadter and Sander in Surfaces and Essences argue, we began from the beginning forming continuous categorizations by analogy in this give and take of communication between one or many others to build up these links that over time became part and parcel of our brain’s essential makeup. Therefore these processes of the brain were in some way already external to the very structure that shaped them in the first place, making the brain as we’ve come to know it a “social brain”. So by the very nature of this process it was this interaction with the environment and milieu of social interaction that we do have access to the very processes that have shaped the brain, because they cohabit the very structures that have also shaped communication in both the narrow and wider sense.
Next is such aspects as mathematics, etc., in which the symbolic structures of the brain encoded in iconic, symbolic, mathematical structures as well became externalized in data storage devices: clay tablets, papyrus, paper, computer hard drives, etc. In this way we’ve interacted with and externalized models of the brain from the beginning in the very structures of art, math, and linguistic systems. One could say that these very objective systems are the brain’s structuration and mode of externalization of these processes as they’ve developed through time. Without the externalization of these feature sets of the brain’s processes we would not have been able to develop second-order thought or the recursive self-reflecting modes of systematic conceptuality based on categorizations and analogy that have effected the invention of technologies etc. The objectification of thought is the objectification of the brain’s processes. Therefore we’ve never had direct access to these processes, but have always had indirect access through the relations of our codified systems of linguistic, iconic, symbolic, mathematical, etc. systems.
There’s something about Anton’s syndrome – I’d suspect more than the blindness. I mean some brain injuries have caused people, for example, to be unable to make a choice over simple things – which pair of socks to wear confounds them, possibly for an hour (as they try and reason it out). I should chase down the link for that – I might have found it here, originally. Anyway, the point is I wonder if maybe the sight-managing parts of the brain are perhaps near some kind of ‘self doubt’ part of the brain? And both got damaged at once? You might argue there should be cases where such a ‘self doubt’ area was damaged by itself and identified as such – but confidence isn’t treated the same way as blindness, so why would it get identified? Anyway, if you’re gunna hypothesize rape modules in the brain, I get to hypothesize self doubt modules!
Let me try a different tack… We’ve never had direct access to the brain. Even now all we have is indirect access through specialized apparatuses and interfaces that still codify the processes of the brain in computer processes that are read back out as images or transcriptions into various languages: math, signals, etc.
Self-reflection has never been internal to begin with; it has always been external in the sense that we objectify notions, ideas, concepts. We have no direct access to these notions, ideas, concepts except as we materialize them in speech or written signs. These may be stored in several external storage devices: clay tablets, papyrus, paper, computer drives, etc. Then we can play these processes back through reflection and work on them. In this sense the brain’s processes are externalized in speech and writing as communications devices for others or ourselves.
Over time, working on these processes through external and internal reflection, we’ve built up linguistic and mathematical meta-languages, or languages about languages: grammars of behavior and mental process. The brain was naturalized long ago; it is we who have misunderstood the ubiquity of the brain’s own external processes as manifested in language and math, etc.
It’s almost like a photo template or negative: in language and math we have the negative plate of the brain’s processes, temporalized and externalized as speech or writing. Once we recall them from their material manifestation (i.e., think of how the eye works) we transcribe them back into the brain’s own biochemical processes. But it was the brain itself that started this whole process of developing and externalizing its own processes in language and math, etc. Humans were from the beginning shaped by and in the brain’s own processes, and as the brain has through history developed greater and greater systems of transcription and translation, as well as external storage devices, its capacities and powers have grown all the more.
We’ve tried to bind the brain to the material skull when in fact it has bypassed our limited flesh and developed its own tools and means beyond our human limitations.
I’m going to try to offer a few remarks about both of your posts. “True, but even now we do not have direct access to the brain, we are still dependent on interfaces, apparatuses, etc…” In that respect neuroscience is a bit like particle physics. We can’t directly observe the collisions of subatomic particles, but we have theories about them and we design experimental apparatus to test those theories. When we design and build machines like the Large Hadron Collider we are constructing what we hope is a chain of causality between the subatomic event and the display on our computer screen. In the same way, when we design and build a camera, take a photograph, then develop and print the negative we are constructing what we hope is a chain of causality between the actual thing we photographed and the printed image.
There is a chain, or perhaps a web, of causality between the physical structure of the brain and the language, art, mathematics, philosophy, etc. that human brains produce. That web of causality was designed by 3 billion years of evolution rather than a few thousand years of engineering, and a human brain is at least as much more complicated than a Large Hadron Collider as a Large Hadron Collider is more complicated than my grandfather’s Kodak point-and-shoot. It is not yet possible to work the causality backwards from human culture to the physical structure of the brain. It may never be. My point about John Searle was that he is trying to do something like that. He is trying to reverse-engineer the physical, or more precisely the logical, structure of the brain from his idea of how human beings use language. I don’t think we know nearly as much about language or the brain as we need to know, or as Searle thought he knew, to make that a viable enterprise.
“One of the problems I still face is that language itself did not exist in a vacuum, it was bound to at least one other person or community of persons. Language is social…” I think there might be a little bit of chicken-and-egg in that. Did human beings evolve language as a way to represent the external world to their own cognitive processes and then realize that it would also be useful for representing their own cognitive processes to other people’s cognitive processes, and for representing their own cognitive processes to their own cognitive processes? Perhaps at this late date it doesn’t matter, but the fact that using language to communicate involves voluntary motor activity, and voluntary motor activity is usually preceded by some cognitive process, suggests that the beginning of language might slightly precede the beginning of linguistic communication.

However it began, language is certainly social now. In my admittedly very limited experience philosophers of mind don’t care for brain-computer analogies, but the idea of brains as nodes in a network does not seem completely unreasonable. A node can’t function effectively as part of a network unless it can be changed in some way by other nodes. Language is a way for nodes in a human network to update each other. It’s wonderful that technology has allowed your brain to update my brain even though we have never met incarnate. It’s even more wonderful that technology has allowed Aristotle’s brain to update my brain even though he died millennia before I was born.

Nonetheless I don’t think one should lose sight of the fact that the brain is a biological organ. For any one human being, access to the powers and capacities of the human network depends on the functioning of one’s own node. If I get Alzheimer’s I can’t get a new hard drive and a new processor, then download my backup files and reboot. I can watch my personality disintegrate or I can end it while I still have the strength to put a gun to my head.
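If it helps to make the node analogy concrete, here is a minimal sketch in Python – every name in it is hypothetical, an illustration of the idea rather than anyone’s actual model:

```python
# Minimal sketch of "brains as nodes in a network" (all names hypothetical).
# A node's internal state is opaque to other nodes; the only way it changes
# is by receiving messages - which is roughly the role assigned to language.

class Node:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # internal state, invisible to other nodes

    def send(self, other, topic, claim):
        # "language": the one channel by which this node can change another
        other.receive(self.name, topic, claim)

    def receive(self, sender, topic, claim):
        # the receiving node folds the message into its own state
        self.beliefs[topic] = (claim, "heard from " + sender)

aristotle = Node("Aristotle")
reader = Node("Reader")
aristotle.send(reader, "virtue", "a habit, not a single act")
print(reader.beliefs)  # {'virtue': ('a habit, not a single act', 'heard from Aristotle')}
```

The only way `reader` changes is through `receive` – nothing in the sketch lets one node inspect another’s state directly, which is the point of the analogy.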
And have you seen the movie Lucy? It’s a silly premise but it’s the third film about post-humanism by a major studio in the last two years (Her and Transcendence). Is three a trend?
Yea, I wouldn’t disagree with much of what you say in general terms… at the most basic level we’re tied into the quantum machines, then once these appropriate and organize at the level of cells we begin that trek to our current state, and yet we host billions of other organisms as well who share our system, so we are a sort of collective machine of bacteria factories, etc. When it comes down to it we’re a complex biotechnology that has discovered its ability to think, and is under the illusion that it knows just what that means… and, obviously, as has been pointed out on this blog many times: it’s all blindness and meta-neglect…
I think all I was trying to qualify is that it was the brain, in its evolutionary processes, that came up with the solutions of symbolic, iconic, etc. signs and linguistic and/or mathematical systems to adapt itself to the environment of which it is a part. Beyond that, it redirected these systems back on itself and presumed that it could uncover its own truth using tools built for first-order environmental concerns. It was wrong in that sense, but has since, in the sciences, discovered other indirect ways of accessing and knowing operations that thought itself was never equipped to reach.
Re Lucy, I haven’t seen the film but I saw this in a promo so it’s not spoilers – after escaping she asks a cabbie at gunpoint if he speaks English and when he doesn’t, she shoots him. Presumably dead. I mean, WTF is that? I’m not sure I like this unexamining method of media (I’m assuming the action just flows on and nobody talks about it). And I know I read Neuropath, and I know Neil is kinda random in perhaps just such a respect – but there’s a difference between having a character like that in a story and a movie which simply rolls on after such a breach event without any reflection. It just seems to be a movie that’s adopting the random nihilism of the character – endorsing it (simply by practising it, discussion-wise). To me it seems to just be failing to grasp the topic (at best) – or simple endorsement.
“In this sense the brain’s processes are externalized in speech and writing as communications devices for others or ourselves.”
But of course they are not externalised so much as stimulated by external speech and writing – we can’t lose the thread that basic stimulus-response mechanisms are still at work. What makes it so mysterious, or open to scientific inquiry, is that the brain is so complex and has so many mechanisms that perform translational functions. Translational mechanisms that most in the field of philosophy do not have an arm-chair understanding of, so they can only sit on a_____?
Probably like some of the pictures in previous posts here, but instead of pictures of machines fading to hazy fragments, have a picture of one object (perhaps a fairly crass, unaesthetic object) that has parts removed from it until it appears to be another object entirely (something more aesthetic). Perhaps because I’m more of a visual-based thinker, but it seems the more intuitive way to counter-argue him – that obviously the rose is a rose, of course – but then again, if you replace all the parts you took away to make it look like a rose, it doesn’t smell as sweet.
I haven’t seen Lucy either, but I’ve killed insects and felt no remorse. How much smarter do you have to be than another being in order for killing that other being to be free of moral significance? I’m not crediting the movie with trying to ask that question, but it’s a question worth asking.
Why does feeling nothing mean there’s no moral significance? I’m not appealing to anything supernatural either – just a butterfly effect that could make other things you treat as significant suffer for the ostensibly insignificant thing you did.
I’m not really subscribed to some sort of get-out-of-jail-free boundary limit on killing without remorse. Not that I don’t walk across grass often enough, crushing who knows how many insects without even knowing it. Also deliberately killing slugs who eat my garden vegetables! I’m all Punisher on their asses! Or whatever slugs have?
I assume we agree there is no morality other than what human beings create for themselves. There is no divinely ordained moral law which human beings are born knowing. If the consensus within a given community is that killing some particular kind of being has no moral significance then killing that particular kind of being has no moral significance, because moral significance is created by the community. If human beings create morality I think it worthwhile to ask what morality, if any, will be built into the thinking machines with which we are being threatened in Ochlocrat’s IARPA link above. I’m not a big Schwarzenegger fan but I can see why Skynet thinks we’re garden slugs.
And I haven’t seen the new Planet of the Apes movie but I suppose you can count it as post-humanist as well.
The following may be a bit ranty and perhaps jumps on certain wording – that’s because I think it’s pivotal in a general sense, it’s not just to jump on you, Michael. 🙂
If the consensus within a given community is that killing some particular kind of being has no moral significance then killing that particular kind of being has no moral significance.
I’m sorry, this is having your cake and eating it too. There’s nothing in your second repetition – there is no ‘then’ in ‘then killing that particular kind of being has no moral significance.’
There is no ‘then’. Just because a certain behavioural pattern has spread around a bit doesn’t lead to any ‘then’ in your statement.
The best you’ve got is ‘If the consensus within a given community is that killing some particular kind of being has no moral significance…then that’s what occurred amongst them’.
There is no divine affirmation for their consensus either, is there? IS THERE!? If there isn’t, why repeat any ‘then’ other than ‘then that’s what occurred amongst them’?
You’re literally inventing a dogma there – that ‘killing that particular kind of being has no moral significance’ – and trying to sell it to me because…there is no dogma.
because moral significance is created by the community.
No.
Sorry, I find this naive nihilism.
Moral significance is not on the periodic chart.
What occurs is that a behavioural pattern is reinforced by other behavioural patterns (and the reinforcement occurs because behaving as organised tribes worked Darwinistically well for a long time).
They aren’t creating some kind of moral significance thingie.
You’re gripped by your instincts to treat multiple semi-aligned behavioural patterns as if they are something more than multiple semi-aligned (or ostensibly aligned) behavioural patterns. Because that kludge thinking was the easiest way to think of it all, and the most efficacious for getting a tribe hunting food and getting babies to survive.
And personally I think it’s fine to think that way. It’s human. But if you’re gunna enter the game you have to leave this coat at the door. Cease auto-attuning yourself to various behaviour repetitions you might observe, for the time period spent looking at this ‘game’.
The best I can offer is that yeah, in my post I reached out to various similar thinking structures in other readers. It’s like the test they ran once on how various people from various parts of the world would name a couple of random shapes. One shape was blobby and one had sharp points – everyone in the world gave the blobby one a name whose sound wave was more rounded, while the sharper object got a name with a sharp sound wave. That’s because our mental architecture is very similar, world round. And that similarity is what I called out to.
Granted, it might seem an appeal to some ‘community-made moral significance’, or even to some divine-made moral significance. So yeah, my communication was kludgey, I pay that and kneel at the mention of my mistake (I guess kneeling sounds like another of the same appeal, though…).
If human beings create morality I think it worthwhile to ask what morality, if any, will be built into the thinking machines with which we are being threatened in Ochlocrat’s IARPA link above. I’m not a big Schwarzenegger fan but I can see why Skynet thinks we’re garden slugs.
I don’t think you build in morality – perhaps I’m not reading charitably, but you seem to be referring to morality as some delimiter that will ensure behaviour is strapped inside something one might think will stay ‘right’ or ‘good’. But we can’t even do that for ourselves – it’s only because we live such sedentary lives that we generally stay within the ‘right’.
As I take it we all end up building behaviours we think are right.
The machines will only be able to do that as well – ‘2001: A Space Odyssey’ had the machine go mad when it tried to fulfil two contrary commands from humans.
We’ll build morality into them as much as we do into our children.
They will be our children – and we effectively early teen parents, because we are so young. Children raising children.
And yeah, their mindset is likely to be radical – like raising an autistic child, but perhaps even more a mindset that does not overlap our own very much. They might use sharp noises to name bubbles and rounded noises to name sharp things.
And I haven’t seen the new Planet of the Apes movie but I suppose you can count it as post-humanist as well.
The difference is too clear. People aren’t going to auto-attune their morality to a clear ‘other’. But with Lucy? Also I’m wondering if her name is some sort of Lucifer reference – for the sequel: ‘I love Lucifer!’
Sorry all, for so many posts…and no links…I don’t have cool links…
I’m sorry, this is having your cake and eating it too. There’s nothing in your second repetition – there is no ‘then’ in ‘then killing that particular kind of being has no moral significance.’
A tautology can have those two steps. You can argue against its soundness of reasoning all you want, but it still has those two steps.
You’re literally inventing a dogma there – that ‘killing that particular kind of being has no moral significance’ – and trying to sell it to me because…there is no dogma.
Refuting the acceptability of killing as dogmatic is itself dogmatic.
Sorry, I find this naive nihilism.
Moral significance is not on the periodic chart.
If moral significance is generated by the brain, and the brain is physical, then moral significance is on the periodic chart.
I don’t use emoticons, but imagine one here. I think to the extent that we disagree, the root of it is here:
because moral significance is created by the community.
No.
If moral significance is not created by the community then by Whom is it created? Do you accept that moral beliefs are non-supernatural in origin? If moral beliefs are non-supernatural in origin, where else can they originate other than within the community that claims to be governed by them?
I think Nietzsche was right. The death of God was a disaster. In His absence our moral beliefs do start to seem either utilitarian or arbitrary. I do think that human beings and human communities tend to have broadly similar moralities because they have broadly similar neural architectures due to broadly similar evolutionary histories. Within that broad similarity each community will add the details most appropriate for their particular circumstances. Historically, communities have assigned divine origins to their moral precepts precisely in order to prevent members of the community from thinking critically about them. If moral precepts are no longer considered to have divine origin the only justification for obeying them is utilitarian. But if the rules are utilitarian can we still say they’re moral?
Frank,
A tautology can have those two steps. You can argue against its soundness of reasoning all you want, but it still has those two steps.
That’s a tautology in itself – interesting meta-planing there, keeping my attention on the original tautology while making one to encompass the original situation with the other hand, à la a magician’s misdirection. Possibly grabbing your attention too.
I looked up Wikipedia’s definition – ‘a self-reinforcing…’ – sorry, self-reinforcing? What is this witchcraft? Got its own self now? Wow, not only do we have the hard problem of the self, but now tautology has got a self as well?
What are you going to say – that saying it has two steps also has two steps, and I can argue against that all I want, but saying it has two steps ALSO has two steps, and so on and so forth? Two steps all the way down? Please, you’re losing track of your own ‘then’ statement’s recursive loop and treating such a loss as a significant indicator.
I know I sound like a glib bastard saying this, but I only know two ways – to lie about it, or a hot knife. Maybe one day I’ll know a third way to talk about it and feel bad I didn’t know sooner.
Refuting the acceptability of killing as dogmatic is itself dogmatic.
K. Was I bitching about having a dogma at all, or bitching about someone trying to sell me a dogma while pretending it wasn’t from a dogma? If you read me again, you’ll find it’s the latter. Yeah, I don’t want the taxi driver in Lucy to get shot for not speaking English – that’s a dogma and I’m cool wid it.
If moral significance is generated by the brain…
It…as in that idea you refer to…isn’t.
It’s no more generated than good or evil is ‘generated’ when you’re playing Fallout 3 and your cursor goes green or red over a target. It’s not. It’s just green or red.
How you interpret the colour is the only thing goin’ on. How you react is the only thing going on. How electrical impulses stimulate synaptic weights is the only thing going on. Or until you tie in something divine or some wacky shit, that’s all.
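To put that in the flattest terms possible, here’s a toy sketch in Python – entirely made up, faction names and all, not anything from the actual game:

```python
# Hypothetical sketch of the Fallout 3 reticle analogy - not the game's code.
# The mechanism is a bare lookup from faction flag to colour. Nothing named
# "good" or "evil" exists anywhere in it; the moral reading happens in the
# player, not in the code.

HOSTILE_FACTIONS = {"raider", "feral_ghoul"}

def cursor_colour(target_faction: str) -> str:
    """Just a lookup: red for flagged factions, green for everything else."""
    return "red" if target_faction in HOSTILE_FACTIONS else "green"

print(cursor_colour("raider"))   # red
print(cursor_colour("settler"))  # green
```

Green or red falls out of a set-membership test; any ‘significance’ is supplied by whoever is holding the controller.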
As I said, hot knife. Maybe one day I’ll know a less cut throat way which does not risk humouring the very things it’s there to cut.
Just a dumb messenger here, don’t shoot me in anything I really need.
Michael,
I’ve simply spoken about there being no ‘then’ in your repetition. That ‘then’ is just a sell, to convince me and at least make me accept the behavioural model as something my behavioural model takes into account (thus modifying it), if not to have me adopt your behaviour entirely (extensive modification).
As said before, leave your coat at the door on this. Because otherwise your coat will burn and maybe you end up something like Neil Cassidy. I really advocate a quarantine area on the following thinking.
Anyway community isn’t creating anything. As I said, we are impressionable – especially when we are children, less so as we get older.
All you have is a community of clay men, pressing their moulds onto other clay men. Some are crusty and do not take the mould’s impression so very much. Others are young and soft and very easy to mould.*
There isn’t a ‘moral significance’. That’s the disturbing thing – the thing that stops someone from mounting the pavement in their car and running you down…the thing you thought stopped that, does not exist.
Just a bunch of delicate mould transfer transactions. So very delicate.
I thought Scott already shot this stuff down with those posts about ‘man the meaning maker’ a while ago? And hell, he doesn’t even want it to be true, IIRC.
If moral beliefs are non-super-natural in origin where else can they originate other than within the community that claims to be governed by them?
Just imagine, instead of your moral significance, moulds, pressed on/passed on over and over, at this point or that point, over and over. Moulds, all the way down.
* And sometimes…sometimes we undermine the integrity of our mould, at the risk of the other remaining concrete, simply in case we both let them soften and perhaps actually talk rather than press.
I thought “Lucy” was a reference to the African fossil. I’m not trying to sell you a dogma. I’m trying to argue that the members of any community collectively sell themselves dogmas. If my argument seemed to have a bit of grift in it I suspect it comes from the fact that once you remove the divine origin most of the beliefs that divinity formerly underwrote seem to have a bit of grift to them.
And I think at least some of what passes for morality is the fear of getting caught.
Well, you mentioned the movie ‘Lucy’ above.
And have you seen the movie Lucy? It’s a silly premise but it’s the third film about post-humanism by a major studio in the last two years (Her and Transcendence).
I actually got a bit pumped about the movie you mentioned so I talked about it further.
Otherwise my comments have been in regard to the following comment and the ‘then’ part of it. I don’t think it says they just sell it to themselves; the ‘then’ statement is the same sort used in ‘if you boil water then you get steam’. It’s a misplaced use.
If the consensus within a given community is that killing some particular kind of being has no moral significance then killing that particular kind of being has no moral significance.
For “Lucy”, here is an interesting review from a neuroscientist. My take is that if Morgan Freeman is in it, it has some gravitas, based on his character and “Through The Wormhole” science association.
As far as language and intentionality, I always like this PBS show about Watson, the Jeopardy computer: http://www.pbs.org/wgbh/nova/tech/smartest-machine-on-earth.html
Great insights about solving the syntax of human language – and really, Watson may be the ultimate in neglect.
So what exactly does BBT implicate in terms of action, if anything at all? After all,
“All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems.”
in which case we will eventually throw it into the same grave as philosophy: another abortive attempt to interpret the world that fails to recommend – and subsequently succeed in – changing it, which has always been one of the implicit promises of philosophy.
Personally, it’s been helpful. Knowing it makes judging others harder, but the reflex is still there.
Got a fatal error, Gilbert. I can be reached at richard.scott.bakker@gmail.com.