Introspection Explained
by rsbakker
So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.
In the article, Bayne divides the philosophical field into two general camps: the introspective optimists, who think introspection provides reliable access to conscious experience, and the introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.
Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.
According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:
When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.
When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.
Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:
it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5
Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness, for instance, is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is the fact that the cognition of missing information requires more information. For the nonce, however, Bayne’s claim is modest enough: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.
Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.
The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what-it-is-likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).
The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.
Bayne then moves on to the case motivating pessimism, particularly as Schwitzgebel presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities: the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that
any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8
His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.
So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:
Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11
Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.
Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.
And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks that examples of introspective capacity can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).
Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.
For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.
In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’
As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:
perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16
The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.
This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.
In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians). So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.
So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect metacognition to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.
Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves in the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which our brain exceeds its own capacity to cognize itself in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.
A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize the components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.
Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than those of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition to consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
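The Hertwig and Hoffrage point can be made vivid with a toy simulation of my own devising (an illustrative sketch, not one of the ABC Group’s actual studies): in a noisy environment, a zero-parameter ‘heuristic’ that simply predicts the training average can beat a many-parameter model when samples are scarce, and lose to it when samples are abundant.

```python
# Toy illustration of "less data -> simple heuristics win" (assumed setup,
# not an ABC Group study): a degree-9 polynomial versus the crude
# "predict the training mean" heuristic in a noisy linear environment.
import numpy as np

rng = np.random.default_rng(0)

def experiment(n_train, deg=9, trials=200):
    """Return (poly_mse, mean_mse): average held-out squared error of the
    complex model vs. the simple heuristic over `trials` random datasets."""
    poly_mse = mean_mse = 0.0
    for _ in range(trials):
        x = rng.uniform(0, 10, n_train)
        y = 2.0 * x + 1.0 + rng.normal(0.0, 3.0, n_train)  # noisy observations
        x_test = rng.uniform(0, 10, 100)
        y_test = 2.0 * x_test + 1.0                        # noise-free truth
        fit = np.polynomial.Polynomial.fit(x, y, deg)      # complex model
        poly_mse += np.mean((fit(x_test) - y_test) ** 2)
        mean_mse += np.mean((y.mean() - y_test) ** 2)      # simple heuristic
    return poly_mse / trials, mean_mse / trials

scarce = experiment(n_train=12)    # barely more points than parameters
abundant = experiment(n_train=300)
```

With a dozen samples the polynomial chases noise and its test error dwarfs the heuristic’s; with three hundred it recovers the underlying structure and wins. Adding parameters when data is scarce compounds error, just as the quote claims.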
Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific ‘problem-ecologies,’ narrow, but solvable domains that trigger their application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then the potential for heuristic misapplication is always a possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.
In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.
Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.
Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?
This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.
Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.
But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.
To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:
How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36
Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: Questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates either a lack of information access and/or metacognitive incapacity, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.
So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not much at all, as it turns out.
Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.
And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’
So long as we assume that cognition requires work, we must assume that metacognition trades in low-dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out, inferentially, what information is for what.
And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.
Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.
Uhm, I drowned in jargon rather quickly, and I began to suspect that when the jargon is this specialized, the flaw in the reasoning might lie in the taxonomy being wrong.
The way I see it:
introspection = self-observation
So that already implies that it’s a second-order observation of a process already concluded. Something DETACHED, alien.
In general, no one trusts “the process”. We trust the outcome. The same way we “believe” in science not as something ultimately true, but as a compromise that works better than believing in magic. We can’t be sure why it works, but we know it does.
So, why did someone drive to work today? Introspection can fabricate a number of good reasons: why doing so was desirable compared to not doing so.
Of course we do not know anything about the process. But we can figure out whether it eventually makes sense in a usable way. Those motivations are likely reusable in different contexts and predict a number of outcomes, the same way we notice when a character in a novel starts acting illogically compared to his disclosed motivations.
It’s like a computer game. Do you load up the lasers or do you fire the missiles? That’s what a player sees, without knowing or caring about the inner workings of the actual code. But maybe the information exposed to the player for high-order decision-making is wrong, and so a player might guess that something is broken and that the decision-making needs to rely on different elements.
Once again there’s a gap of unknown between what the player sees and the actual code. But observation leads to different inferences and conclusions. In the end the “model” the player builds might be completely different from the actual program that directs everything, but if the model predicts efficiently what happens, then it works. It might not be accurate, but it is at least satisfying.
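The player/code gap can be made concrete with a toy sketch (everything here, the scoring rule, the thresholds, is invented for illustration): the player's model shares nothing with the "actual code" internally, yet predicts its outcomes well enough to be usable.

```python
import random

# Hypothetical "actual code": the game's real decision process,
# which the player never gets to see.
def actual_decision(energy, distance):
    score = 0.7 * energy - 0.4 * distance
    return "lasers" if score > 0 else "missiles"

# The player's model: a cruder rule inferred purely from watching outcomes.
def player_model(energy, distance):
    return "lasers" if energy > distance else "missiles"

# The two differ internally, yet the player's predictions mostly work.
random.seed(0)
trials = [(random.random(), random.random()) for _ in range(1000)]
agree = sum(player_model(e, d) == actual_decision(e, d) for e, d in trials)
print(f"player model agrees with the actual code in {agree} of 1000 cases")
```

The point of the sketch: the player's model is "wrong" as a description of the program, but as a predictor it is good enough to play with.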
There is no “insecurity” as a thing that is variably more or less accurate, only function. Thoughts, being symbolic, do not need to imitate the shape of what they symbolize. So they aren’t accurate *at all*. They need only to model accurately in order to function. And whether or not they are good enough and function well is simply a matter of trial and experiment.
The confusion comes from mixing symbolic thought with a chemical process. Of course they are not the same. Of course introspection, meaning a symbolic thought, is not epistemically correct. That’s the hard problem, the gap between being and representation. A rock is. A human being represents and simulates. Consciousness implies identity, and identity implies self-observation, so a distinction. Because of this distinction you CANNOT at the same time observe and be. And since we do observe, we cannot “be”. Meaning, concretely, that we only model things and live in this modeled symbolic artificial world we call “consciousness”.
That’s why, in 2015, we are no longer satisfied with the model of consciousness we built, and so are looking for a better, more detailed model that satisfies us more fully.
Why again? Because introspection (aka the ongoing recursion of observation) keeps making differentiations, splitting hairs, and so demands more accuracy the more we discover.
“Short of magic, metacognition requires physical access and physical capacity.”
Nope, it only needs a model that works. The same way we don’t know whether “math” is a real thing or not, but we keep using it as long as it provides useful outcomes.
That’s an “emergent” thing. We mostly care about results on our level of consciousness. A “choice” is likely a chemical process, but you can evaluate and represent that choice even if you are completely unaware of its chemical process.
P.S.
About metacognition, again. Just think of it as if it happened between separate human beings. Like, I’m hidden here, watching you go about your life. I’d start inferring your choices, and soon enough I’d have a model nice enough to work well at predicting what you’re going to do. Even if I never had any insight into what actually goes on in your brain.
The bottom line is: I don’t need to.
What are “levels of consciousness,” and why do they appear? For that matter, what are “levels” of anything? Why does the universe manifest itself to us in “levels,” don’t you think there is something odd about this? Is this a property of the universe, or our brains? Both?
Are you asking that to me?
If you mean the emergent level versus the reductionist level, it’s once again because information theory says that “observation” always creates a separation: the subject observing and the object being observed.
You cannot know anything without a distinction, and so without a line of separation. Something that is undivided is something not recognized, not known.
Why are you conscious? Because you have identity, something that makes you feel distinguished from everything else around you.
“All that exists has, by existing and by not being the only thing that exists, individuality.”
Pain and pleasure exist in the form in which they are pertinent to individuality as opposed to the “world”. Your pain, your pleasure. Individuality. For consciousness to exist, it needs to be separate. It must recognize itself.
The moment the environment loops on itself in a way that makes itself a thing is the moment the barrier between “me” and everything else is created. The inside is “me”, the outside is all the rest. Here and there.
Since consciousness exists within its projected world, this makes it separate from the rest. Exactly because the brain isn’t complex enough to replicate all the complexity outside, it can only create a model that heuristically represents a minimal amount of, hopefully relevant, information.
this should help:
Wimsatt, “The Ontology of Complex Systems” (PDF)
Abalieno, exospective model building rides on access, and the robustness of our exospective model building capacity rides on our ability to vary parameters of the exospective access relations, and indeed on the ability to go cross-modal and compare what is gleaned from different channels of access. see the discussion in the wimsatt paper i just posted, in the first few sections on levels and robustness.
Thanks DBZ, this paper should be helpful for my medical research methodology. All this time I’d been using the concept of robustness, without realizing how robust it actually is!
“Even if I never had any insight into what actually goes on in your brain.
The bottom line is: I don’t need to.”
That’s roughly what heuristics are all about: solving using as little information as possible.
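A classic illustration of solving with one bit of information is Gigerenzer's recognition heuristic (a sketch only; the set of "recognized" cities is invented for the example):

```python
# Recognition heuristic: answer "which city is larger?" using a single
# bit of information (do I recognize the name at all?) instead of any
# demographic data.
recognized = {"Berlin", "Munich", "Hamburg"}  # a hypothetical subject's knowledge

def larger_city(a, b):
    """Guess the larger of two cities from recognition alone."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # heuristic doesn't discriminate; other cues needed

print(larger_city("Berlin", "Herne"))   # recognized name wins the guess
print(larger_city("Berlin", "Munich"))  # both recognized: no verdict
```

The heuristic ignores nearly everything, and that is exactly why it is fast and frugal: when it applies at all, one cue settles the question.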
I thought I had a handle on what you were saying, Abe, until you mentioned that metacognition doesn’t need access and capacity.
I think we’ve made some progress in this discussion.
I understand and agree that our heuristics work relatively well when applied to the outside, but are kind of crap when it comes to introspection. So I understand why metacognition, being self-reflection, actually needs to access specific information and can’t just simply solve stuff while sticking to the high level.
The problem is that we become effectively ALIEN to evolution. We can still imagine we are no different from apes, dogs and everything else, but what happens when human beings take control of genetics and start making their own choices? Can evolution that self-observes and CORRECTS still be considered evolution?
I wonder if there ever was some other event in the past that actually modified the rules of evolution.
I think the transformation is at least as significant as the development of multicellular life. I have a post on this from a couple years back… Can’t remember the title tho.
Reblogged this on Adrian Nathan West and commented:
Fascinating piece on the possible limits of introspection at The Three Pound Brain, courtesy of Germán Sierra.
“Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.”
Philosophy is a game, and science is cheating.
Or philosophy is a religion and science is blasphemy. The two statements may be equivalent.
I am starting to suspect it is just as impossible to speak about the intelligibility of introspection, as it is to make a proper medical diagnosis using only English.
BBT is, most basically, a diagnosis.
But doctors’ mistaken reliance on semantics, on language, has left medicine stranded against the overwhelming biochemical complexity that is us.
Medicine’s evolutionary history started with horribly misleading oversimplifications (“diagnostic categories”), and the intellectual organisms that evolved from these primitive building blocks are clearly incapable of solving complex dynamical systems. This failure manifests everywhere: in the interminable controversy over the DSM-5, in the tortured lives of people with complex multisystem neuroimmune pathologies including CFS/ME.
“It’s all in your head,” whether in cognitive science or in medicine, is doctor-speak for “I have no fucking idea what I’m talking about, so I’ll blame it on you.”
So do we continue to work with these hopeless tools, and try to exapt clarity from broken toys?
Or do we give up, embrace mysterianism *at the semantic level,* and leave the real solutions to the math, the data, and technologies more capable than ourselves?
The real solutions to what, Kat? The problems the math, the data and the technologies have?
I’m trying to suggest that technology-driven thought (including mathematical and data tools) is better at problem-solving than human brains are.
Perhaps I have a hangover from the last post on AI’s, but to me maths and data tools are inert – unless you go full AI. So I read you as leaving the solutions to AI’s.
In the end, math and data tools are still just a kind of gun that we fire. They don’t solve any problem that we don’t aim them at.
Computer-assisted cognition. That’s what I be talking about haha
But in another sense you could say the problem is simply the cost of doing cognitive business at a given level of resolution. Our capacity to systematically interact with other systems is limited by the finite nature of our sensitivities and our cognitive tools. So in a sense, we’re doomed to dwell among cartoons; the question ultimately is one of how much they effect, enable. Sometimes, unfortunately, all they enable is a certain kind of social signalling, one bent on conserving power and privilege.
The question of what to do once we embrace the finite nature of our tools is a damn interesting one. The thing to avoid, I think anyway, is letting the traditional dichotomies own our cartoons. As soon as you cut the activity of cognition from the activities cognized and frame things in subject/object terms, you’ve entirely elided its high-dimensional embedded nature, transformed everything into a ‘for-you’ and an ‘in-itself,’ the latter being unknowable by definition. But if you begin with the premise that scientific theoretical cognition is the only reliable theoretical cognition we have, and refuse to take the traditional philosophical step of asking what this means in intentional (subject/object) terms, then you’re left with the picture of natural systems engaging natural systems, sometimes reliably, sometimes not.
I agree that we are clearly stranded with cartoons, so sure, why not see how these cartoons evolve once we cognize them for what they are?
But once we accept this picture of natural systems engaging natural systems, there’s the question for scientific theoretical cognition: what sorts of these engagements are reliable, and which aren’t? I’m arguing that the more reliable engagements are likely to result from some cooperation between old technologies (us) and new technologies (AI). There’s a great book by Tyler Cowen, Average is Over, in which he notes that the highest-scoring chess teams are neither humans nor AIs, but collaborations between the two (“freestyle chess.”) I think these kinds of collaborations are more reliable sources of robust systematic engagement, and they are likely to replace, more and more, “natural” human scientific theoretical cognition.
I think this is pretty much what mathematics has already become. But the processes driving this transformation are ramping up, which means the collaboration will become more and more one-sided, and that the human side of the equation will be progressively relegated to translation and PR.
As a systems engineer, experience teaches that it is difficult to understand and solve on a higher systems level when you don’t have an understanding on a very basic or component level. Understanding, which is really the scientific equivalent of belief, is difficult when those basic functions and rules are not in place.
Over on Massimo’s Scientia blog, neurobiologist Brian Key has a very good essay on pain reception in fish in which he gives a nice summary of the structure of the human neocortex, the very structure which we evolved to give us introspection ability, higher language capability and higher social interaction capability.
What makes me an optimist is that similar problems can be solved by people with the proper background and training. What makes me a pessimist is that the field is dominated by Neanderthal philosophical minds who can barely plug the tv into the wall.
As a systems engineer, experience teaches that it is difficult to understand and solve on a higher systems level when you don’t have an understanding on a very basic or component level.
Are you even sure?
Take for example high-level programming compared to low-level. It’s universally accepted that high-level programming is more efficient because it lets you think and work with stuff that makes more sense at the level of semantics you’re accustomed to. If you need a result, and need it fast, the high level is the best.
So you solve problems much faster, without any idea of the inner workings of what you’re actually doing.
It’s not “optimized”. And this is coherent with the idea that consciousness is heuristic, meaning it’s just a fast, efficient way that is likely not the “best”.
This also means (and I think that’s what you actually meant) that, as Scott says, the high level is terrible at “knowing itself”, meaning that it has an imprecise idea, or just no idea at all, of how it actually works. But it might instead be good doing its job, which is problem-solving the outside.
But if this is true, it basically inverts BBT: consciousness would be extremely good because it is able to translate information into a model that works faster and more efficiently, without being frozen in complexity (where processing a single second would take a billion years).
This was actually a key point in the study of AI, I think: the difference between simply calculating all possible moves and instead finding the best result when you’re limited by time (since we’re mortal). The very beginning of “Gödel, Escher, Bach” was about this.
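The high-level/low-level contrast can be sketched in a few lines of Python (a toy example of my own, not anything from the post): both routes give the same answer; the high-level call simply neglects its own machinery, much as consciousness is being said to.

```python
# High level: intent stated directly; the inner workings stay hidden.
data = [3, 1, 4, 1, 5, 9, 2, 6]
total_high = sum(data)

# "Low level" in spirit: the same result with every step spelled out,
# roughly the machinery the one-liner above neglects.
total_low = 0
i = 0
while i < len(data):
    total_low += data[i]
    i += 1

print(total_high, total_low)  # same answer, very different visibility
```

You solve the problem faster at the high level precisely because you never have to look at the loop, the index, or the accumulator.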
Well, to a high-level programmer, or even a tech who is good at troubleshooting the machine, whom you would consider a ‘layman’ compared to an engineer, it is not necessary to understand the engineering architecture and circuitry of how the computer works. However, the programmer, tech, salesman etc. has an idea that the computer is a machine that runs on the timing of digital circuitry. Our biology has the same timing characteristics but of course goes a lot deeper with feelings and perception.
Brains are biologically complex but have a lot of ‘giveaways’, visible when we study the brains of lower organisms, that they are naturally evolved engineering systems. As thinkers we get caught in blindness because we are limited by our conceptualization and get caught up in semantics. The hard problem to many is the inability to conceptualize what causes qualia, the perception of an inner self, and how the larger outer environment binds inside of us. Even if we understand the qualia concept as something physical, the machinery is still very complex for a non-technical person.
To summarize, what follows from simpler forms of nature is a basic stimulus-response system that evolves into more complex sensorimotor systems up the tree of evolution. What makes them more baffling than computers is the emergence of the larger environments they allow the organism to operate in, along with first-person experience, and then the follow-on complexity of introspective observation, third person, language, social-interactive dispositions, normativity etc.
Like listening to Beethoven’s Ninth Symphony, we can hear all of the music and machinery running for forty five minutes before the voice and language emerges in the fourth movement.
When you work in high-level languages you lose the flexibility you’d have if you worked at a lower level. You are stuck within the structures the programmer made when writing the language, stuck at the level of access they gave the language. Stuck within their paradigm. In other words you lose power. Worse if all those accommodations become invisible.
Sometimes I think the biblical tone of the books is a reference to that act being one of speaking closer to machine code.
‘Thou shalt not’ strikes deep code.
…I guess you’re having a different conversation?
Because while I don’t understand a whole lot about what is written in that link, it seems more about evaluating the validity of information.
It doesn’t seem to be related to possible “levels of consciousness” or the question “Is this a property of the universe, or our brains”.
My perspective on that question is that it is a property of language. More precisely a property of consciousness, and so of the brain. The universe doesn’t need any levels, since the universe arguably has no idea of self or self-knowledge.
The hard problem, the separation between matter and soul, is the original first separation we “feel”. That’s the first level. The correct question becomes: why do we feel separate if consciousness is simply produced by the physical brain?
So it’s kind of obvious that these levels are created by perception, not because they are actually there and exist. It’s human knowledge that, in order to know, needs to use language, and so levels.
You don’t need any epistemology other than (nor can you expect any other to make sense than) remembering that the universe exists in the brain, and not the opposite.
The brain is ALWAYS the only thing you can see (some of it, at least).
And you can only see through language.
Err… that last message was about the conversation above with Katherine.
But I agree with you that the ‘feel’ and ‘feels’ of qualia are actually something synchronous between the interconnected neurons. They ARE physically interconnected by dendrites and axons, and when those fire, SOMETHING happens… duh. Sorry if I’m losing my patience, but they constantly pick on vision or the muddle of emotional sensation to be ‘scientific’.
Vision may be important but I feel fairly sure about what I can hear and the words I can read on this screen.
A Guardian article a couple of weeks back asked ‘Why Can’t The Greatest Minds In The World Figure Out Consciousness’, like People Magazine naming George Clooney the ‘most eligible bachelor’. Does the self-acclaim have a connection to the introspective reliability arguments? Do their book publishers also own People Magazine, or sit on its board?
than remembering that the universe exists in the brain, and not the opposite.
What?
I could try and charitably guess what you mean, Abalieno, but even when being charitable one can invent strawmen – so to avoid that, what do you mean?
I could try and charitably guess what you mean, Abalieno, but even when being charitable one can invent strawmen – so to avoid that, what do you mean?
It’s not really tricky. The brain is always the proxy. You can only know the universe through the brain, so you can only know the amount of universe you can fit in the brain.
Like the “brain in a vat” hypothesis: you can’t be sure of what’s out there, or even that there’s a universe at all, only that the brain feeds you information. Information about the universe is still information contained in the brain.
So I simplified and said the brain contains the universe.
You appear to be following the second-order cyberneticists and radical constructivists here. But what they cannot account for, and what BBT does account for, is why this ‘feeling’ is so difficult to integrate into a naturalized understanding. They can’t account for the shape of experience, and they don’t treat any of the various asymptotes of horizonality that characterize experience. Von Foerster, in Understanding Understanding, does however give a couple of ruminations on blindness, and on seeing your blindness as what seeing consists in. Interestingly, he also had an interest in magic, but never really seemed to connect it to a naturalist understanding.
Abalieno,
Yeah, I dunno if I’m going to be pedantic or this is pivotal, but using ‘fit’ (as in ‘so you can only know the amount of universe you can fit in the brain’)? I think you’re drawing a distinction line by using the idea of a container, when there is no container and there is no fitting of anything inside of anything.
I agree with drawing distinction lines, but generally in a way that acknowledges their arbitrariness rather than gives them the credibility of physical objects that containers and filling have.
DivisionbyZer0,
yes, when it comes to epistemology, or the nature of truth, I think you can only go as far as the brain. Everything else is one step further, and toward more doubt. Thought, and so the brain, is the lens through which we know everything else. So the lens itself is the truest thing we can have.
So “constructivism” is the only principle I consider somewhat reliable.
BBT doesn’t seem to me to contradict constructivism. It’s just something added to the same picture.
Other things are maybe just my own limit. But I still don’t understand what is so “complex” about the feel of consciousness. It’s the mix of recursion + self-observation that makes consciousness possible. The “feel” is the result of that particular shape it’s built on.
I mean that it’s what “recursion + self-observation” feel like.
That explanation satisfies me, and I don’t even understand why others find it unsatisfying or incomplete, instead.
Callan,
the brain still contains information. Information needs a container. It’s the same as saying “you can only know the amount of universe whose information you can fit in a brain”.
the brain still contains information.
No, it doesn’t. Draw a boundary between that and whatever reference designates you, if you want. Fair enough, IMO.
Use levels of quarantine – at a certain level of discussion, sure, talk about information. But that’s just a quarantine, not how it is. Just avoiding a painful elephant that’s in the room (is the room).
Outside the quarantine, no, the brain contains/is matter (complicated matter, granted). It does not contain information.
It’s not right of you to insult Neanderthals when they didn’t have philosophy and are not around to defend themselves.
The wikipedia article on Neanderthals said they were the first to bury their dead in the ground, perhaps an early form of tenure?
Victor! Harsh! I loled!! 🙂
I plugged in a TV just today, in fact!
“As a systems engineer, experience teaches that it is difficult to understand and solve on a higher systems level when you don’t have an understanding on a very basic or component level. Understanding, which is really the scientific equivalent of belief, is difficult when those basic functions and rules are not in place.”
One of the holy grails in this strange quest has to be understanding why high-dimensional understanding of components possesses the monstrously big ecology it does compared to our other heuristic modes of understanding. Absent mechanical knowledge, we can only solve in ad hoc, narrow ways, it seems.
Actually I thought Eric, Massimo, Yourself and most philosophical types are techno-nerds at heart. My apologies for my snarkiness.
As I say, I think those neurons are doing something like a metabolic lock dance to give us this higher cartoon we call reality.
In the next life we may all be just pixels..
Derrida purportedly couldn’t operate a VCR.
Speaking of Derrida, have you read Livingston’s paper comparing Differance to diagonalization procedures in metamathematics?
No, but I’m regularly underwhelmed by attempts to generalize Gödel beyond maths. For me, differance can be understood as what mechanical irreflexivity looks like from the inside, you could say. My review of Hagglund’s Atheism book goes into this…
Derrida purportedly couldn’t operate a VCR.
That’s why he had a time machine. So he could go back and watch programs he’d missed.
It’s not a rumination on Gödel so much as on Derrida and his relationship to the formal systems sciences.
“…you can’t be sure of what’s out there, or even that there’s a universe at all, only that the brain feeds you information.”
Maybe I’m over-interpreting a grammatical artifact, but if your brain feeds you information is there a “you” separate from your brain but capable of receiving information?
Probably what I write is hard to understand because I switch modality very often.
Yes, there’s a separate “you”. But only in perception, not because there’s really this split in two.
Are you your finger? Nope, you can cut the finger and you’d still feel like yourself. The “you” is your thought, your idea of self, your consciousness. When you fall asleep and don’t dream, this “you” vanishes, because it’s again just the conscious part.
Consciousness is an observing system. Who does this observation is “you”.
Cleric shirt!
VERY cool.
We used a lot of electricity a few posts ago on dualism and the distinction between 1st order and 2nd order uses of intentional language. If you really think there is a “you” separate from your brain what is the nature of this “you?” I ask this sort of question of everyone I see use dualist-looking language and I never seem to get a straight answer. You also said
“The hard problem, the separation between matter and soul, is the original first separation we “feel”. That’s the first level.”
Do you mean “soul” in its religious/mystical sense?
Human beings, the idea of them that we have, not the scientific taxonomy, exist in an artificial world that is made of language.
So consciousness, the “you” explained above, lives in this artificial world shaped by language. The “soul” is again the same as the “you”. The idea that there’s this dualism between the body and soul.
We know dualism is an illusion, but from our perspective this dualism is a thing that is true. We perceive ourselves as separate.
The separation itself is created by language. Since we live in language our reality is about that separation.
Definitely straighter than usual. Thank you.
“to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness.”
Apparently not.
“The blindness of introspection to introspection”
…has obviously been ludicrously exaggerated. Blindness would mean an incapacity. If any such incapacity really existed then you could never speak of “the fact of this blindness” any more than a non-seeing thing could know the difference between brown and blue, or what it is like to be coloured or not-coloured. It sounds like you are claiming to be blind, but notwithstanding that, I should let you lead the way.
“In nature, accuracy and reliability are expensive achievements, not gifts from above.”
Problematic at best if not actually totally wrong.
My own existence was not an expensive achievement. I totally received it – I am overwhelmingly indebted to “gift” in the form of nature and of nurture and this necessarily goes back the whole chain. Natural being receives its being from others normally and causes outside itself.
Indeed, from a philosophical perspective, treating natural being as an “expensive achievement” sounds a lot like saying something is the cause of its own existence; whereas, in nature, this is actually never the case (and impossible anyway). The earth is not the cause of its own existence; and to be sure, nothing on earth could be either. To that extent, the earth and everything on earth is indebted to other causes. Indeed, even granting that, say, fundamental particles are not or were not all of them produced, where they are and happen to be would still, presumably, be on account of something besides themselves, e.g., forces that acted on them, bringing or moving them to where they now are.
Also, what are “accuracy” and “reliability [in nature]” even supposed to mean above? Do you mean a relative stasis? In that case, the stasis again in things like fundamental particles seems achieved rather naturally, which is to say these things by no means had to labour in order to acquire their present states and typical characteristics. They have them by nature. But even something in nature such as, e.g., that it is normally cold in the winter months in Canada is a stasis of sorts and usually naturally reliable.
And what about “accuracy”? Do you mean something like a species’ success in producing viable specimens, which generally they do absent extrinsic factors (a natural disaster, say, that wildly alters the habitat and suddenly makes the species ill-suited for living there)?
Finally, concepts like the laws of physics are, as it were, just there or given; and these things are above all what makes nature reliable and hence predictable and to that extent, I suppose, accurate too.
To us the visible spectrum appears to be the spectrum of existence. Our day to day visual experiences are overflowing with such details that we could spend an eternity discussing all its properties and intricacies. The eye is also quite the evolutionary marvel. Despite its weirdly inverted wiring and its propensity to malfunction due to protracted use, age, genetics, malnutrition, disease, unusual inputs, and a whole host of other reasons, it is still absolutely an achievement in how, when functioning well, it enhances our ability to survive and interact with our environment.
Yet our eyes can only detect a tiny sliver of the electromagnetic spectrum. And not only are we blind to that which lies outside the visible spectrum, we were for the longest time also blind to this blindness. There is no sensory equivalent of an indicator that points to the lack of visible light, and so you go through your days surrounded by a profound darkness that you cannot see. With no inkling of what we were missing, we assumed by default the sufficiency of our vision.
We didn’t start systematically investigating EM radiation outside of the visible spectrum until around the 19th century, which is incredibly late when compared to the history of visual sight. That only became possible when we started to develop tools that could interact with EM radiation in a way that our eyes couldn’t hope to match.
A similar development is happening to neuroscience today. The scope of introspection appears as wide as the ocean to the thinker, but we are inventing ever more capable tools that can detect a vast array of neural activity that lies outside of conscious experience. Yet as far as consciousness is concerned, what doesn’t register in experience might as well not exist. Just like our eyes, when it comes to the brain, it’s no longer a smart bet to think that you can innately “see” even a significant fraction of what’s actually going on.
My own existence was not an expensive achievement. I totally received it – I am overwhelmingly indebted to “gift” in the form of nature and of nurture and this necessarily goes back the whole chain. Natural being receives its being from others normally and causes outside itself.
If you look at this more radically you’ll figure out there’s no “gifting” from something to something else. Just one whole ongoing process.
Natural beings don’t receive anything, they are just one moment flowing smoothly. Causes that come before simply flow onward (and past you).
John Fowles describes this well. “Being” means the undivided process, and when there’s “being” there’s no knowledge, since knowledge requires separation, and so individuality.
So “your own existence” begins with a part of this process acquiring individuality. But it’s also a fundamental lie, since you aren’t really separate from the process.
this is all bordering on the kind of metaphysics that BBT simply abandons. but i’d insist that there are some kind of prelinguistic separating processes occurring. for instance, in cosmology there are regions of the universe that are literally causally disjoint; signalling to and from these disjoint regions is impossible. in more germane situations things are “connected” to the extent that they can effect reciprocal perturbations on one another. everything has an interactional scope or field of possible interactions, and this itself can change spatially and temporally. i don’t see why this would be obviated if there were no language, or even that this is an effect of linguistic discrimination. wimsatt, again, discusses this in the paper i posted, where he reverses the thesis of linguistic constructivism and claims that language has the structure that it does because of a more originary prelinguistic structure of reality.
You think so?
I’d say BBT is just using different language.
If BBT closes the gap of the hard problem by describing how information works, then the conclusion is the same: everything is just naturally flowing information.
Of course there are separations. Soaking in water is not the same as soaking in lava.
But again it’s the same environment that configures in different ways. The same as you can gather some sand and configure it in the shape of a castle. It’s still sand. The pattern you configured it into makes it look like it’s a new, different thing. But it’s still the same stuff.
It’s not metaphysics, it’s theory of information described in a more intuitive, metaphoric way.
Language only affects perception and knowledge, so it cannot “obviate”. It doesn’t touch things and make them change.
And OF COURSE language has a structure because of the structure, or pattern, of reality. Because language is still a natural phenomenon that starts from the natural world, it’s not some magically independent abstraction.
Kabbalah explains this pretty well with the idea of the language of root and branches. Just because there’s a spiritual world that is of a completely different nature doesn’t mean that there isn’t a connection between the spiritual world and the physical one.
But when you translate something into something else, like prelinguistic structure of reality into language, you can’t expect to retain perfect identity of what you started with. Something is lost too.
… Has obviously been ludicrously exaggerated. Blindness would mean an incapacity. If any such incapacity really existed then you could never speak of “the fact of this blindness” any more than a non-seeing thing could know the difference between brown and blue, or what it is like to be coloured or not-coloured. It sounds like you are claiming to be blind, but notwithstanding that I should let you lead the way.
It’s no more ludicrous than a pilot saying they are flying blind. Instead of relying on their eyes, they rely on empirical instruments.
Have a go at him for not properly detailing the reliance on empirical instruments at each point, if you want. He’s declared his stance in regard to empirical measures before, but if he didn’t do so clearly enough here where it counted, feel free to have a go at him on that.
My own existence was not an expensive achievement. I totally received it
Along with Abalieno’s breakdown (which can be broken down even further), Scott refers to ‘in nature’. This is something like referring to ‘in driver culture’ – all you’re doing is changing the subject: ‘as a car, getting petrol was not an expensive achievement. I totes received it’. Yes. But you’re not the subject. You’re just the vehicle. We’re talking about the driver.
Indeed, from a philosophical perspective, treating natural being as “expensive achievement” sounds a lot like saying something is the cause of its own existence; whereas, in nature, this is actually never the case (and impossible anyways).
Perhaps I’m reading too charitably, but life, in response to Darwinistic forces, forms its own uber heuristics. The easiest way to understand those sorts of heuristics without resorting to massive math equations is to project human emotional attitudes onto them. Which is fairly appropriate, since our attitudes, with some tweaks, arise from the uber heuristic*.
To the uber heuristics of nature, there are expensive achievements. To put it in a simplistic, non-math-equation way.
If Scott put that poorly, just say. I’m sure he’ll be very happy to retool his approach when he knows what he needs to retool.
Finally, concepts like the laws of physics are, as it were, just there or given; and these things are above all what makes nature reliable
It’s this reversal that haunts discussion.
No, the laws of physics do not make nature reliable.
It’s the staticness of nature that makes our laws of physics reliable/our knowledge reliable.
You’re going to repeat your reversal, of having your semantics (Atlas like) hold up the world rather than the other way around, many times before you are done with it.
* Soon enough Scott will have to give up ‘heuristic’ I think, despite its clinicism, given it’ll get all ‘meaning-weening’ in the eyes of others. Betraying the remnant meaning in it (that others will build castles from)
I don’t know how you can break it down further, but maybe my explanation wasn’t all that clear about the “undivided process”.
If we have a naturalistic explanation of reality, and so of the brain, then we solve the hard problem, meaning there’s no longer any dichotomy of body/mind-soul.
What does that mean? That instead of two sides there’s just one. That it’s all part of one process. But a process just executes and follows rules. There cannot be any agency in the execution of rules, only in their creation. Hence the abstraction of a creator, which would retain a hypothesis of agency.
But since we cannot go there, we stay on our level: with no discrete agency, and so no gifting from something to something else, everything is on one undivided side. No individuality of parts.
All this fits the romantic idea perfectly. The idea of the human being banished from “heaven”. Like away from home, stranded. Separated from god. In hell. Damnation and so on. The fact that perception forces us onto an individual level, and so makes us suffer the separation. It all starts from the original dichotomy.
All this romantic stuff describes the more rigorous stuff I laid out above. Which is why I used to say that religion sometimes describes the same thing, just with metaphoric language instead of rigorous, precise, scientific language.
But, and here there’s the metaphysical leap that Scott would refuse outright, could the imposition of boundaries on a deterministic process generate a form of agency?
I think so. Perspective creates agency. This agency is as true as our perception of reality. Our existence rules this in, not out.
Which is my claim that free will is actually compatible with a completely deterministic world. Not because agency can “exist” within a deterministic world, but because perspective and boundaries CREATE a form of agency that is as true as we can accept.
I’m still waiting for Scott to actually ENGAGE with this argument. But he always dodges it 😉
Which is my claim that free will is actually compatible with a completely deterministic world. Not because agency can “exist” within a deterministic world, but because perspective and boundaries CREATE a form of agency that is as true as we can accept.
It sounds like Benjamin Cain’s stuff, where he uses about five words which imply physical existence for every one word that implies ‘true as we can accept’… i.e., words which obliquely acknowledge just entering make-believe.
But in this case the words I use are not founded in abstract semantics.
If I speak about the imposition of boundaries it’s about operations on a formal system. I haven’t switched the language.
It’s not about make believe, but the amount of information you can access formally. If you can access only a part of information, then you are limited to a point of view, so boundaries, and so the application of free will (because you have a perspective).
There are problematic aspects about this, but they aren’t about semantics. I can actually provide a counter-argument to all this, but it only works if I tap into more metaphysics. If we stay strict then what I said is true.
Bakker didn’t want to engage with this merely because it’s speculative, not because it’s formally incorrect. He just doesn’t care to speculate on this stuff.
If I speak about the imposition of boundaries it’s about operations on a formal system.
That’s abstract semantics. There is no imposition. Or boundaries. Or formal. Or system.
More unexplained explainers, to use the local parlance.
This is why I said it can be broken down further.
No, the theme is abstract, the semantics are rigorous.
What I’m doing is not different than “shut up and calculate”. Using the same tools we’ve been using, like information theory.
The problem is not that. The problem is, like with “shut up and calculate”, that some aspects cannot be verified, and so arguably become speculation and outside science.
If I’m simply using math to calculate things, then I’m using a tool that has been proved correct to figure out lots of things. Many of these are also verifiable. But some of them are not, even if I’m still rigorously using math and we trust it.
That’s what I did with what I wrote above. I’m using a rigorous system, free of opinion and vagueness. But those conclusions are only hypotheses and cannot be verified.
The idea of the possibility of Free Will exists in a scenario where there are various degrees of freedom, and limited knowledge at various degrees. This is strictly the description of a formal system. One where you can or cannot access all available information, just as our consciousness can or cannot access all the information in the brain.
The only difference is that “accessing all information” becomes an impossible abstraction. Which is what happens with “science” as a concept.
Science itself, as a principle, is an abstraction that is not verifiable, and relies on notions that are not verifiable.
Until you start speaking physics, no, the terms aren’t rigorous.
There are no ‘formal system’ molecules. There are no imposition molecules. No boundary molecules.
These are all just reflections of the physical structure of the life forms that groped forward and did not die.
Oh, come on, don’t argue for the sake of arguing.
The “physical structure” and “molecules” still exist because there’s a formal system behind them, laws, that operate on them. The formal system is the basis, unless you believe there’s instead an arbitrary system that changes rules on the fly.
And of course molecules have boundaries and limits.
You seem to be doing the same thing timocrates did (my reply to him) – having the idea semantics makes matter work. You say it yourself – “The “physical structure” and “molecules” still exist because there’s behind a formal system”
No, the formal system is just our cartoon attempt to grasp what is going on.
But you’re putting it as if they exist because of the cartoon, rather than the cartoon being crudely drawn and interacted with because of (as our way of coping with) what already exists.
Then you use the idea the universe exists because of a ‘formal system’ to, presumably, back your idea of imposition of boundaries.
All based on confusing how you model the world as being the actual world. You say as much here “The formal system is the basis, unless you believe there’s instead an arbitrary system that changes rules on the fly.”
The formal system you refer to is just the cartoon we use to second guess the universe. But to you it is THE basis – not a hypothetical idea of what holds up the world, but the actual Atlas involved.
Then you’re blurring the lines between that cartoon and the imposition of boundaries, as if cartoons running into cartoons counts as something more than just more cartoons.
I guess my invocation of ‘physics’ would have just triggered your commitment that your idea of ‘physical structures’ is somehow involved in physical structures.
Possibly, like timocrates can avoid thinking ‘a thought is a thought’ for lack of recursive training, I might sound stupid at that point – you’ll think ‘physical structure is involved in physical structure!’
But that’s why I used scare quotes around it. To show it’s just an idea. Just a cartoon. Unless you want to say the universe runs off cartoons?
Bother – I meant ‘like timocrates can’t avoid thinking’
You seem to make a mess of symbol and meaning.
I don’t use any cartoon, I use “formal system” as the abstract of whatever rules there are. A cartoon would be describing these rules, and the description would be a cartoon because it wouldn’t be precise. But that’s why I avoid a description and I use an abstraction to mean “whatever is there”.
“Formal” simply means that these rules are objective and fixed instead of arbitrary. It’s merely the hypothesis that the brain is connected to an independent reality out there.
You seem somewhat persuaded that your use of language is more correct. Maybe I just don’t understand you, but I find it baffling.
But whatever the case, your radical way of “you can’t speak of anything because it’s always imprecise” just confirms the whole point I was making: since we cannot know any other, more correct, reality, our own limited one creates the possibility of “relative truth”.
If truth can’t be superseded, then it’s an absolute one.
Truth implies something verifiable. Basically: Free Will exists and is present because we’ll never be able to prove it doesn’t. It exists relatively to us, and, because of certain rules, we’ll never explain it out as Bakker says.
It didn’t happen until now, and it won’t happen tomorrow, or in a far future.
Your understanding of how things are assumed to work works out fine for the most part when living life. But your ‘imposition of boundaries’ stuff treats those assumptions of how things work as if they are actually the very stuff of how things work. Even if you’re dealing with a xerox of the rules of the universe, it doesn’t matter if you crumple up the xerox copy and the boundaries imposed within the crumple – you’re just crushing up a copy. It doesn’t make your ‘imposition of boundaries’ part of the universe. But you’re trying to sell it as if it is part of the world.
Truth implies something verifiable. Basically: Free Will exists and is present because we’ll never be able to prove it doesn’t.
And this rock keeps away dragons because you’ll never be able to prove it doesn’t.
This isn’t an honest way to deal with your fellow man.
And this rock keeps away dragons because you’ll never be able to prove it doesn’t.
Revert this silly example and I’ll agree with it.
That line is problematic because you’d have to prove the existence of dragons, and then be sure the rock is effective.
It means the line feels absurd because it applies to a scenario that doesn’t exist and won’t be tested. In the case I was describing it’s the opposite, I’m stating the truth of something that is true and useful /right now/, and that can’t be disproved later.
You might not be satisfied with a relative truth compared to an absolute one, but it’s just your moral choice to make. You’d rather believe in a world without up and down just because you’d argue ontology until the end.
It’s the same as saying that to make a choice you need to think for a thousand years, but nope, you don’t have as much and you have to decide now. That’s a relative choice/truth. An infinity of time is not something you’re allowed to rely on.
In the case I was describing it’s the opposite, I’m stating the truth of something that is true and useful /right now/, and that can’t be disproved later.
From over here it looks like pure circular logic. One moment you’re saying the truth of something that is true right now. The next you’re saying ‘Free Will exists and is present because we’ll never be able to prove it doesn’t.’
Folk can’t prove it doesn’t exist, so it exists. It exists, so you’re talking about something that is true. It’s true, and folk can’t prove it doesn’t exist…
That seems to summarise your position. And that’s the circular logic I’m seeing.
You’d rather believe in a world without up and down just because you’d argue ontology until the end.
It’s about me? I’m flattered. I wish it hinged on me.
My whole argument is that it doesn’t hinge on anything ‘human’. Let alone a particular individual.
But you seem to focus just on me, as if all that is involved is another human being.
You might not be satisfied with a relative truth compared to an absolute one
I’m not seeing qualifiers like ‘relative’ in your descriptions. And ‘make believe’ isn’t anything you wanted to accept. This is my main problem with what you’re saying and, like I said, it’s similar to Ben Cain: using four factual things for every one made-up thing, but making no distinction between them. Did you want to buy this rock or not?
I’m not seeing qualifiers like ‘relative’ in your descriptions.
What? It’s all about that. The idea of Free Will I say “exists”, it does exist on the basis it’s a relative Free Will. Relative, partial access to information, and so a limit. Same as consciousness can access only a small amount of the processes going on in the brain.
That small amount means there’s a relative truth. So relative Free Will.
This relativity is falsified when it’s superseded by knowledge. The moment you know more the previous stance becomes invalid, and so it becomes false.
But what happens when that place can’t be reached and you cannot reach new information that invalidates a position? Relativity becomes an absolute. There’s not something more complete, and so the relativity is complete, not partial.
One moment you’re saying the truth of something that is true right now. The next you’re saying ‘Free Will exists and is present because we’ll never be able to prove it doesn’t.
Where’s the contradiction? Something that is true right now is something that I say exists.
The fact that you can’t prove it wrong simply means it’s true now and will not get invalidated even later on. So it creates a scenario where the relative truth is the ONLY truth, since you can’t get to a deeper one.
And the simple idea of an absolute truth is the biggest cartoon of all. The biggest abstraction. Which is why I consider silly the appeals to ontology you make. Ontology of this kind is deeply metaphysical and completely unmotivated.
Because it all seems the same as the subject of the Meaning Fetishism post. You’re basically saying people will accept money for goods – nobody can prove they won’t. So money has relative intrinsic value, you’re saying.
AFAICT it’s either just wallpapering over the facts of the matter or you genuinely believe, so to speak, in the intrinsic value of rectangular paper. Of free will.
Yeah, I have a moral concern with this. But I get that I’m not getting through with the technical description. So looking past the moral concern for now, this is some critical feedback about your model. Maybe it doesn’t make sense, or maybe it just doesn’t make sense for you at the moment. I’ll humour that it maybe makes no sense (ever) if you’ll humour that maybe it’s just not making sense for you at the moment. Only maybe, not definite. So that can be a kind of positive end to finish on (or you can propose some other ending, of course).
Because it all seems the same as the subject of the Meaning Fetishism post. You’re basically saying people will accept money for goods – nobody can prove they wont. So money has relative intrinsic value, you’re saying.
…No.
Even that post posits a “systematic understanding” that lifts perception to a different level. You always need a bigger picture available to judge the small one invalid.
But in this case there’s no bigger picture, beside a completely theoretical and unachievable one.
“Intrinsic” implies there’s an extrinsic possibility. In this case there’s none.
You always need a bigger picture available to judge the small one invalid.
That’s a rule, is it?
It’s funny, I was just recently thinking how the soul might be defined as something that feels the world is the smaller circle that fits inside of it, when really it is the thing surrounded by and in the world/as the world.
It’s just a hardwired habit to always think one encapsulates the matter, rather than being the one encapsulated. And with the habit comes insisting that it’s a rule.
It’s just a hardwired habit to always think one encapsulates the matter, rather than being the one encapsulated.
“Rather than” how? You always evoke some vague elsewhere. Encapsulated by what?
All your rhetoric ends up with more and more thinner abstractions and undefined explanations.
There’s no metaphysical elsewhere to evoke, that’s all.
There’s no encapsulation, because encapsulation has its basis in an outside to divide from an inside. That’s why, way above, I explained the dichotomy BEING / KNOWING.
Metaphysics – how a skull can surround an entire universe.
I’m genuinely thinking of making a flow chart of your argument. To show how you combo ‘my teapot orbiting mars exists until you prove otherwise’ with the following quote:
You always need a bigger picture available to judge the small one invalid.
Not a flowchart as a mockery, but to show how it looks to me and how the arrows loop around and maybe for you to point out what I’m missing in the chart.
On the other hand, I’m lazy, so I’m batting the shuttlecock idea into the air to see if it comes back from the other side of the net.
It literally boils down to this:
We live in a world of absolute illusion, but the illusion is recognized as illusory only when you are able to cross the boundary and look back. Same as when you wake from a dream and declare you had a dream, but couldn’t tell it was so while dreaming. If the boundary cannot be crossed, then the illusion is truth.
A truth is relative to a context, and we have an absolute truth only as a theoretical, metaphysical abstraction.
Pulling the ladder up after ye. How do you know you’re dealing with an illusion then, if the only way to see such is to cross the boundary and look back? Especially when it’s an absolute illusion, as you say?
I honestly think you should have taken up the money analogy “Look, it’s all made up, but you’ll find people are still gunna give you food for scraps of paper – freakin’ have a little faith in that, Callan!”. That sells me more.
But you call it an illusion, then say illusions can’t be known except from the outside – thus pulling the ladder up after yourself after having already defined it an illusion. It both IS an illusion, but don’t question it because you can’t question it because you can only see an illusion from the outside.
Why not buy it all as being absolute down-to-earth fact rather than illusion, then? Because how do you know it’s illusion – did you go outside its boundary to find that out? But you can’t, you say, but you say it is, etc.
What if your dreams are actually your waking life and now is the dream?
Why not flip that around at this point, when we talk of unknowable illusions (as if we know??)?
Pulling the ladder up after ye. How do you know you’re dealing with an illusion then, if the only way to see such is to cross the boundary and look back? Especially when it’s an absolute illusion, as you say?
…There is such a thing as agnosticism?
I postulated the illusion to show you it makes no difference. But we cannot know if it’s an illusion or not. YOU DON’T KNOW.
It both IS an illusion, but don’t question it because you can’t question it because you can only see an illusion from the outside.
No, it’s NOT an illusion. It MIGHT be one, but you don’t know. Alright?
You can question it, but you cannot have an answer.
And so the result is a relative truth. A truth that is true as long as its context is valid. And since THIS context is permanent, I say, the relative truth is all we have, since we cannot achieve a deeper one, EVEN if a deeper one MIGHT exist.
So, this MIGHT all be illusory, but since we’ll never know, for us the relative truth becomes an absolute one.
For some god-like entity that has crossed the boundary and looks back at us, we would look like zombies living an illusory life, but as long as you don’t believe we GET TO BECOME god-like, then this higher existence is only an abstract possibility, not something that is part of our life, and so part of our relative truth.
I postulated the illusion to show you it makes no difference. But we cannot know if it’s an illusion or not. YOU DON’T KNOW.
How do you know I don’t know? What if I say ‘YOU DON’T KNOW I DON’T KNOW.’. Who comes out on top? The most capslockiest?
This is what you get when you pull the ladder up after you – you’re using raw assertion!
With the audience you’re dealing with/that I come from, I trust that scientific practice can obliquely show us the illusions (if not all at the same time – which is an important distinction to make). That’s why I accept the idea there’s lots of illusion/it’s almost all illusion. Because a tiny sliver of it, through trusting scientific prosthetics, actually is the case.
But you’re working from a different ‘base’ – that it’s all illusion because…you say so. You don’t appeal to me to trust scientific practice…you just appeal to me to trust your word on that (even as you say illusion can’t be seen except from the outside – which would apply to you as well, presumably. So begging the question ‘How do you know?’)
I don’t just assume it’s all illusion – I’m coming from an entirely different trust: one in scientific practice (and, within that, its findings in cognitive science). I don’t buy your ‘it’s all illusion’ angle, when what it rests on is raw assertion all the way down. Even the Scylvendi worked off the idea that the stars were just holes in a tent, as flawed as that observation was, rather than just raw assertion that the world is a lie. What do you work from?
You could, like Jim Butcher, have a go at my trusting scientific practice, but that’d be more another topic.
Given? The laws of physics required one of the greatest experimental expenditures in human history to discover. So you think metacognition is magic? That it somehow grasps the essences of all these things you guys can never agree on, without relying on the caprice of biology and evolution?
Or are you simply playing on a sense of ‘blindness’ that doesn’t admit degrees?
Shoot free throws. It takes a great deal of effort to make 90% from the line. It takes thousands of practice rounds before you can consistently hit a man-sized target at 500 meters. If you don’t accept those as natural acts, surely we can agree that throwing rocks is a natural act, the old-school equivalent of shooting a basketball or a rifle. It takes a great deal of rock-throwing practice, meaning a great deal of time and a great deal of effort, before a caveman can reliably throw a rock accurately enough to kill a small mammal.
Dude… WHERE IS THE UNHOLY CONSULT???
Exactly.
Care to weigh-in on this discussion, Bakker?
Seems like the thread for converting grimdark to grimfun
Reddit user Wolfdrop made some Earwan heraldry:
http://imgur.com/a/s7zX1
Some music for the Siege of Shimeh:
Earwan humor thread:
http://www.second-apocalypse.com/index.php?topic=1345.0
Bit off topic, but I’ve been waiting for this blog to have a stab at a sort of Amish direction that might occur. Like, we all laugh at the Amish – they essentially locked down their culture to a certain historical period. But given all the technologically enforced social changes, and how they might be pretty dang inhuman, how many other people are going to look at freezing time as well – not to the Amish time period, but probably at about this era (though what they do when immortality gets unlocked… that’ll be a culture breaker). Second guessing what upcoming cultural trends will be made fun of probably means they won’t be made fun of (since, I guess, the urge to make fun is being observed rather than being pure observer), but hey, along with the book I’ve been waiting for a stab at neo-Amish cultural movements! 🙂