Three Pound Brain

No bells, just whistling in the dark…

Month: September, 2015

How Science Reveals the Limits of ‘Nooaesthetics’ (A Reply to Alva Noë)

by rsbakker

As a full-time artist (novelist) who long ago gave up on the ability of traditional aesthetics (or as I’ll refer to it here, ‘nooaesthetics’) to do much more than recontextualize art in ways that yoke it to different ingroup agendas, I find the ongoing war between the sciences and the scholarly traditions of the human profoundly exciting. The old, perpetually underdetermined convolutions are in the process of being swept away—and good riddance! Alva Noë, however, sees things differently.

So much of rhetoric turns on asking only those questions that flatter your view. And far too often, this amounts to asking the wrong questions, in particular, those questions that only point your way. All the other questions you pass over in strategic silence. Noë provides a classic example of this tactic in “How Art Reveals the Limits of Neuroscience,” his recent critique of ‘neuroaesthetics’ in The Chronicle of Higher Education.

So for instance, it seems pretty clear that art is a human activity, a quintessentially human activity according to some. As a human activity, it seems pretty clear that our understanding of art turns on our understanding of humanity. As it turns out, we find ourselves in the early stages of the most radical revolution in our understanding of the human ever… Period. So it stands to reason that a revolution in our understanding of the human will amount to a revolution in our understanding of human activities—such as art.

The problem with revolutions, of course, is that they involve the overthrow of entrenched authorities, those invested in the old claims and the old ways of doing business. This is why revolutions always give rise to apologists, to individuals possessing the rhetorical means of rationalizing the old ways, while delegitimizing the new.

Noë, in this context at least, is pretty clearly the apologist, applying words as poultices, ways to soothe those who confuse old, obsolete necessities with absolute ones. He could have framed his critique of neuroaesthetics in this more comprehensive light, but that would have the unwelcome effect of raising other questions, the kind that reveal the poverty of the case he assembles. The fact is, for all the purported shortcomings of neuroaesthetics he considers, he utterly fails to explain why ‘nooaesthetics,’ the analysis, interpretation, and evaluation of art using the resources of the tradition, is any better.

The problem, as Noë sees it, runs as follows:

“The basic problem with the brain theory of art is that neuroscience continues to be straitjacketed by an ideology about what we are. Each of us, according to this ideology, is a brain in a vat of flesh and bone, or, to change the image, we are like submariners in a windowless craft (the body) afloat in a dark ocean of energy (the world). We know nothing of what there is around us except what shows up on our internal screens.”

As a description of parts of neuroscience, this is certainly the case. But as a high-profile spokesperson for enactive cognition, Noë knows full well that the representational paradigm is a fiercely debated one in the cognitive sciences. But it suits his rhetorical purposes to choose the most theoretically ill-equipped foes, because, as we shall see, his theoretical equipment isn’t all that capable either.

As a one-time Heideggerean, I recognize Noë’s tactics as my own from way back when: charge your opponent with presupposing some ‘problematic ontological assumption,’ then show how this or that cognitive register is distorted by said assumption. Among the most venerable of those problematic assumptions has to be the charge of ‘Cartesianism,’ one that has become so overdetermined as to be meaningless without some kind of qualification. Noë describes his understanding as follows:

“Crucially, this picture — you are your brain; the body is the brain’s vessel; the world, including other people, are unknowable stimuli, sources of irradiation of the nervous system — is not one of neuroscience’s findings. It is rather something that has been taken for granted by neuroscience from the start: Descartes’s conception with a materialist makeover.”

In cognitive science circles, Noë is notorious for the breezy way he consigns cognitive scientists to his ‘Cartesian box.’ As a fellow anti-representationalist, I often find his disregard for the nuances raised by his detractors troubling. Consider:

“Careful work on the conceptual foundations of cognitive neuroscience has questioned the plausibility of straightforward mind-brain reduction. But many neuroscientists, even those not working on such grand issues as the nature of consciousness, art, and love, are committed to a single proposition that is, in fact, tantamount to a Cartesian idea they might be embarrassed to endorse outright. The momentous proposition is this: Every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in your brain. We may not know how the brain manages this feat, but, so it is said, we are beginning to understand. And this new knowledge — of how the organization of bits of matter inside your head can be your personality, thoughts, understanding, wonderings, religious or sexual impulses — is surely among the most exciting and important in all of science, or so it is claimed.”

I hate to say it, but this is a mischaracterization. One has to remember that before cognitive science, theory was all we had when it came to the human. Guesswork, profound to the extent that we consider ourselves profound, but guesswork all the same. Cognitive science, in its many-pronged attempt to scientifically explain the human, has inherited all this guesswork. What Noë calls ‘careful work’ simply refers to his brand of guesswork, enactive cognition, and its concerns, like the question of how the ‘mind’ is related to the ‘brain,’ are as old as the hills. ‘Straightforward mind-brain reduction,’ as he calls it, has always been questioned. This mystery is a bullet that everyone in the cognitive sciences bites in some way or another. The ‘momentous proposition’ that the majority of neuroscientists assume isn’t that “[e]very thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in [our] brain,” but rather that every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition involves our brain. Noë’s Cartesian box assumption is nowhere near so simple or so pervasive as he would have you believe.

He knows this, of course, which is why he devotes the next paragraph to dispatching those scientists who want (like Noë himself does, ultimately) to have it both ways. He needs his Cartesian box to better frame the contest in clear-cut ‘us against them’ terms. The fact that cognitive science is a muddle of theoretical dissension—and moreover, that it knows as much—simply does not serve his tradition-redeeming narrative. So you find him claiming:

“The concern of science, humanities, and art, is, or ought to be, the active life of the whole, embodied, environmentally and socially situated animal. The brain is necessary for human life and consciousness. But it can’t be the whole story. Our lives do not unfold in our brains. Instead of thinking of the Creator Brain that builds up the virtual world in which we find ourselves in our heads, think of the brain’s job as enabling us to achieve access to the places where we find ourselves and the stuff we share those places with.”

These, of course, are platitudes. In philosophical debates, when representationalists critique proponents of embodied or enactive cognition like Noë, they always begin by pointing out their agreement with claims like these. They entirely agree that environments condition experience, but disagree (given ‘environmentally off-line’ phenomena such as mental imagery or dreams) that they are directly constitutive of experience. The scientific view is de facto a situated view, a view committed to understanding natural systems in context, as contingent products of their environments. As it turns out, the best way to do this involves looking at these systems mechanically, not in any ‘clockwork’ deterministic sense, but in the far richer sense revealed by the life sciences. To understand how a natural system fits into its environment, we need to understand it, statistically if not precisely, as a component of larger systems. The only way to do this is to figure out how, as a matter of fact, it works, which is to say, to understand its own components. And it just so happens that the brain is the most complicated machine we have ever encountered.

The overarching concern of science is always the whole; it just so happens that the study of minutiae is crucial to understanding the whole. Does this lead to institutional myopia? Of course it does. Scientists are human like anyone else, every bit as prone to map local concerns onto global ones. The same goes for English professors and art critics and novelists and Noë. The difference, of course, is the kind of cognitive authority possessed by scientists. Where the artistic decisions I make as a novelist can potentially enrich lives, discoveries in science can also save them, perhaps even create new forms of life altogether.

Science is bloody powerful. This, ultimately, is what makes the revolution in our human self-understanding out and out inevitable. Scientific theory, unlike theory elsewhere, commands consensus, because scientific theory, unlike theory elsewhere, reliably provides us with direct power over ourselves and our environments. Scientific understanding, when genuine, cannot but revolutionize. Nooaesthetic understanding, like religious or philosophical understanding, simply has no way of arbitrating its theoretical claims. It is, compared to science at least, toothless.

And it always has been. Only the absence of any real scientific understanding of the human has allowed us to pretend otherwise all these years, to think our armchair theory games were more than mere games. And that’s changing.

So of course it makes sense to be wary of scientific myopia, especially given what science has taught us about our cognitive foibles. Humans oversimplify, and science, like art and traditional aesthetics, is a human enterprise. The difference is that science, unlike traditional aesthetics, revolutionizes our collective understanding of ourselves and the world.

The very reason we need to guard against scientific myopia, in other words, is also the very reason why science is doomed to revolutionize the aesthetic. We need to be wary of things like Cartesian thinking simply because it really is the case that our every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition turns on our biology in some fundamental respect. The only real question is how.

But Noë is making a far different and far less plausible claim: that contemporary neuroscience has no place in aesthetics.

“Neuroscience is too individual, too internal, too representational, too idealistic, and too antirealistic to be a suitable technique for studying art. Art isn’t really a phenomenon at all, not in the sense that photosynthesis or eyesight are phenomena that stand in need of explanation. Art is, rather, a mode of investigation, a style of research, into what we are. Art also gives us an opportunity to observe ourselves in the act of knowing the world.”

The reason for this, Noë is quick to point out, isn’t that the sciences of the human have nothing important to say about a human activity such as art—of course they do—but that “neuroscience has failed to frame a plausible conception of human nature and experience.”

Neuroscience, in other words, possesses no solution to the mind-body problem. Like biology before the institutionalization of evolution, cognitive science lacks the theoretical framework required to unify the myriad phenomena of the human. But then, so does Noë, who only has philosophy to throw at the problem, philosophy that, by his own admission, neuroscience does not find all that compelling.

Which at last frames the question of neuroaesthetics the way Noë should have framed it in the beginning. Say we agree with Noë, and decide that neuroaesthetics has no place in art criticism. Okay, so what does? The possibility that neuroaesthetics ‘gets art wrong’ tells us nothing about the ability of nooaesthetics, traditional art criticism turning on folk-psychological idioms, to get art right. After all, the fact that science has overthrown every single traditional domain of speculation it has encountered strongly suggests that nooaesthetics has got art wrong as well. What grounds do we have for assuming that, in this one domain at least, our guesswork has managed to get things right? As in any other domain of traditional speculation on the human, theorists can’t even formulate their explananda in a consensus-commanding way, let alone explain them. Noë can confidently claim to know ‘What Art Is’ if he wants, but ultimately he’s taking a very high number in a very long line at a wicket that, for all anyone knows, has always been closed.

The fact is, despite all the verbiage Noë has provided, it seems pretty clear that neuroaesthetics—even if inevitably myopic in this, the age of its infancy—will play an ever more important role in our understanding of art, and that the nooaesthetic conceits of our past will correspondingly dwindle ever further into the mists of prescientific fable and myth.

As this artist thinks they should.

Anarcho-ecologies and the Problem of Transhumanism

by rsbakker

So a couple weeks back I posed the Augmentation Paradox:

The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.

I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’

Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. Now we know that such an ecology is possible because we know that heuristics solve systems by exploiting cues possessing stable differential relations to those systems. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or perhaps, to be more precise (and apropos this post), the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to the target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
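
To see the dependence in miniature, here is a minimal simulation (a sketch with invented parameters, not a model of any actual heuristic) of a cue-reading heuristic degrading as its ecology grows more variable. The heuristic never tracks its target directly; it only reads a cue that was, ancestrally, reliably coupled to the target. ‘Drift’ stands in for the rate at which that coupling breaks down:

```python
import random

random.seed(0)

def heuristic_accuracy(drift, n_trials=10_000):
    """Estimate a cue-reading heuristic's accuracy at a given level of
    ecological drift. Ancestrally, cue and target agree 95% of the time;
    drift replaces a growing fraction of cues with noise, dissolving the
    stable differential relation the heuristic depends on."""
    correct = 0
    for _ in range(n_trials):
        target = random.random() < 0.5       # the fact to be solved
        if random.random() < drift:
            cue = random.random() < 0.5      # cue decoupled from target
        else:
            cue = target if random.random() < 0.95 else not target
        correct += (cue == target)           # the heuristic: trust the cue
    return correct / n_trials

for drift in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"drift={drift:.2f}  accuracy={heuristic_accuracy(drift):.3f}")
```

At zero drift the heuristic performs about as well as its ancestral calibration allows; at total drift it performs at chance, which is what crash space becoming absolute amounts to here.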

I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.

The argument, as I’ve been posing it, looks like this:

1) Heuristic cognition depends on stable, taken-for-granted backgrounds.

2) Intentional cognition is heuristic cognition.

/3) Intentional cognition depends on stable, taken-for-granted backgrounds.

4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.

/5) Transhumanism entails the collapse of intentional cognition.

Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.

Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction–chemical, acoustic, and tactile communicative protocols–between individuals scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols–different chemicals, different sounds, different sensitivities–it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.
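
The collapse can be made vivid with a toy model (nothing here is ant biology; all numbers are invented). Agents solve tasks by recruiting help with a signal token; when everyone shares the protocol, recruitment always succeeds, and when every agent’s token is idiosyncratic, it never does:

```python
import random

random.seed(1)

def colony_success(n_ants=100, n_tasks=200, shared_protocol=True):
    """Toy model: each task needs two ants. A discoverer emits a 'help'
    token; a responder assists only if the token matches the one it is
    wired to recognize. Each ant recognizes the same token it emits."""
    emit = {ant: 0 if shared_protocol else ant for ant in range(n_ants)}
    recognize = dict(emit)
    solved = 0
    for _ in range(n_tasks):
        discoverer, responder = random.sample(range(n_ants), 2)
        if emit[discoverer] == recognize[responder]:
            solved += 1
    return solved / n_tasks

print("shared protocol:        ", colony_success(shared_protocol=True))
print("idiosyncratic protocols:", colony_success(shared_protocol=False))
```

With a shared protocol every task is solved; with idiosyncratic protocols none are, because no ant’s signal means anything to any other ant. The intrasystematic cohesion just is the protocol.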

Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails, they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.

So much for AAAT.

Except that it entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women); it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities–or so they hope. And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.

To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain, in this respect, ant-like blind problem-solvers. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.

Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.

But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…

Because as it turns out, ‘human’ is a heuristic construct through and through.


Akrasis

by rsbakker

Akrasis (or, social akrasis) refers to the technologically driven socio-economic process, already underway at the beginning of the 20th century, which would eventually lead to Choir.

Where critics in the early 21st century continued to decry the myriad cruelties of the capitalist system, they failed to grasp the greater peril hidden in the way capitalism panders to human yens. Quick to exploit the discoveries arising out of cognitive science, market economies spontaneously retooled to ever more effectively cue and service consumer demand, eventually reconfiguring the relation between buyer and seller into subpersonal circuits (triggering the notorious shift to ‘whim marketing,’ the data tracking of ‘desires’ independent of the individuals hosting them). The ecological nature of human cognition all but assured the mass manipulative character of this transformation. The human dependency on proximal information to cue what amount to ancestral guesses regarding the nature of their social and natural environments provided sellers with countless ways to game human decision making. The global economy was gradually reorganized to optimize what amounted to human cognitive shortcomings. We became our own parasite.

Just as technological transformation (in particular, the scaling of AI) began crashing the utility of our heuristic modes of meaning making, it began to provide virtual surrogates, ways to enable the exercise of otherwise unreliable cognitive capacities. In other words, even as the world became ever more inhuman, our environments became ever more anthropomorphic, ever more ‘smart’ and ‘immersive.’ Thus ‘akrasis,’ the ancient term referring to the state of acting against one’s judgment, which here describes a society acting against the human capacity to judge altogether, a society bent upon the systematic substitution of simulated autonomy for actual autonomy.

Humans, after all, have evolved to leverage the signals of select upstream interventions, assuming them reliable components of their environments. Once we developed the capacity to hack those signals, the world effectively became a drug.

Akrasis has a long history, as long as life itself, according to certain theories. Before the 21st century, the process appeared ‘enlightening,’ but only because the limitations of the technologies involved (painting, literacy, etc.) rendered the resulting transformations manageable. But the rate of transformation continued to accelerate, while the human capacity to adapt remained constant. The outcome was inevitable. As the bandwidth of our interventions approached then surpassed the bandwidth of our central nervous systems, the simulation of meaning became the measure of meaning. Our very frame of reference had been engulfed. For billions, the only obvious direction of success—the direction of ‘cognitive comfort’—lay away from the world and into technology. So they defected in their billions, embracing signals, environments, manufactured entirely from predatory code. Culture became indistinguishable from cheat space—as did, for those embracing virtual fitness indicators, experience itself.

By 2050, we had become an advanced akratic civilization, a species whose ancestral modes of meaning-making had been utterly compromised. Art was an early casualty, though decades would be required to recognize as much. Fantasy, after all, was encouraged in all forms, especially those, like art or religion, laying claim to obsolete authority gradients. To believe in art was to display market vulnerabilities, or to be so poor as to be insignificant. No different than believing in God.

Social akrasis is now generally regarded as a thermodynamic process intrinsic to life, the mechanical outcome of biology falling within the behavioural purview of biology. Numerous simulations have demonstrated that ‘outcome convergent’ or ‘optimizing’ systems, once provided the base capacity required to extract excess capacity from their environments, will simply bootstrap until they reach a point where the system detaches from its environment altogether, begins converging upon the signal of some environmental outcome, rather than any actual environmental outcome.

Thus the famous ‘Junkie Solution’ to Fermi’s Paradox (as recently confirmed by the Gala Semantic Supercomputer at MIT).

And thus Choir.

The Augmentation Paradox

by rsbakker

So, thanks to the great discussion on the ‘Knowledge of Wisdom Paradox,’ here’s a sharper way to characterize the ecological stakes of the posthuman:

The Augmentation Paradox: The more you ‘improve’ some ancestral capacity, the more you degrade all ancestral capacities turning on the ancestral form of that capacity.

It’s not a paradox in the formal sense, of course. Also note that the dependency between ancestral capacities can be a dependency within or between individuals. Imagine a ‘confabulation detector,’ a device that shuts down your verbal reporting system whenever the neural signature of confabulation is detected, effectively freeing you from the dream world we all inhabit, while exiling you from all social activities requiring confabulation (you now trigger ‘linguistic pause’ alerts), and perhaps dooming you to suffer debilitating depression.

It seems to me that something like this has to be floating around somewhere–in debates regarding transhumanism especially. If almost all artificial augmentations entail natural degradations, then the question becomes one of what is gained overall. One can imagine, for instance, certain capacities degrading gracefully, while others (like the socio-cognitive capacities of those conned by Ashley Madison bots) collapse catastrophically. So the question has to be, What guarantee do we have that augmentations will recoup degradations?
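
The confabulation detector makes the mechanism easy to simulate. Here is a minimal sketch (invented numbers throughout): an ancestral ‘reporter’ that systematically confabulates in certain contexts, and a downstream ‘reader’ calibrated to distrust reports in exactly those contexts. Augmenting the reporter improves its accuracy while silently breaking the reader’s calibration:

```python
import random

random.seed(2)

def trial(augmented, n=10_000):
    """Toy dependency between two capacities. The ancestral reporter
    confabulates (inverts the truth) in ~30% of contexts; the reader,
    calibrated against that ancestral signature, flips reports in those
    contexts. Augmentation makes the reporter perfectly accurate."""
    reporter_ok = reader_ok = 0
    for _ in range(n):
        truth = random.random() < 0.5
        confab_context = random.random() < 0.3
        if augmented or not confab_context:
            report = truth                 # accurate report
        else:
            report = not truth             # ancestral confabulation
        reporter_ok += (report == truth)
        # The reader's ancestral rule: distrust reports in confab contexts.
        reading = (not report) if confab_context else report
        reader_ok += (reading == truth)
    return reporter_ok / n, reader_ok / n

for augmented in (False, True):
    rep, read = trial(augmented)
    print(f"augmented={augmented!s:5}  reporter={rep:.2f}  reader={read:.2f}")
```

Unaugmented, the reporter runs at about 70% while the reader, exploiting the ancestral error signature, runs at 100%. Augmented, the reporter is perfect and the reader falls to about 70%: improving one capacity degrades every capacity turning on its ancestral form.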

The point being, of course, that we’re not tinkering with cognitive technologies on the ground so much as on the 115th floor. It’s 3.8 billion years down!

Either way, the plausibility of the transhumanist project pretty clearly depends on somehow resolving the Augmentation Paradox in their favour.

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario–but “The inherence heuristic” is getting close, very close, especially the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” 464

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality, intentional cognition, relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve it at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ ‘knowledge’ to him, something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So, to give a brief example: once one understands the degree to which intentional idioms turn on ‘inherence heuristics’–ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems–one can understand the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, the fact that intentional cognition solves problems by neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is, ‘intentional stances’ leave us with a very low-dimensional understanding of our actual straits when it comes to understanding cognition–as we should expect, given that the stance utilizes a low-dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, and more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

The Knowledge of Wisdom Paradox

by rsbakker

Consider: We’ve evolved to solve environments using as little information as possible. This means we’ve evolved to solve environments ignoring as much information as possible. This means we’ve evolved to take as much of our environments for granted as possible. This means evolution has encoded an extraordinary amount of implicit knowledge into our cognitive systems. You could say that each and every one of us constitutes a kind of solution to an ‘evolutionary frame problem.’

Thus the ‘Knowledge of Wisdom Paradox.’ The more explicit knowledge we accumulate, the more we can environmentally intervene. The more we environmentally intervene, the more we change the taken-for-granted backgrounds. The more we change taken-for-granted backgrounds, the less reliable our implicit knowledge becomes.

In other words, the more robust/reliable our explicit knowledge tends to become, the less robust/reliable our implicit knowledge tends to become. Has anyone come across a version of this paradox anywhere? It actually strikes me as a very parsimonious way to make sense of how intelligence manages to make such idiots of some individuals. And its implications for our future are nothing if not profound.
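
The loop lends itself to a back-of-the-envelope recurrence. The following sketch (coefficients entirely invented, chosen only to display the shape of the dynamic) compounds explicit knowledge each generation, lets background drift scale with that knowledge, and decays implicit reliability with the drift:

```python
import math

# E: stock of explicit knowledge; R: reliability of implicit knowledge.
E, R = 1.0, 1.0
GROWTH, DRIFT_PER_E = 0.5, 0.02   # invented coefficients

for generation in range(8):
    print(f"gen {generation}: explicit={E:7.2f}  implicit reliability={R:.3f}")
    E *= 1 + GROWTH                    # explicit knowledge compounds
    R *= math.exp(-DRIFT_PER_E * E)    # interventions scale with E and
                                       # erode the taken-for-granted background
```

Explicit knowledge grows geometrically while implicit reliability decays ever faster, since the drift it suffers is driven by the very knowledge doing the growing.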