Intentional Philosophy as the Neuroscientific Explananda Problem
by rsbakker
The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity, various heuristic hacks. We lack not only the metacognitive bandwidth, but the metacognitive access required to formulate the explananda of neuroscientific investigation.
A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be both implicitly obvious and explicitly impenetrable to themselves past various degrees of questioning.
We’re just the next step.
What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.
Here’s what I think is a productive way to interpret this conundrum.
Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.
Crazy but true.
What neuroscience is after, of course, is a natural understanding of understanding, to peer into the black box. They want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus-commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.
People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.
The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.
The thing that can purge philosophy is the thing that can tell you what it is.
Reblogged this on NeuroDivergent and commented:
Sometimes I think R. Scott Bakker is the only one who “gets” me and the things I think.
Occasionally, when considering the problem of the problem, I wonder at the idea of metacognition through the framework of society/species as metabrain. The way we continually seem to refine a few very specific ideas over and over through the detritus produced in our search for meaning. Perhaps we’re the metadimensional equivalent of a cluster of interneurons responsible for a specific type of experiential processing, able only to do what we were shaped to do in ways we’re incapable of understanding, unable to even grasp our role in the larger schema.
Or perhaps we’re simply shrieking, terrified narrative stapled onto complex organic machinery.
Second-order blindness on the collective level will be every bit as trenchant as on the individual, for sure. I think one productive way of looking at Big Data is in terms of gaining a systematic grasp on what’s going on. The way to see the blindness isn’t as a blank field, but as the absence of any field altogether. Wherever social physical techniques prove more actionable, they will displace folk techniques, and the world will become both better known (more manipulable) and more alien to intuition. So the problem isn’t so much our ignorance of our ‘place,’ but the mandatory nature of the equipment we evolved to cope with that ignorance. Our growing knowledge of the brain is destroying the information ecology required for meaning heuristics to do real lifting. Information from the former jams the latter, and in so many ways, the latter is all we got.
why would “Wherever social physical techniques prove more actionable, they will displace folk techniques” be so?
Money, basically. We’re actually undergoing, as we speak, an organizational renovation as profound as that witnessed in the Enlightenment. Signals intelligence becomes social engineering.
hmm, i do management/project consulting with a lot of engineering and coding companies and university research labs; this sounds like accelerationist scifi to moi
Am I falling for the hype? Could be possible. What did you think of Sandy Pentland’s Social Physics?
I don’t see how big data is going to re-engineer individuals so they can reorganize themselves.
can’t tell you how many big data types I talk to can’t grasp that they are applying their tech/maths to the results of their own prejudices.
what bodily/neuro hacks would we need to undo our current cog-biases?
But since the results are driven by external data (rather than interpretations of content), the problem of blinders will always only be a liability, a cost of doing business as opposed to an insuperable barrier. Our biases need only be systematic to be heuristically manipulated, so we need not change to be efficiently compelled. In fact, it would probably be easier if we didn’t.
This shit has all but swallowed the marketplace whole, revolutionized it–do you really think the workplace is that far behind?
not sure I follow how you think the data is getting produced (junk in is still junk out) or better yet being employed/institutionalized.
also the nonsense about “social/network fields” and all should set off yer post-structuralist spider-senses.
was just at a conference with a prof @ Wallstreet’s school for day traders and things aren’t so different yet for many. and yeah, on the business front, think about what it would take to somehow get people to adjust to whatever data/nudges might be computed. if you look at the ubers and all who are trying to maximize work-hours/efficiency with sort of micro-scheduling, the personnel and liability issues (not to mention govt/regulatory pushback) are mounting daily. as far as I can tell it hasn’t significantly gotten any easier to get good ideas to spread, part of why the cyborg/robot dreams still hold so much sway for managers/planners.
But the point is that no data (so long as it is data) is junk, isn’t it? It may start out as useless, but as it accumulates, and as the learning algorithms evolve, more opportunities for heuristic gerrymandering become possible. You don’t need to understand anything to hack it. You just need to track it long enough, meticulously enough. You archive everything useful, wait for advances in technology to make hacking more and more complicated systems feasible. This seems pretty true to the way signals intelligence has evolved thus far: why suppose it will stop?
Otherwise, no po-mo hackles are raised because it’s all mechanical, which is to say, genuinely horrifying.
the other day i was meeting with some big data comp science folks and they were crunching numbers on a couple of medical treatment protocols, but they had decided what elements to register (of all the possible factors) and they were treating all the numbers as being representative/accurate, some of which were roughly mechanical measures like body temps, others as fuzzy as the scaling of treatment outcomes recorded by rehab therapists. so in other words, lots of junk in with the more reliable stuff. and even where the data was more solid/representative, the framing of the subject/phenomena is still being done by people, and then people have to make decisions about what if any changes to make, and then there are the (largely futile) attempts to get hospital administrators to take the numbers seriously, which doesn’t even get us into trying to change the large number of professions and protocols involved and so on. even when the MITish folks decide to track (say with smartphones) certain kinds of interactions/contacts (and not others), the machines can only crunch what is made available to them in the environment.
What is more interesting, but not largely operational, are the ways in which machines can start to comb thru data sets looking for signals that we haven’t anticipated, but even then we have chosen/framed the sets; they can’t really get out and about on their own.
Most big data right now in areas like e-sales is being used for flagging potential crashes of sites and/or of related supplies, like a kind of alarm system, if you will, of certain preset parameters; worth a lot of money but not so “smart”.
So the problem is basically implementation, and basically bureaucratic? The fact that they lack a meta-social physics! What I’m probably doing is mapping the in-principle omni-applicability a little too eagerly across what are, in fact, hopelessly messy ground floors? Does that sound right?
All this raises the very interesting question of traction, if you think about it, the kinds of base-line ecologies you need to bootstrap social-physical efficiencies (in spreading ideas or what have you).
well, 2 problems really. the first is data quality (not unlike fitness perhaps in evolution?): the machine doesn’t know/care if the numbers match up with the world (represent anything real), so matters/problems of framing still play out; how much of a system, loosely speaking in terms of human doings, does one have to account/calculate for? but yeah, the social ‘engineering’ is the big kicker and why folks like Stephen Turner still matter: what is the means/mechanism for encoding/co-ordinating behavior? you’d be amazed how difficult it is, even in super specialized settings like high-tech surgical theaters, to enforce protocols or otherwise change behaviors. see how much of production engineering is now pushing for mechanization, so how machine-like would we need to become to get in gear with this kind of research?
But the problems hitherto have to do with the ‘brittleness’ of conventional computation. More and more (and in short order, if you ask me), these systems (‘computer-assisted decision-making’ and the like) are going to behave much more organically, exhibit graceful degradation, real-time adaptability, and so on. The invasion of the factory floor has already begun (like Brooks’ latest project). They’ll still be two-dimensional, specialized, but all they need do is glean slivers of efficiency to rationalize their expense…
You and I are starting to bank a bunch of different side bets on technological capacities, Dirk!
ah yeah, well I’ve been involved one way or another with various aspects of cybernetics for going on 3 decades now, and I can appreciate the enthusiasm, but engineering/invention is bloody hard work with lots of dead-ends along the way, so time will tell. I think we are probably closer on the all-too-human aspects of social physics and related hopes for social engineering (so many of his assumptions about human-being/learning are pretty thin, reminds me of bogus social psychology on social “contagion”), so perhaps the big question is at what point will the differences between men and machines be negligible?
I still think the infrastructures (physical/economic/political/etc) for big science/engineering are on the brink…
could you give an example here scott? i only briefly saw a clip on some show on tv with pentland. i remember being genuinely disturbed but i didn’t see anything pertaining to an application or to it being used.
By isolating the pattern of communicative activity that most facilitated creative problem solving and using it as an organizational template he was apparently able to increase the productivity of a Bank of America call centre by 20% – this is the big one he writes and talks about. There’s other examples. But if you want a visceral sense, there’s an episode of Brain Games that features his team, if I remember aright. It set my skin a-tingle.
Since this particular science is just getting off the ground, there’s a good chance it will have us tied in more knots than we imagine sooner than we think.
divby0 http://socialphysics.media.mit.edu/blog/
How do they know what constitutes ‘creative problem solving’ before they know they’ve solved the problem? Or even before they know what the problem is? Sounds more like marketing spin.
‘Or perhaps we’re simply shrieking, terrified narrative stapled onto complex organic machinery.’
I love it. Having said that, there really does seem to me (and this seeming itself is both the best and worst of what I’ve got after cashing out BBT) that the ‘terrified narrative’ part of it can be not so much reasoned out into just ‘narrative’ as bludgeoned into meek acceptance and submission by cognitive override, the way you can train a stroke victim to regain use of a limb.
This brings me to the other part of your comment and the idea of framing metacognition in terms of society. A big part of me has always hoped that widespread education on how our minds work would help us navigate the ‘crash spaces’ not so much in ourselves (very hard if not impossible) as with and for one another, while at the same time I fear that in time we would see this same knowledge creating the very risk it warned us of. I’ll start to believe that, because I know these things, I must have won the magical belief lottery. Is there no escape from this escapism!?
Self-perpetuating hegemony masquerading as open ended experimentation. Dogmatic conviction proposing itself as critical thinking.
On another level, if BBT isn’t offering me a way out of this predetermined, mechanically induced delusion of agency, self, intention, etc., then who cares about it? At best all it’s gonna do is erode my native capacity for delusion, and that’s all I’ve got!!!
Well, I can always hope either way.
Is it really the growing knowledge that is the issue, or a lack of a particular boundary and unconsidered implementation outside that boundary? I’ll describe what I mean by comparing boardgames and roleplay games. With boardgames, the rules are generally sealed tight; there’s no out – play is inside the circle of the boardgame, and indeed play within that inside area is generally given the blessing of the game designer.
Roleplay games, however – it’s like there are plenty of gaps in that circle. Frankly, it’s like a sieve! And I’ve seen players, time and time again, cross that border and fall outside it (reading this or that text with an ambiguous interpretation in just the way that suits them), all the while insisting they are still inside the game-designer-blessed circle – indignant, even, at any resistance to what they are doing, and acting entitled as if they still stood within the blessed line. I think the crash space story shows some or a lot of that outside-the-line entitlement.
So it’s not the knowledge itself – it’s the tumble outside of any line (here, instead of it being a line a designer drew, it’s more a line drawn from a historic average of human behavior). But there’s no sense of tumbling at all for the subject – no sense of leaving entitled ground. Indeed, having had a sense of free-will freedom all their lives, the idea of a line ever having been there (even a dotted line) would just seem alien, rather than them falling into the alien.
Of course what bugs me about this is that, given an apparent history of misogyny, and if the line is defined by a historical look at human behavior (a valid measure, IMO), the practice of even some degree of feminism and some degree of equality is probably falling outside the line as well! So how much can one lay into line breaches when one is committed to some degree to a line breach oneself?
hey is there a BBT book in the works?
http://meaningoflife.tv/videos/32997
Interesting website for sure. Feel free to let them know I’ll play devil’s advocate, if they want!
I seem to be bumping into Hoffman everywhere nowadays. That man knows how to market. As soon as I can get a sense of how serious his computer simulation work on veridical versus practical cognition is, I intend to use it to make my own case (he’s still thinking heuristics through the lens of heuristics).
Yes… the Book. I actually have the intro completed.
don’t know those folks, but I have an in on the science-reporting show on bloggingheads, which is the main site.
I don’t have the math to know if Hoffman’s modeling is of any use, but it seemed like another take along the lines of BBT. I stop listening to him when he starts to posit what he thinks Reality might be like beyond the very limits he endeavors to highlight.
let me know when the book is ready for readers and I’ll see how it might fit (or not) with the research interests of the various folks I know.
It’s as if what we need is a test against what we don’t know rather than what we do; as if what is important is to find those cracks in our heuristic frameworks that fail the test. Just there, at that knot of failure, is the kernel of truth we need, which can be neither explained nor explained away. In our failures science begins. That means praxis before theory, at least as we bring our functional models against the empirical data. Whatever fails the model is the place to begin the next test: the site of failure is that in the empirical which is ontologically valid and cannot be absorbed into the model, but from which the model must adapt toward its next stage of testing.
This is a great way to frame it via intentional idioms, I think. I prefer to see the process as a high-dimensional (mechanical) one because it brings home the blindness of the apparatus (considered in high-dimensional sum, as cognizer cognizing cognized).
But it is the way I see science as working as well.
Yesh. The gears are engaged on this end. Shit is suddenly moving fast.
Please do one with just ‘Yatwer!’ on it 🙂
‘Luke, I am ur-mother!’
http://www.newyorker.com/magazine/2015/11/16/politics-and-the-new-machine
What a great crash space tale. Trippy, because so nigh.
yeah, will have to consider adding a crash space tales category to go along with my failing state watch
Sounds poifect.
all is grist for the gestalt mill
“the hope and the fantasy” that big data will start to steer research/activity
http://www.abc.net.au/radionational/programs/futuretense/we%E2%80%99re-all-data-now:-what-big-data-could-mean-for-law-&-policy/6988048
http://tinyletter.com/Intelligence_Autonomy
Just sounds like a genre. I.e., “In that sense the international side of data, data isn’t equal, you can’t just look at data as a neutral thing, you’ve got to think about what the politics was involved in putting that data together, and then I suppose in releasing it.”
as with all such tech ya gotta take into account the human ‘interface’; now, getting engineers and coders to grasp that is almost as hard as getting non-tech types to grasp what the tools can and can’t do for them. either way, no revolutionary practices in sight.
“The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. [. . .] In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.”
It’s funny to me how this is so wonderfully well-put and how completely offensive the statement is to the discipline! Philosophy is the sort of field of study that often draws the kind of person who regularly mistakes vagueness for wisdom and proceeds to pass off ambiguity as profundity. Even worse, if you try to get this person to clarify their meaning, they often tend to double down and try to pass off ad-hoc rationalizing as articulate, reasoned defense, ultimately resorting to different words with the same categorically underdetermined meaning, which also happens to make their “opinion” difficult to reason or argue with. After all, how can I disagree or agree when I don’t know what you are actually saying?
I know all this because I fall into this category of person. But this is sort of my problem with this whole BBT thing: the theory seems to implicitly suggest itself as a way to overcome ‘crash space’ and the informational neglect native to metacognition while explicitly stating that it is unlikely there is any manner in which this can be achieved conclusively. Even now I’m beginning to feel like I’m just spinning my wheels!! Still, maybe this is more of a product of me wishing BBT implies this than it is of BBT actually implying it, but it seems to me that being blind and not knowing it would be a completely different experience, with a much more difficult set of problems, than being blind and knowing that you are blind. Am I imagining this, or does BBT really suggest that, given an awareness (however dim) of the role information scarcity and neglect play in our cognitive processes, you can to some degree (over)compensate for this blindness?
Either way I love the line “The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination.”
Super funny!
Glad you liked. Spread the word!
“But this is sort of my problem with this whole BBT thing: the theory seems to implicitly suggest itself as a way to overcome ‘crash space’ and the informational neglect native to metacognition while explicitly stating that it is unlikely there is any manner in which this can be achieved conclusively.”
This is why 1) the empirical bases of the theory are so important; and 2) its mere plausibility is so destructive. The science is clearly trending its way: metacognition is pretty clearly turning out to be the fractionate mess of heuristic adaptations it has been predicting. Meanwhile, research into varieties of environmental cognition is beginning, at least, to hypothesize its posits, and to begin the work of testing them. This is more than enough to place the tradition on the shelf, I think. With BBT, time will tell, unlike anything relying on intentionalism/normativism.
The fact is, given what we know, we should expect to find ourselves in this dilemma, stranded with basic systems that can only resolve issues in low-dimensional information environments, and to become increasingly maladapted as the behind-the-scenes high-dimensional invariants are accessed and transformed by science.
i mean, it depends on what you mean. everyone implicitly knows that adding or coordinating between modalities can pad out and ameliorate certain effects of neglect. thus people who can hear are less surprised by people who come into their visual fields, because their audition is giving them information about what to expect. but we don’t correspondingly register that audition itself has its own neglect structure. and it just has to be the case that the brain has native limits on the kinds and extent of these modal patches to its neglect structure. there is only so much that the BBT can do here, because actual neglect is not conceptual. you can know the BBT and you are still going to be surprised by someone walking out of nowhere into your visual field if you are deaf.
[…] some ways Scott Bakker’s short post Intentional Philosophy as the Neuroscientific Explananda Problem succinctly shows us the central problem of our time: medial neglect. But what is medial neglect? […]