BBT Creep…
by rsbakker
“Given the inability of SDT-based models to account for blind insight, our data suggest that a more radical revision of metacognition models is required. One potential direction for revision would take into account the evidence, mentioned in the Introduction, that neural dynamics underlying perceptual decisions involve counterflowing bottom-up and top-down neural signals (Bowman et al., 2006; Jaskowski & Verleger, 2007; Salin & Bullier, 1995). A framework for interpreting these countercurrent dynamics is provided by predictive processing, which proposes that top-down projections convey predictions (expectations) about the causes of sensory signals, with bottom-up projections communicating mismatches (prediction errors) between expected and observed signals across hierarchical levels, with their mutual dynamics unfolding according to the principles of Bayesian inference (Clark, 2013). Future models of metacognition could leverage this framework to propose that both first-order and metacognitive discriminations emerge from the interaction of top-down expectations and bottom-up prediction errors, for example by allowing top-down signals to reshape the probability distributions of evidence on which decision thresholds are imposed (Barrett et al., 2013). We can at this stage only speculate as to whether such a model might provide the means to account for the blind-insight phenomenon and recognize that predictive coding is just one among a variety of potential frameworks that could be applied to that challenge (Timmermans et al., 2012).” Ryan B. Scott et al., “Blind Insight: Metacognitive Discrimination Despite Chance Task Performance,” p. 8
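For readers who want the mechanics, the countercurrent scheme the excerpt describes can be caricatured in a few lines of Python. This is a toy, single-level sketch of my own devising, not the authors' model: a top-down prediction is compared against the incoming signal, and the bottom-up mismatch (prediction error) flows back to revise the estimate.

```python
# Toy one-level predictive-processing loop (an illustrative sketch,
# not the model from the paper): a top-down prediction is compared
# with the incoming signal, and the bottom-up mismatch (prediction
# error) nudges the internal estimate until the error is snuffed.
def settle(observation, estimate=0.0, lr=0.2, steps=50):
    for _ in range(steps):
        error = observation - estimate   # bottom-up prediction error
        estimate += lr * error           # top-down estimate revised
    return estimate

print(round(settle(3.0), 2))  # → 3.0: the estimate converges on the input
```

Stacking such levels, each one predicting the activity of the level below, yields the hierarchical picture; Bayesian weighting of the error term is what the Clark (2013) framework adds on top.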
Just thinking in these terms renders traditional assumptions regarding the character and capacity of philosophical reflection deeply suspect. Is it really just a coincidence that all the old riddles regarding the human remain just as confounding? You need only consider the challenge the brain poses to itself to realize the brain simply cannot track its own activities the way it tracks activities in its environments. The traditionalists would have you believe that reflection reveals an alternate order of efficacy, if not being. So far, the apparent obviousness of the intuitions and the absence of any credible account of the work they seem to do has allowed them to make an abductive case. Reflection, they argue, discriminates autonomous/irreducible/transcendental functions and/or phenomena. Of course, they don’t so much agree on the actual discriminations they make as they agree that such discriminations can and must be made.
My bet is that the brain does a lot of causal (Bayesian) predictive processing troubleshooting its environments and relies on some kind of noncausal predictive processing to troubleshoot itself and other brains. You only need to look at the dimensions missing in the ‘mental’ or the ‘normative’ or the ‘phenomenological’ to realize they’re precisely the kinds of information we should expect an overmatched metacognition to neglect. Where the brain is able to articulate efficacies into mechanistic (lateral) relationships in certain, typically natural environments, it must posit unarticulated efficacies in other, typically social environments. My hypothesis is that the countless naturalistically inscrutable, ontologically exceptional, alternate orders of efficacy posited by the traditionalist amount to nothing more than this.
Either way, this research is killing traditional philosophy as we speak.
the title made me think it was name calling season! aw shucks
Not sure I see your point – yes, although our private experience is “ours” we observe it from “the outside” and try to make sense of it, just as we do for the “outer” world. If you want to speak in brain dynamics terms it seems very likely that we’re applying some sort of predictive Bayesian scheme to do neural computations. How does this answer the question of whether we can or can’t “know” something? That is, while abstractly we use similar algorithms (writ LARGE) to do a manifold of things, they vary greatly in their efficacy.
the brain isn’t intrinsically or immanently good at knowing more than some basic things about either the world or the mental world. it just so happens that we feel much more confident about our knowledge about the world than we do regarding our inner workings at this point in time. 400 years ago that was much less the case, if at all. and it isn’t like our predictive machinery did magic here – rather, say, physics is the result of countless years of work by a great many who sweated blood and tears to sift putative facts and patterns. and maybe as importantly – none of this would be possible without the conceptual scaffolding – namely the pertinent math, which i can’t see how you could claim is a result of predictive schemes, but rather of conceptual cum symbolic reasoning.
You could say, we’ll just wait for science to clear this up for us, but to what extent it can remains to be seen, and until then it’s fair game for philosophical reasoning. This is not to say that the common crappy arm chair philosophy done by people who can’t be arsed to even be familiar with the relevant science is helping anybody, but this doesn’t preclude the potential valuable contributions philosophy could be making to cognitive sciences.
So what’s the evidence that ‘conceptual scaffolding’ as you call it is what the tradition insists it is?
It’s shocking how difficult this question is to answer.
The fact is, the capacity to solve problems without knowing how we solved them is a hallmark of human cognition. I’m not saying philosophy hasn’t generated rare solutions here and there, only that it has no way of cognizing what those solutions consist in. We’ve been unsolved solvers all along, merely plugging our ears with speculation. You have to admit, it’s certainly what things look like!
If by “philosophy” you mean what’s been going on in the Anglo-Saxon/analytical sphere for the last hundred years or so then sure. Some of us though, never thought anything would come out of this given the premises: 1) physics is magical. it somehow transcends the fact that it’s a human endeavour (hence a product of mind and experience in the phenomenal sense) 2) cognition is something you should think of in equations in first-order predicate calculus.
If we remain neutral about the facts nothing here is terribly surprising: for a system to “know something” – that is, have reliable information about it that it can act upon – requires a lot of computational effort. therefore there’s an inherent limit as to what extent a system can represent (hold reliable information about) its own activity, meaning that
a) if it were to try to explicitly represent (monitor, update predictive variables for) every process it carries out, that would have to stop at some point because resources are limited – and of course it could not represent the representing processes themselves without falling into an infinite regress
b) the noise involved in the assessment of input (feedback and other) and in drawing inferences (or making decisions or whatever) relating to it, or to things pertaining to the “outer world”, would be pretty much the same.
but maybe even more important are c) and d) namely
c) it’s far from clear that our systems are actually TRYING to faithfully represent our cognition, in fact i suspect that the opposite is true – they are geared towards confabulation because it works so much better in the social context and so on
d) it’s unclear if there’s a “fact of the matter” when it comes to cognition, namely whether anything answers to our standard for neat, simple (in some formal sense) and tight explanations; it might be that for the most part we employ a host of quick and dirty heuristics running in parallel, stuff which will never lend itself to our expectations.
so i don’t see any mystery* here (unknowns for sure but that’s different), or “unsolved solvers”, just people having trouble being intellectually honest due to fear of what they might discover.
sorry for rambling – but as an example i was just at this workshop on mental causation. everybody pretty much started by expressing their commitment to mental states being causally efficacious, because you know, that’s clear as day …
there was more or less no discussion about how all the basic tools employed – the notions of “state”, “cause”, “disposition”, “belief” – are so blatantly unfaithful to what’s going down (or even plausible) that it’s no wonder this got us nowhere ….
*and to be fair, in philosophy of science they’ve been referring to “folk psychology” as something that is going to die out sooner or later since forever, basically, and we can’t expect anyone to think that his OWN work is part of the problem and not the solution, right 🙂
No it isn’t pretty much the same! There’s cross-cultural experiments showing that infants possess an inbuilt sense of persistence and spatiotemporal contiguity of objects (that most other animals just lack). This sense underwrites the effectivity of mechanical intervention more than ‘physics’ per se. It’s why we went through Newtonian mechanics before crossing the rubicon of quantum mechanics and why people are still stamping their feet about things like ‘boundaries’ and ‘persistence’ being universal metaphysical conditions of ‘identity’ and so on.
Just curious about your interpretation of portions of that excerpt.
“bottom-up projections communicating mismatches (prediction errors) between expected and observed signals across hierarchical levels”
What does this mean – as it’s written I can think of a number of possible readings?
“for example by allowing top-down signals to reshape the probability distributions of evidence on which decision thresholds are imposed”
Isn’t this a fairly specific and biased prediction through BBT-lens or otherwise?
Cheers.
The first is just a statement of how predictive processing leverages contexts, isn’t it? Error is farmed out and out until finally snuffed and canonized in conscious experience.
The second is meant to sketch an alternate, possible explanation of the dissociation between first-order accuracy and metacognitive discrimination, which undermines the bottom-up picture implied by signal detection theory. Somehow, reporting low confidence predicts lower-than-chance decision accuracy: it’s as if they know they’re making an error that they nevertheless cannot correct for. It would be like blurting ‘Green!’ every time you saw red, and then explaining, ‘Whenever I say green I mean red.’ Maybe that isn’t such a good analogy… Either way, it seems a promising phenomenon to explain via the top-down, bottom-up, side-to-side tangling characteristic of predictive processing.
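The dissociation is easy to caricature in a toy simulation (my own hypothetical generative story, with made-up numbers, not the paper's model): let the overt response run at chance while a weak internal signal, which never reaches the response process, leaks into the confidence report.

```python
import random

random.seed(0)

# Toy "blind insight": first-order responses at chance, yet confidence
# still discriminates accuracy, because a weak internal signal tracks
# correctness without being able to correct the response itself.
# (Hypothetical numbers throughout.)
def trial():
    correct = random.random() < 0.5                  # response is a coin flip
    signal = (1.0 if correct else 0.0) + random.gauss(0.0, 1.5)
    high_conf = signal > 0.5                         # confidence leaks the signal
    return correct, high_conf

trials = [trial() for _ in range(20000)]
overall = sum(c for c, _ in trials) / len(trials)
hi = [c for c, h in trials if h]
lo = [c for c, h in trials if not h]
acc_hi = sum(hi) / len(hi)
acc_lo = sum(lo) / len(lo)

print(round(overall, 2))   # ~0.50: chance task performance
print(round(acc_hi, 2))    # noticeably above 0.50
print(round(acc_lo, 2))    # noticeably below 0.50: low confidence predicts error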
>>for example by allowing top-down signals to reshape the probability distributions of evidence on which decision thresholds are imposed
This is exactly what you would expect from an evolved system. You WANT false positive matches, because it helps you avoid false negative errors. Tigers in the brush and all that jazz.
If that’s what the brain does with external information, one has to wonder what kind of data fudging it does on *internal* information about itself.
Still, as it kills traditional philosophy, it might be kindling the new spark of experimental philosophy – maybe as science creeps inwards toward the bulwark of the phenomenal and private subject, startling new paradigm shifts can be expected. They may be dire and depressing, but I hold hope that they’ll be *interesting*.
Ayuh, on both accounts. The fact that the blackboard has been wiped clean is about as exciting as you could imagine. Everything needs a fundamental rethink in post-intentional terms. And the promise of a new New Synthesis is heady as well. The pipe may be empty, but the vaporizer is full!
Deer intenshunalists, if self do esist, how me a braine lurne magix?
lolololololololol
10101010101010101
🙂
http://www.geektime.com/2014/12/01/how-intelligent-is-artificial-intelligence-we-ask-the-big-thinkers/
Adrian Collins, founder of Grimdark Magazine, will be telling readers how to access an excerpt from Bakker’s new story! Check his twitter account (https://twitter.com/AdrianGdMag) on Tuesday, December 9th at 10:30 PM US eastern time. If you sign up for Grimdark’s mailing list, you will receive the excerpt next week!
UPDATE! Register and get the excerpt delivered to you next week!
http://grimdark-magazine.myshopify.com/account/register
http://inthespaceofreasons.blogspot.com/2014/12/some-notes-on-david-papineaus-durham.html
dmf, do you by any chance have access to this article?
http://oxfordmedicine.com/view/10.1093/med/9780199238033.001.0001/med-9780199238033-chapter-006
afraid i’m outside the paywall these days but thanks for trying
Could you say what story of yours was going to be published here? I’m curious!
http://www.goodreads.com/book/show/22819388-unveiled?from_search=true
Thanks for the tip, BF. I was wondering what was happening with these guys. The story’s called “The Crack in the Wall,” and I sent it to them months and months ago. Time to look for another venue…
I think I’ve read this before, it’s the one about “there’s a place in France where the naked ladies dance…”
Hey, I don’t know if this is worthwhile or not, but have you thought about selling singles like this thing from Chuck Palahniuk?
You could probably sell them right from this site and keep all the cash! I’m sure someone somewhere could set that up for you. There’s some super-savvy folks on TSA.
I WOULD BUY
Maybe you could even do a month by month subscription and serialize stuff? Lol, idk, it worked for Dickens.
I’m having trouble just parsing the abstract?
“a reliable relationship between confidence and judgment accuracy (demonstrating metacognition) despite judgment accuracy being no better than chance.”
Why does it say a reliable relationship of confidence in regards to accuracy, yet at the same time judgement (judgement being confidence, really) accuracy being no better than chance?
“We identified participants who performed at chance on the discrimination task, utilizing a subset of their responses, and then assessed the accuracy and the confidence-accuracy relationship of their remaining responses”
I’m not sure this is statistically legit – it’s taking their past performance and averaging it (I presume, so as to determine if they are at chance – because how else would you figure that but by an average?), but then at an arbitrary point they stop averaging and… presumably start taking a second average to figure accuracy? That does not seem legit?
Science explainers, come! And explain without definitely assuming it’s all absolutely correct (anything can be explained perfectly well, no matter how mad, if you just assume it’s perfectly correct!)
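For what it’s worth, the procedure reads like an ordinary held-out split rather than double-dipping, and a toy simulation (my own, with made-up numbers, not the authors' exact analysis) suggests why it introduces no artifact: selecting “at chance” participants on one half of their trials doesn’t bias accuracy estimates computed on the independent other half.

```python
import random

random.seed(1)

# Each simulated participant genuinely guesses (p = 0.5) on 200 trials.
# We "identify" at-chance participants using the first 100 trials only,
# then measure accuracy on the held-out 100. Because the two halves are
# independent samples, the selection step cannot manufacture accuracy.
def participant(n=200, p=0.5):
    return [random.random() < p for _ in range(n)]

people = [participant() for _ in range(500)]
selected = [r for r in people
            if abs(sum(r[:100]) / 100 - 0.5) < 0.05]   # "at chance" on half 1
held_out = [sum(r[100:]) / 100 for r in selected]
mean_acc = sum(held_out) / len(held_out)

print(len(selected))       # most pure guessers pass the at-chance filter
print(round(mean_acc, 2))  # ~0.50 on the held-out half: no selection artifact
```

The arbitrariness worry would bite only if the same trials were used both to select participants and to assess them; the split avoids exactly that.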
Your BBT hypothesis is outstanding and although I don’t agree with it in its entirety, I often refer people to this website when explaining my views on the hard problem.
But why the new certainty? Am I picking up on some slight change here since I last read in early 2013? It seems dangerous to replace Pyrrhonic skepticism with outright rejection of philosophy, not because philosophy is so great, but because lacking a philosophical method that allows us to question specific philosophies leaves us open for philosophy to creep in through the back door. After all, these are still human brains we’re thinking with, even in 2014.
One great example of this would be the cultlike Bayesian singularity crowd (sorry, that’s just my personal impression). Philosophy creeps into their worldview not through their mathematics, but through the questionable assumptions that come prior to their priors.
As an aside, how can you take this viewpoint seriously when its leading proponents, Bostrom and Yudkowsky, are so dismissive toward actual evidence and science? Doesn’t their odd conclusion that “we” likely exist in a “simulation,” whatever that means, rest on assumptions that are far more abstract, intangible and hence unreliable than the simple physical observation that brains are very different from computers? I don’t see how Lee Smolin’s question here has been answered.
Returning to philosophy: another example of philosophy creep might be the thinkers you describe as nonintentional. Most of these patterns of thought seem formally similar to Laruelle’s method for creating a sort of backward or inverse heuristic that allows us to grasp, on some level, the unilaterality from our brains to our “minds.” But ironically, this understanding can create the illusion that we are immune to merely mental thinking, an illusion that has effects so isomorphic to the typical social behavior of “we are uniquely immune from (x)” found in religious groups that, even more ironically, it seems to be yet another symptom of what we were supposed to have understood– the illusory nature of our minds. Thinkers of all classes started making a lot more sense to me after I started looking at them in terms of their brains and not their ideas. It’s weird, I can think about this stuff reasonably well but I can’t settle on any statement or position on it, so don’t get me wrong I’m not claiming certainty on anything here either, just trying to describe what I see and understand if I’m not seeing something clearly.
Welcome Cracked Egg. By ‘traditional philosophy’ I mean philosophy that takes the deliverances of metacognitive reflection as its primary basis. I’ve thought it bunk for quite some time, not because of faith in the truth of BBT (which can only be one bet among many, at this point), but because of the way the possibility of BBT opens up the force of what I call the ‘Big Fat Pessimistic Induction,’ the observation that prescientific theories are always overthrown once science colonizes their domain. All you need is ONE plausible eliminativistic account of intentionality to block all the abductive exits that allow intentionalists to warrant their theoretical claims. Short the magic of reflective intuition (and so on), inference to the best explanation is all that intentionalists have, aside from rhetoric.
Regarding your aside, I agree with you regarding the digitalism in general, say, but I haven’t read enough Bostrom (on this topic) or Yudkowsky (at all) to comment one way or another. I’m keen to hear more. Simulation arguments strike me as hokum in general though – crypto-cosmology (which suffers its own ‘philosophy creep’!). I just don’t see what the predicate ‘is a simulation’ can add when it can be applied to every proposition. Some distinctions are just too damn big to make any difference. And the empty can, as they say, rattles the loudest.
I certainly appreciate your confusion regarding illusion. One thing that kind of straightens out the kinks is to consider just how we should expect to be confused by the recursive application of intentional heuristics: as soon as we theoretically reflect on some intentional notion we’ve drawn it into a cognitive regime that cannot possibly hope to solve it. To consider one thing that drives some people batshit about my position, BBT suggests that illusion is an illusion. Now this is simply incoherent baldly stated in this form: what I’m really saying is that illusion as traditionally construed is illusory. Illusion, stripped of philosophical claptrap, the way you used it when you were 10 years old, say, is a damn useful way to solve a number of problems. It picks out instances where we think something’s there that’s not. As such, it also works well in a variety of cognitive scientific contexts. Used outside adaptive problem-ecologies, however, ‘illusion’ becomes illusory without any incoherence whatsoever. Why? Because ecological misapplications of ‘illusion’ actually belong to the effective problem-ecology of ‘illusion.’ ‘Sophistry!’ the critic cries, to which I cry, ‘Neglect!’ Why presume illusion possesses universal application? I’d like to see that case (because no one I know has made it!). If it is heuristic (and what else could it be?), that is, if it does contribute to solutions in the absence of information, then it has to be ecological. If it’s ecological, then it makes perfect sense to say that illusion can be illusory.
I’m not saying the confusion goes away, only that it itself becomes understandable. You are a small-engine mechanic with small-engine tools staring at a supercomputer.
Thanks, and thanks for the clarification regarding traditional philosophy!
As for illusion, what you are saying does not sound like sophistry at all, especially this part:
“as soon as we theoretically reflect on some intentional notion we’ve drawn it into a cognitive regime that cannot possibly hope to solve it.”
This squares very well with my own understanding of the mental trap that most if not all theory falls into to some extent or another. My confusion is not so much about the concept of illusion itself although your explanation certainly helped to refine my understanding of what you are saying (thanks), and to familiarize me better with your terminology. I’m not formally trained in critical theory or philosophy, so excuse my occasional misstep in communication.
I am more confused regarding how exactly you expect (if indeed you do?) that nonintentional or postintentional thought will immunize itself from the undetected or unexpected emergence of intentionality. I would think intentional concepts would often emerge within this area of thought just because our brains are, as you say, so damn unreliable. It would seem that any such projects would have to acknowledge, as you implied, that the confusion never really goes away, that it may be possible to improve our understanding of these things but never to step out completely from our cognitive situation, at least not until our biology changes.
But maybe that is what you are already saying?
The kinds of metacognitive illusions pursued by the tradition only come to light as such against the background of cognitive science. More will likely come to light, and even BBT will be seen running afoul of neglect. The confusion never goes away: the hope is that eventually we’ll be able to diagnose and map it, so as to avoid grinding our gears in ways deceptive enough to keep us grinding.
Again, the important thing is that BBT shows it’s entirely possible to see ourselves in terms continuous with the way we see the universe. A priori arguments against that possibility suddenly find themselves lying upon the horns of the science.
I’m trying to parse that illusion illusion thing? Forgive a soap opera version, but is it like a character thinking they saw Michelle across a crowded square – but Michelle is dead! It can’t be, the character thinks – it must have been a trick of the light or a look-alike – an illusion! But Michelle is alive and it was her! That sort of misidentification of where an illusion is present?
“what I’m really saying is that illusion as traditionally construed is illusory.”
How has it been construed?
Cheers for posting this up, Scott. The upshot of the “Blind Insight” article, then, is that the brain has methods of determining the reliability of first-order judgements that do not utilize the same inputs the first-order judgements themselves analyze. This is because the first-order judgements are no better than chance, whereas the second-order judgements of their accuracy are better than chance.
This is different from the architecture exploited by the backpropagation hybrid net in the Cleeremans et al article (cited passingly in PL) because that was trained up by utilizing information about the accuracy of the first-order network. It’s worth noting an observation about the first hybrid net mentioned in Cleeremans et al., where the second-order net is merely trained to recognize states of a first-order net’s hidden units. Here there’s an early period where the SON is actually better at discriminating the states of the FON than the latter is at its digit-recognition task. The authors speculate that this is because, while the accuracy of the FON is below par, stabilities in its hidden-unit behaviour emerge at this point which the SON can learn about.
So this model seems to question the assumption that when we reflect on our mental states to discern their nature and content we are accessing the actual representational content of those lower order states. Rather we are imputing content on the basis of higher order information that does not repose in the states on which we reflect. Introspection is really a kind of higher order interpretation.
To play devil’s advocate for a moment, wouldn’t a lot of pragmatists be pretty easy with this? It’s neurocomputational evidence against the myth of the given.
Exactly as we should expect, I would argue. For someone who’s swung through a number of wildly different theoretical articulations of the implicit, convinced at each turn that this or that brand of reflection had more or less revealed the implicit for what it was, it seems clear that metacognition is anything but passive or receptive. Add to this the confabulatory tendencies of verbal reporting given neglect, and it becomes very difficult to understand how any of our auto-explicitations could be anything other than instrumental at best, and confabulatory at worst. Since norms, etc., are also products of this imbroglio, the pragmatist would have to lean pretty hard in the eliminativist’s direction to take any comfort in this, I would argue. My guess is that they would take the same route as the representationalists, insist that our actual metacognitive capacity trades in functions, not the contingencies that ‘realize’ those functions. But then of course all the same questions apply, in addition to those involving spooky entities and efficacies. The difference is that they can make their abductive case relying on folk psychology and phenomenology.
If this generalizes, if metacognition informs the systems informing generally, then philosophical reflection on the implicit should really be looked at as a kind of noisemaker, a way to produce novelties, like sifting through a rock pile looking for something usable as a tool. If you consider the vast heaps of inert verbiage the institution has produced, this analogy doesn’t seem to be all that far off! Why a noisemaker? It’s just the way Bayesian learning works: in cases of environmental perception, it allows for the history of prior encounters to condition the results of ongoing encounters in a manner that facilitates problem-solving. It provides a mechanical means to resolve the ambiguities of perceptual information: but think of all the evolutionary stage-setting required! The senses harvest information, difference making differences, which run the gamut of the onboard difference making differences laid down in previous harvests, all of it tuned to triggering and modulating effective behaviours. The whole thing requires sustained, structured environmental contact across a variety of scales.
Now ask yourself what kind of evolutionary pedigree philosophical reflection has. As an exaptational capacity, it would be nothing short of a miracle if our priors, our top-down difference making differences could derive anything actionable from the bottom-up difference making differences engaged. In almost all cases, it would generate interference, the same way ‘observer effects’ do, rather than facilitate environmental problem solving. Pseudo-cognitive noise. But where misfiring priors generate obvious illusions in vision, say, theoretical reflection has no such hard-earned capacity, and seems easily convinced that this or that set of illusions is what makes reality possible!
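The way the history of prior encounters conditions the results of ongoing encounters, as described above, is just Bayes’ rule in odds form; a generic textbook illustration (nothing brain-specific about these numbers):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# A single ambiguous observation (likelihood ratio near 1) barely
# moves a belief, but an accumulated history of such encounters --
# the prior doing its work -- lets the system commit anyway.
def update(prior, likelihood_ratio):
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

p = 0.5                      # no history: maximal ambiguity
p_once = update(p, 1.5)      # one mildly suggestive encounter
for _ in range(5):           # five such encounters compound
    p = update(p, 1.5)

print(round(p_once, 3))      # → 0.6: barely moved
print(round(p, 3))           # → 0.884: history resolves the ambiguity
```

The stage-setting point stands: the update only helps because the likelihood ratios are themselves tuned by sustained, structured environmental contact; feed the same machinery uninformative ratios and the prior just amplifies itself.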
This certainly would explain why theory is so treacherous in the absence of sustained environmental feedback!
Hey Scott! Is this like a TDTCB prequel?
http://www.abebooks.com/servlet/BookDetailsPL?bi=14000227472&searchurl=sortby%3D1%26an%3DBakker%2C+Scott+%28R.+Scott+Bakker%29
LOL! We call it fromage where I come from…
https://twitter.com/LoveseatBlog/status/544988262201585664
I guess that ties into the monty python sketch with the cheese shop that has no-cheese……see! Not missing any of the subtext here!!!1!
No-Gouda
Great answer. So the upshot is that (yes) empirical Bayesian learning can evolve efficient models of an organism’s customary environment because there’s environmentally constrained error reduction operating at all levels in the hierarchy.
But philosophical reflection on experience – including our experience of concept-use and meaning – is not so constrained. We presumably have circuits that allow us to predict what folk will say when, but their operation is not part of that environment. To the extent that we model them at all, it’s liable to serve the ends of social co-ordination or cognitive efficiency in special task domains. Norms of material inference, for example, are abstract patterns exhibited by efficiently coupled but richly contentful neural representations, not sentence-sentence transformations fortuitously hitched with language-entry and exit-rules. On this model, we should expect material inferential capacities to be deeply imbricated with our perceptual models of the world. Statements exhibit specific inferential roles not because of the proprieties of discourse, but because of the neural dynamics they entrain.
It seems almost a certainty that at some point, neuroscientific structures discharging quite different functions (inevitably geared to perception and behaviour) will be systematically related to apparently normative functions. I just don’t understand what any of them think will happen at this point. This is one of the places where I disagree with Dennett: causal explanation scales up to the level of intentional generalities quite well. Once our biomechanical understanding reaches a certain level, the normative is in for a helluva round of recontextualization. Will they still be stamping their feet and declaring autonomy then?
“Statements exhibit specific inferential roles not because of the proprieties of discourse, but because of the neural dynamics they entrain.”
Dear god, why is this even surprising? What else could it have been? But that said, that’s sort of a tautology, i mean if the neural dynamics perfectly implement first-order predicate logic (if that were possible) then we are at square one.
“neural dynamics” is about as informative as spirits, humours, angels or whatever without further qualification, as long as we don’t have enlightening ways to conceptualize said dynamics. one of the problems here is that (with all due respect to Bayesian inference and such) we don’t really have the math in place to do a good job here. one major issue would be “input”: while dynamical systems theory has gone a long way (i’ve been told 🙂 ), it was conceived to deal with initial conditions -> then unfold. in scenarios in which you have a concurrent stream of inputs alongside your dynamics (an “open” dynamical system) things become much more complicated, and i think that at this point it’s not quite clear how to systematically deal with such scenarios.
Enormous challenges, to be sure. One of the things we need to avoid though is the assumption that we have the foggiest as to what logic is (as opposed to how it makes us feel, doing it). The impulse will be to connect what we find with what we think we’re doing, under the assumption that we have any real second-order grasp of what we’re doing. The connection is far more likely to be one that explains away our traditional assumptions. It could be the case that the complexities and indeterminacies overwhelm us (as say Dupre or Schwitzgebel or Uttal might argue), and that we’re stranded with some kind of prosthetic understanding, perhaps not so different from researchers reliant on super-computers today. DST might be an integral component of that, maybe. We could end up doing an end run around the old Feynman quip about understanding and manipulation, where we simply use data generated by the brain to train up algorithms that then allow us to intervene in various ways. Either way, it’s death to ghosts!
“It could be the case that the complexities and indeterminacies overwhelm us (as say Dupre or Schwitzgebel or Uttal might argue), and that we’re stranded with some kind of prosthetic understanding, perhaps not so different from researchers reliant on super-computers today. DST might be an integral component of that, maybe. We could end up doing an end run around the old Feynman quip about understanding and manipulation, where we simply use data generated by the brain to train up algorithms that then allow us to intervene in various ways.” Sounds about right to me.
Dynamics yield the *that* of raw behavior. The interesting fact is not inputs, but how the *that* of raw behavior becomes the *this* of the scene of a context, perspective, stance, or concern that is characteristic of lived cognition. Inputs vary with problem domains, and many domains and problems transect the brain. An input, as far as understanding tumor development in brain cancers goes, is going to be completely disparate from an input in putative signal processing on sensory data. You may find some of von Foerster’s writings on this issue to be of interest. He actually goes into inputs and neural nets implementing logic, and does some detailed functional modelling using finite state machines and recursive function theory. http://www.alice.id.tue.nl/references/foerster-2003.pdf
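For a flavour of what “neural nets implementing logic” means in that tradition, here is a minimal McCulloch-Pitts-style threshold unit. This is a standard textbook construction, sketched for illustration; it is not taken from von Foerster’s text:

```python
def threshold_unit(weights, threshold):
    """McCulloch-Pitts style neuron: outputs 1 iff the weighted sum
    of its binary inputs reaches the threshold, else 0."""
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return int(total >= threshold)
    return fire

# Two inputs with unit weights: the threshold alone picks the gate.
AND = threshold_unit([1, 1], threshold=2)
OR = threshold_unit([1, 1], threshold=1)
NOT = threshold_unit([-1], threshold=0)
```

Since NOT and either of AND/OR form a functionally complete set, networks of such units can in principle implement any Boolean function, which is the classical sense in which neural nets “do logic”.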
Death to ghosts sounds great. But there’s overkill/underkill here, as always…
Brain data, or brain modelling in itself, is not going to help us without some preconceived notion, intuition, or whatever. This would mean that you would have to choose your poison, and try to conceptualize and then model what you think are the essential properties of phenomenal experience.
Does that include self, logical operations, intentionality (whatever the frack that even means)? No, as in my book, FWIW, none of this pertains to a slug and its glimmer of mind.
I would think that there would be some sort of basic spatio-temporal structure, surely not in the Kantian sense, but still. Not sure I would start there, though.
What I think has to be there from the get-go is something basic about how perceptual content writ LARGE is organized, with explicit care for the fact that a neural system that can realize something like this can also FAIL to do so at times (e.g. under anaesthesia).
The problem is that our discussion is always about this or that content where mind is concerned, and experimental paradigms tend to target things that are not abstract or general in any meaningful way, and are clearly premature given our current state of understanding…
You let the information drive you to more effective accommodations. So, for instance, you stop thinking ‘conceptualization’ in normative terms. Then maybe you stop thinking in terms of conceptualization at all, because you possess some more sophisticated understanding of the way hierarchically coordinated top-down feedback conditions bottom-up signals.
There’s no trap here. It’s simply a matter of ratcheting our way to a more empirically informed self-understanding, isn’t it?
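The “top-down expectations conditioning bottom-up signals” idea can be caricatured in a few lines: a single estimate nudged by two prediction errors, one against the incoming signal and one against a prior expectation. This is a deliberately toy, one-level sketch; the names, the equal error weighting, and the learning rate are all my own assumptions, nothing like a full predictive-processing model:

```python
def settle_estimate(observation, prior, lr=0.1, steps=100):
    """Iteratively revise an estimate mu using two prediction errors:
    bottom-up (signal minus prediction) and top-down (prior minus
    prediction). With equal weighting, mu settles midway between."""
    mu = 0.0
    for _ in range(steps):
        bottom_up_error = observation - mu  # mismatch with the signal
        top_down_error = prior - mu         # mismatch with expectation
        mu += lr * (bottom_up_error + top_down_error)
    return mu

# With observation 2.0 and prior 1.0, the estimate converges to 1.5:
# the expectation literally reshapes where the evidence lands.
mu = settle_estimate(observation=2.0, prior=1.0)
```

Weighting the two errors differently (by their precisions, in the Bayesian telling) would pull the estimate toward whichever side is more reliable, which is the usual gloss on how top-down signals “reshape” the evidence.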
Are neuroscientists more susceptible than (for instance) geologists or astronomers to sneaking philosophy? When a scientist constructs a theory to explain the red shift of galaxies the scientist has no doubt the galaxies are external to the scientist and has no doubt of their objective existence or of the objective existence of the red shift. When neuroscientists construct theories to explain ‘phenomenal experience’ it seems as though they are trying to explain something before agreeing about what it is they are trying to explain. I suspect TJ is right that attempts to construct theories of mind are way premature. I think premature theorizing creates a space for philosophizing to creep in the back door. At least for the time being, it might be well to settle for trying to explain the things we observe and leave the things we merely intuit or introspect until we can replace intuition and introspection with observation and experimentation.
“When a scientist constructs a theory to explain the red shift of galaxies the scientist has no doubt the galaxies are external to the scientist and has no doubt of their objective existence or of the objective existence of the red shift.”
Yeah, but is this observation of yours also subject to sneaking philosophies? The maneuvering of skepticism to raise questions about the other guy’s commitments, which draws attention away from one’s own?
Why not also be skeptical of one’s own commitments (if in conflict with the scientists’) and at least indulge the possibility that the scientists’ commitments might be the case? And even indulge it to further degrees, as with cognitive-science findings: that their further findings (and even further findings based on the prior findings) might be the case?
Though I guess that might be indulging the scientists’ commitments more than one indulges one’s own, given how many claims scientists have (a lot!).
http://motherboard.vice.com/read/the-dominant-life-form-in-the-cosmos-is-probably-superintelligent-robots
The hall eye in the cosmos picture is good art! I wonder if the red dot is Earwa?
A story comes to mind: that Earth at its current stage isn’t messed with by AIs because, to them, that’d be like us messing with a placenta. Maybe they messed with things to stop nuclear war occurring, only because the developing baby was going into complications.
Sorry. I meant ‘HAL’ eye.
That last statement in the first paragraph: “We can at this stage only speculate as to whether such a model might provide the means to account for the blind-insight phenomenon and recognize that predictive coding is just one among a variety of potential frameworks that could be applied to that challenge.”
The key is “we can … only speculate” … How is this different from a caveman speculating on the use of an obsidian rock to make cutting his meat and skin easier? Scientists try their best to describe the processes and interactions in the brain, but they are stuck with a language that cannot capture the nuances of these processes, either in mathematical (set-theoretic or category-theoretic) terms or in the human-linguistic chain of meaning that they’ve tried to transform into such bottom-up/top-down predictive meanings of scientific description.
My point is that there will always remain anomalies, disjunctive processes that will not be captured, reduced, brought under the gaze of scientific description. There are things that we may know are there, yet they are still known unknowns that we cannot bring into the net of math or natural language: that escape the net of meaning. And, as you say over and over: “Oh, but someday they will be able to describe it all.”
My point is that the universe, and even the brain, will always escape our gaze, our languages, our tools… there will always be something in excess of our meanings.
Absolute knowledge is a pipe-dream, sure. Effective knowledge is all that’s on the table, and that seems to be more than troubling.
http://syntheticzero.net/2014/01/07/assembling-ethics-in-an-ecology-of-ignorance-paul-rabinow/
I guess when it comes down to it: scientists will discover things, but they, like us, are still bound by linguistic tropes or mathematical notation, and will not be able to convey or describe just exactly what they are finding without some kind of framework and terminology that we can all agree on.
Obviously applied sciences have always been locked into certain modes of explanation and procedure: protocols and standards, etc. In some ways your notion of recontextualization takes in this need to revise the whole field of linguistic and mathematical notation and reference, which seems clunky at best. Even the descriptions you quote from the other scientists do not really explain a damn thing, and are circular arguments at best.
I’m not disagreeing with BBT, just observing that it is a narrow concern that you are fixated on at the moment in your life. Not sure why you harp on it so. Yes, I’ve understood what you’re doing. My problem is that your prose is bloated with jargon that needs a great deal of revision and clarification. Your terms at times truly obfuscate the meaning and context of your statements rather than marshaling a definitive statement to grapple with.
Each of your essays or posts revises your current approach from another angle, but always returns to the one theme of blindness in infinite variation. Why? Are you not satisfied with your own thought? Why repeat yourself over and over? Is this all that interests you? Or is this the only theme that goads you on?
Just curious why you keep beating your virtual head against the wall?
“My problem is that your prose is bloated with jargon that needs a great deal of revision and clarification. Your terms at times truly obfuscate the meaning and context of your statements rather than marshaling a definitive statement to grapple with.”
This is the kind of blanket statement that can be levelled at any novel theory. Short of specifics, it just seems like a way to express dissatisfaction. What is it that confounds you?
“Each of your essays or posts revises your current approach from another angle, but always returns to the one theme of blindness in infinite variation. Why? Are you not satisfied with your own thought? Why repeat yourself over and over? Is this all that interests you? Or is this the only theme that goads you on?”
Revision and clarification is a big part of the motive. Having an empirically responsible theory possessing the conceptual resources to naturalize meaning is kind of a big thing! If I’m right about neglect and heuristics, I’ve found the basis for a new ‘grand synthesis.’ Being an institutional waterbug means that Stigler’s Law almost certainly will apply, but hey, a kid can dream, can’t they?
But more generally, why the sudden conviction that my concepts are too broad and my interests are too narrow, Craig? I don’t get a lot of love in the theoretical circles you move in, but it is nice to be at least tolerated here and there! 😉
Scott, I admire you. I think what I’m saying at this stage is meant to help rather than to be fanboyish. If I’m critical, it is because your ideas are worth it. So please don’t take it personally. The point is that I went back to that site you posted your essay on recently, reread all the statements pro and con, reread your essay, and realized it is too literary, too bloated, and too full of non-standard jargon even for the scientific framework and community. That is my point. You’re an outsider trying to make inroads toward being accepted (or not?) by those you want to understand your ideas and work. But you make acceptance difficult when the work is so encrusted with obtuse terms and literary embellishments.
I know your style is combative and comes out of your skepticism and empiricism, yet it will be lost on people who are not literary and philosophical (i.e., most scientists, who have their in-group jargon but little use for outside literary or philosophical literature, etc.).
Reread the basic complaints about your theory. For most of those on that site it came down to the simple conclusion that your notion of BBT is narrow, limited, and not well-defined; overly literary; and not written within the prescribed terms of the scientific literature, etc.
If I learned anything from Foucault, it is that we have to master the terms of a ‘field’; even if we disagree with it, we combat it from within, not outside, its notational conveyance. Not sure if this advice is worth a damn, but you do exist in a vacuum outside the norm, and whether that is an advantage or a detriment is not something I can judge fairly. Tone and style are difficult masteries. The literary proclivities that work in your fantasy may not work well in scientific literature. You have a tough nut to crack, already being a fantastic writer (and a good one). More than once in those comments I saw these authors dismissing you just because you are a famous fantasy writer, as if to say, “How dare this upstart fantasist come onto our turf and tell us what is going on,” etc. Laughable, of course. Small-minded, for sure… but it is there, warts and all. To break down those barriers and have your ideas heard will need a subtle readjustment. How you do that is up to you. I won’t tell you that.
I didn’t write that piece for Scientia Salon, I wrote that piece and thought, hey, this is almost normal enough to be accepted by some place like Scientia Salon.
If I had any control over my output, this would be sound advice! But I never have, and every attempt I’ve made has sent me to the brink. So I just let it write itself now. Sometimes it’s friendly, sometimes it’s not. Sometimes it strives to fit in, but typically it recoils. I appreciate that it can be frustrating, even infuriating, but it is what it is. I’ve lost count of the number of times I’ve been told that I would be a millionaire if I took this ethic to my novels. Since I depend on them for a living, it has caused me numberless personal woes. It’s cost me a good number of friendships. The ability to play along in every way is pretty damn important.
It certainly isn’t courage, I can tell you that much. More like a happy deformity. I say happy because I think someone like me would have been gobbled up a long time ago. But somehow I persist on the back of my obsessions. In the greater scheme of things, that’s damn near miraculous. And I guess I have faith as well that the Great Recontextualization will count in my favour, that the crazier things get, the more sane I will sound.
“My problem is that your prose is bloated with jargon that needs a great deal of revision and clarification. Your terms at times truly obfuscate the meaning and context of your statements rather than marshaling a definitive statement to grapple with.”
This is the kind of blanket statement that can be levelled at any novel theory. Short of specifics, it just seems like a way to express dissatisfaction. What is it that confounds you?
I’d pitch it that it’s like reading programming code without comments added to it. It’s always a big no-no in programming not to put in comments, because even if you programmed it yourself you can come back to it after a month or two and have no idea what the hell you were doing! Let alone if you are reading someone else’s code! (Also, no freakin’ line breaks except for the occasional paragraph; reading code without line breaks would be/is a nightmare!)
Programmers, in their comments, describe in more standard English what they are doing, even though that standard English is useless for actually coding. It’s just there to let the programmer get a handle on each part of the code and contextualise the perceived logic of its structure.
Whether you can blend comments/pseudocode amongst hard code and still fit the academic genre, I dunno.
But in this regard I agree with Hickman, and frankly it’s not a dissatisfaction claim; it is a practical claim, as indicated by good programming practice (and not some fuzzy ‘tone’ issue). Comment your code! Well, that might help; it really does help in programming.
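To make the analogy concrete, here is the same tiny routine twice, uncommented and commented. This is an illustrative snippet of my own, not anything from the posts under discussion:

```python
# Uncommented: the reader has to reverse-engineer the intent.
def f(xs):
    s = sorted(set(xs))
    return s[-2] if len(s) > 1 else None

# Commented: identical logic, with the intent stated up front.
def second_largest(values):
    """Return the second-largest distinct value, or None if there
    are fewer than two distinct values."""
    distinct = sorted(set(values))  # deduplicate, order ascending
    if len(distinct) < 2:           # not enough distinct values
        return None
    return distinct[-2]             # second from the top
```

Both versions compute the same thing; only the second tells you why, which is the point of the complaint about uncommented prose.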
New jargon interlocked with a theoretical gestalt is pretty much necessary if we are to effect a discontinuous shunt into a new way of thinking. But the jargon is not free-floating, because it actually provides numerous genealogical linkages to other thought, and he shares his thoughts on contemporary social developments, which he uses to occasion the theoretical explicitation. By roving through the jargon you come to an understanding of the work each term is performing in the discursive economy. But I would argue he isn’t playing an incommensurable language game that is cut off from all the others. It’s difficult, but I find the way he occasions it through widely circulated theory in both the continental and cognitive-science literatures to be essential. And unlike much of contemporary theory, you can see an angle of concern shining through in Scott’s writing.

They lamented and imputed strange motives to Scott’s tenor of dread. It’s akin to how second-order systems theorists saw the ethical and the political: they don’t consist in propositions or blueprints for future societies standing over and above where we are and what we are doing, but rather in a certain kind of comportment to what one is doing as one is doing it, the weaving together of a logic with a dialogic, in von Foerster’s terms. Scott is obsessive and self-absorbed, but I still get the sense that he is trying to talk to someone, trying to convey the concern he feels and how it isn’t merely a parochial self-obsession. The literary angle actually works in Scott’s favor. I love cognitive science, but like many people who have no interest in theory or the technicalities of science beyond what it has in store for the practical exigencies of their living, fuck if I can pay attention to that dry, hollowed-out writing. I put down more books because they just don’t speak to me, and they don’t speak in a manner that conveys a concern for the content.
The contents are just free-floating in their generic discursive economies, adequated to the neutral tones of technical writing geared toward specialists speaking to other specialists. I find, more so than with many other writers in the area, that Scott weaves together invigorating conceptual rigor with a mode of authenticity that tries to speak in some way to how we live.
Was reading Ecclesiastes and this reminded me of ur BBT!
https://www.biblegateway.com/passage/?search=Ecclesiastes+3%3A18-22&version=KJV
Splendid. Thanks for this BF!
🙂
Much of the wisdom of the Old Testament seems confounded by the folly of the New, and the bad thing about being blessed with a muse is she won’t accept that sometimes her contributions are not helpful. On the other hand, Thomas Kuhn had a point in saying that new paradigms prevail because the old guard dies, not because they are convinced. I suspect something like that applies as much between Judaism and Christianity as between secular scientific theories. I also suspect that if the semantic apocalypse is as apocalyptic as promised staid, scientific language may not be adequate to convey it. Preparing humanity for the end of civilization as we know it seems like a job that calls for pretty heavy rhetorical weapons.
Another cool cover from China
happy about the chinese book. thank you for it all. i date cnaiur and let me tell you they are a son of a bitch.
when are we gonna see book six now scott? next christmas? are you gonna let people know that anasurimbor moengus meets up with the white luck warrior to get advice on how to destroy kellhus before a sword is swung?
we eagerly await your next installment; i was given a chain mail bound journal for christmas…. does that mean you are willing to change the end?
….and holy fucking hell when were you gonna tell us that? is it just the father who is no respecter of persyns or is it everyone? respect.
http://thetongueisstrong.wordpress.com/
i clicked on that but i don’t understand! 😦
well i just explained it so try again
long live bakker
[…] their brains. Bakker would probably claim that he addresses this concern empirically via various studies on cognition and metacognition, but how do we know that “metacognition” as constructed in these experiments maps onto […]