Flies, Frogs, and Fishhooks
by rsbakker
So, me and my buddies occasionally went frog hunting when we were kids. We’d knot a string to a fishhook, swing the line over the pond’s edge, and bam! the frogs would strike at it. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size. Then they were dumped in a bucket.
We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.
Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook sensitive from the more fishhook sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.
Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.
Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.
The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since beleaguered teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).
Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.
Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application to be true, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behavior and the communication of animal behavior, on the other hand, possessed a tremendous impact—they were the flies.
Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.
The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.
Which is why kids can use them.
Small wonder then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of confusing flies for fishhooks! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviors via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.
‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source-neglecting yet source-compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.
The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of should facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.
Excellent!
Lyndon Baines Johnson was reputed to have a favorite question.
“And therefore, what?”
So how does the plastic set of intercommunicating organs we collectively call the “brain” develop models of the world that leverage deep regularities? Does Jeff Bezos know something most of us don’t?
Or is it best said by paraphrasing Lefty Gonzales, “I’d rather be lucky than bright.”
Good stuff here in a disturbing sort of way.
maybe you can get a review copy:
http://imperfectcognitions.blogspot.com/2018/01/beyond-concepts.html
functional cognition remains entirely compatible with causality.
Except that the input/output process is so fixed that, for actually looking at the process itself (should that be survival enabling/to the degree that it is survival enabling), only a fault is strong enough to break that cycle. Self reflection requires a fault in the input/output process in order to start – much like a pearl requires an irritant grain of sand. Self reflection has to begin with failure – and then naturally, its advancing processes are founded on this failure, growing from it (as the failure largely works in survival terms).
Or that’s where I go with the piece – it’s hard to parse like uncommented programming code. Not saying that as something at fault about the writing – uncommented code can work just fine. But parsing it can be very difficult.
Given enough salient (mis)cues, the frogs will perceive flies rather than fishhooks just as philosophers perceive explananda rather than crash space?
Probably like a synecdoche, confusing part information/part of a situation for the whole information/whole of a situation. Lies with a small amount of truth to them are the best, after all!
it’s raining frogs…
http://philosophyofbrains.com/2018/01/24/evolving-enactivism-ur-intentionality-whats-it-all-about.aspx
“Finally, it may be thought that Ur-intentionality is thin in another sense – namely, that if Ur-intentionality is contentless then its alleged intentional properties reduce to nothing more than the properties of a stimulus-response mechanism. But this disastrous consequence would only follow if Intellectualism or Mechanism are the only live options.”
What’s disastrous about stimulus-response?
And oddly enough, this thing about frogs and fish hooks reminds me of the cover of the Nirvana record Nevermind and about the guy in War of the Worlds who couldn’t understand why a wad of cash couldn’t buy him a ride out of town. I know this site isn’t big on memetics but the frog in the bucket and the guy staring at his money in disbelief as the last bus pulls away strike me as being in the same cognitive boat.
I have pretty much nothing to add, since my stance on teleosemantics (and anything teleological in biology) has pretty much always been “that’s fucking garbage”.
However, just as the child finds it *useful* to leverage the intentional stance to catch frogs, biologists can find it useful to assign ontological labels to genes.
Why is that protein so evolutionarily conserved? Oh, it’s a DNA repair protein (that’s its “function”) and we tag it as such in our gene ontology (GO) databases. But it is always wise to remember that these are cheap heuristics and not absolute truths. Maybe in most contexts it’s a DNA repair protein, but sometimes it does something else, or maybe it’s conserved for entirely obfuscated reasons (e.g. the Rosa26 locus: tremendously evolutionarily conserved, yet often used as a site of synthetic gene insertion in mice, since mice lacking the transcripts from the site are completely viable and manifest no identifiable defects*. The locus is conserved enough that an expressed homologue has been identified in humans, but no function has ever been convincingly ascribed to it, to the best of my knowledge)
*Casola S. (2010) Mouse Models for miRNA Expression: The ROSA26 Locus. In: Monticelli S. (eds) MicroRNAs and the Immune System. Methods in Molecular Biology (Methods and Protocols), vol 667. Humana Press, Totowa, NJ
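The commenter’s point can be caricatured in a few lines of code: a GO-style annotation table is a shallow-information shortcut, and loci like Rosa26 expose where the shortcut runs out. This is a toy sketch, not a real gene-ontology query; the gene names and labels are illustrative stand-ins.

```python
# Toy GO-style annotation table: a cheap heuristic mapping genes to
# "proper function" labels, standing in for a real GO database.
GO_ANNOTATIONS = {
    "BRCA1": "DNA repair",
    "TP53": "tumor suppression",
    # Rosa26: strongly conserved, yet no convincingly ascribed function,
    # so it gets no entry at all.
}

def annotated_function(gene: str) -> str:
    """Return the heuristic function label for a gene, if any.

    A missing entry does not mean the locus does nothing (Rosa26 is
    highly conserved); it means the heuristic has run out of answers.
    """
    return GO_ANNOTATIONS.get(gene, "no ascribed function")
```

The point of the fallback value is exactly the commenter’s caveat: the label communicates an actionable upshot cheaply, while remaining silent about mechanism and failing gracefully where no label has been earned.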
Why we need to create AI that thinks in ways that we can’t even imagine
Thanks, dmf. I’ll listen to this later while I’m doing chores. I’ve really enjoyed a number of the Intelligence Squared Debates.
Actual link with comments and audience votes.
The following is just fodder, dmf. I had the time to interject my listening and jot down some bullet points. Bonus the kids got to school on time and laundry and cleaning are mostly done ;).
– In terms of what’s discussed on TPB, I imagine Rothblatt’s opening remarks are woefully optimistic about the introduction of AI into contemporary human cognitive ecology.
– Keen and Lanier would probably appreciate Bakker’s “pumping the brakes” philosophy.
– In Keen’s opening, he talks about “AI helping us understanding the ecosystem” and I can’t help but think he’s making a very on-the-pulse remark as per TPB fare – though, it seems obvious he’s talking about the biosphere.
– Invokes an AI race, akin to an arms race.
– I would really enjoy a TPB post on the opening arguments of the “Against” pair.
– As per Donvan’s moderation, I think, in TPB context, that this debate is framed poorly.
– “We will ‘love’ the AI in our lives” is not dissimilar in TPB speak to “the AI that more or less efficiently/functionally hijack/dominate our sociocognitive ecologies, in localized settings or across ecotomes, are the ones to which we will thoughtlessly yield our attention,” not that that is contextual to this debate.
– Look no further than the advent of social media platforms which admit to using algorithms to predict the ideal interval of notifications or the emotional content of our feeds or the lights and sounds while playing “CwazyKupcakes.”
– On Rothblatt’s “mass democratization” of the printing press, instantiations of its dissemination happened often in protest to the dictates of the contemporary status quo, both in Scotland and in India, for instance.
– On Keen, I do think we’re better off for the printing press. All change engenders disorder, sometimes violent.
– Lanier apologizes like a Canadian. Lol.
– Wow, I don’t like the comment by Hughes that all that keeps humans from “putting guns in their mouths … [is jobs].”
– Not sure if Rothblatt realizes but she politicized AI very keenly without actually addressing those implications.
– Lol. Even intelligent “adults” need moderators.
– This debate is a mess… their prepared remarks were much better.
– As Lanier said, “the [voting] audience is screwed.”
– Lanier’s separation of “church and state” metaphor is interesting on its own, outside of the context of this debate.
– On Keen’s response to the second question, the algorithms that corporations trust to interpret “the market” already supersede collective human decision making in a number of contexts.
– So Lanier’s answer is to ignore the contextual crux…
– I do think Rothblatt is correct in assuming we’ll inevitably strive to create some (“Demonic,” (TSA)) simulacrum of companionship.
– Badly paraphrasing Rothblatt: “but more importantly, who are the AI’s masters? Us.” Wishful thinking, as per TPB ken, probably.
– On Hughes’ response to the third question, the hubris suggesting AI somehow *won’t* also digest our politicking behavior beyond our own understanding is… misguided.
– On Hughes’ response to the fourth question, unintentionally treading “crash space.”
– On Hughes’ response to Keen, I’ll repeat, “this debate is a mess…”
– Humans also seemingly have no current capacity for resilience accounting for “invasive AI (TPB speak)” in the human sociocognitive ecology.
– On Lanier: “We could build a self-printing/replicating Skynet in a week/don’t do it;” but humans will still somehow be the dominant agency…
– Engineering “more humane AI” will save us…
– On Lanier’s closing arguments: attributing agency to AI before a declaration of consciousness does seem dangerous, as humans then compromise regarding a number of cognitive heuristics/biases.
– Bakker would have made an interesting third party candidate on this panel…
– Keen going for a win within the parameters of the debate.
– As per Donvan, if that was esoteric, I’m not sure how to qualify TPB ;).
Cheers, dmf.
Lanier’s point about the actual state of engineering vs the hype is the key, people keep conflating the two, all I ask is if someone is going to make a point along these lines they point to actual systems and impacts but maybe that’s asking too much of folks as given as we are to waxing speculative…
What about the question, what will the AI think about its own creation?
Kind of a blank space there? Possibly so much so that’s why it’s not on the radar?
‘We’re just making plans for Nigel/AI’
how would we define these terms in way that are practicable for engineers and avoid waxing science-fictional?
That ought to be a provocation to engineers (or could be spelled out as one) – in what way do they understand AI operating? If it’s by ‘thinking’, what is so sci fi about my question? I’ve used the engineers’ own terms in my question and I presume a term used on both sides in the debate.
We’re not built to see our absences of seeing. An engineer who can’t see the problem of his own term ‘think’ and how the question ‘What will the AI think of its own creation?’ interact is, predictably, not seeing where he can’t see.
So I guess you’d have to first work an ‘intuition pump’ to the engineer to start considering a native inability to see/sense where our sight/senses run out. Then juxtapose that onto the term ‘think’ and the question, showing how their own term is the one disappearing into the dark there, not the questioner.
but AI isn’t any one thing or another that does any thing, the problem is to try and avoid being bewitched by grammar and try to come to terms with thinking of functions of specific assemblages/systems. This calls for a sort of pragmatism (real/existing differences that make a difference) and a willingness to test and see rather than speculate, not easy for most I understand.
DMF, would you say you do not think? If so, then I’d get where you’re coming from here. Otherwise you’re coming from a bewitching term/headwear yourself and seemingly chastising me for putting on the same strange hat as you. Tu quoque (which I can never remember the spelling of). I’m fine with taking the hat off, but not if my interlocutor keeps theirs on. Do you ‘think’? Or do you feel you’re distinct from AI and AI just doesn’t deserve this particular hat you wear?
As I said just before, there’s nothing wrong with my question – the problem is with people choosing to keep the idea that they ‘think’ when engaging the question – this drives the question ‘What will an AI think about its creation?’ into a darkness for them. Makes me look like the unpragmatic one. I already pitched that I would make an engineer question whether they actually ‘think’, which should have raised the question with you as well already. If you had started by saying ‘Well, I don’t really ‘think’’ then we’d be speaking on the same pragmatic terms – as is, my estimate is you don’t understand what I’ve said because you’re coming from the idea you ‘think’, but a question of an AI thinking makes a hash of either A: the question or B: the idea that you or I ‘think’.
At a guess the paradigm that you ‘think’ will win out here, making this post seem a hash as well. And yet abandoning the idea that you (or I, or the engineer, or the debaters) think would be the pragmatic thing to do, if you want to urge pragmatism.
Thinking (in humans) isn’t one thing and we don’t have good models yet of the many functions we loosely refer to when we use the term, that’s about something that we can reverse engineer as brains/neuroanatomy exist and the sorts of systems you are suggesting are only found in scifi books so how could we offer reasonable specs for them?
The sort of systems I’m suggesting exist right now in reality – us. The debates avoid going subject to object, so they always see AI as some kind of tool, rather than something that will have a behavioral diversity that gets exponentially more diverse the stronger the AI. I’d bring up the Tesla car that killed its ‘driver’, but that’ll seem a mechanistic example – precisely because debate won’t allow subject to object (there and then back again) and read the AI in that case as having an unexpected behavioral diversity in how it responded to the environment. Instead debate will try and keep humans as ‘subjects’ and the car just an object, a kind of ‘tool that went wrong’. We want AI, yet we still want it to be an object like a tool – something that’s just an extension of its user’s intent. Yet there that user is, under a truck. Dead. That wasn’t his intent. But we’ll tell ourselves that’s just like if his brake cables had snapped, right? Totes the same.
While people won’t go subject to object, won’t treat themselves as an object, it is a fair question to ask ‘What will an AI think of its own creation?’ and of the wildly diverse behaviorism that could result. It’s fair to ask it because it stays within the realm of subjects – a realm the debate insists on staying within…in regards to humans. If the debaters or the engineer or yourself are going to treat themselves as subjects, then the question is valid.
Otherwise it’s just trying to treat us as subjects, but for some reason instead of saying we want to make fancy tools, we say we want to make AI or talk about AI. We want to have it be both an Intelligence, yet an object and not a subject?
To the man with a hammer/soul, everything looks like a nail/object. It’s uncanny that the paradigm of us treating ourselves as subjects makes us think we can have our cake and eat it too with AI – as if we can have an Intelligence, but it be an object while we remain subjects. The abject conceit of sheer insistence we are the subjects here – stamping our feet over it like a dude in a golden room! So ‘that’ must be an object – yet it’s an Intelligence as well? It has to be an object because we are so very, very much the subjects and it is not us? Your example engineer has a problem they are unaware of, as the first step of dealing with the other problem. Worse than if he were colour blind but did not know it and had some green and red wires to cut.
Click to access Wiese-Erkenntnis-Active-Inference_penultimate.pdf
[…] This is what ‘intentional cognition’ amounts to: the collection of ancestral devices, ‘hacks,’ we use to solve, not only one another, but all supercomplicated systems. Since these hacks are themselves supercomplicated, our ancestors had to rely on them to solve for them. Problems involving intentional cognition, in other words, cue intentional problem-solving systems, not because (cue drumroll) intentional cognition inexplicably outruns the very possibility of reverse-engineering, but because our ancestors had no other means. […]
[…] The obvious answer is that biology, and cognitive biology especially, is so fiendishly complicated. The complexity of biology all but assures that cognition will neglect biology and fasten on correlations between ‘surface irritations’ and biological behaviours. Why, for instance, should a frog cognize fly biology when it need only strike at black dots? […]
[…] also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex […]
[…] be sourced in nature and what cannot be sourced, between causes and purposes, and somehow, someway, they conspire to render living systems intelligible. The evidence of this basic fractionation lies plain in experience, but the nature of its origin […]