### Godelling in the Valley

#### by rsbakker

“*Either mathematics is too big for the human mind or the human mind is more than a machine*” – Kurt Gödel


Okay, so this is purely speculative, but it is interesting, and I think worth farming out to brains far better trained than mine.

So BBT suggests that the ‘a priori’ is best construed as a kind of cognitive illusion, a consequence of the metacognitive opacity of those processes underwriting those ‘thoughts’ we are most inclined to call ‘analytic’ and ‘a priori.’ The necessity, abstraction, and internal relationality that seem to characterize these thoughts can all be understood in terms of *information privation*, the consequence of our metacognitive blindness to what our brain is actually doing when we engage in things like mathematical cognition. The idea is that our intuitive sense of what it is we think we’re doing when we do math—our ‘insights’ or ‘inferences,’ our ‘gists’ or ‘thoughts’—is fragmentary and deceptive, a drastically blinkered glimpse of astronomically complex, natural processes.

The ‘a priori,’ on this view, characterizes the *inscrutability*, rather than the nature, of mathematical cognition. Even without empirical evidence of unconscious processing, mathematical reasoning has always been deeply mysterious, apparently the most certain form of cognition when *performed*, and yet perennially resistant to decisive second order reflection. We can *do* it well enough—well enough to radically transform the world when applied in concert with empirical observation—and yet none of us can agree on just *what* it is that’s being done.

On BBT, our various second-order theoretical interpretations of mathematics are chronically underdetermined for the same reason any theoretical interpretation in science is underdetermined: *the lack of information*. What dupes philosophers into transforming this obvious epistemic vice into a beguiling cognitive virtue is simply the fact that we also lack any information pertaining to the lack of this information. Since they have no inkling that their murky inklings involve ‘murkiness’ at all, they simply assume the *sufficiency* of those inklings.

BBT therefore predicts that the informational dividends of the neurocognitive revolution will revolutionize our understanding of mathematics. At some point we’ll conceive our mathematical intuitions as ‘low-dimensional shadows’ of far more complex processes that escape conscious cognition. Mathematics will come to be understood in terms of actual physical structures doing actual physical things to actual physical structures. And the historical practice of mathematics will be reconceptualized as a kind of *inter-cranial computer science*, as experiments in self-programming.

Now as strange as it might sound, you have to admit this makes an eerie kind of sense. Problems, after all, are posed and answers arise. No matter how fine we parse the steps, this is the way it seems to work: we ‘ponder,’ or input, problems, and solutions, outputs, arise via ‘insight,’ and successes are subsequently committed to ‘habit’ (so that the systematicities discovered seem to somehow exist ‘all at once’). This would certainly explain Hintikka’s ‘scandal of deduction,’ the fact that purported ‘analytic’ operations regularly provide us with genuinely novel information. And it decisively answers the question of what Wigner famously called the ‘unreasonable effectiveness’ of mathematical cognition: mathematics can so effectively solve nature—enable science—simply because mathematics *is nature*, a kind of cognitive Swiss Army Knife extraordinaire.

On this picture, *there is only implementation*, implementations we ‘generalize’ over via further implementations, and so on and so on. The ideality, or ‘software,’ is simply an artifact of our metacognitive constraints, the ‘ghost’ of what remains when multiple dimensions of information are stripped away. Not only does BBT predict that the ‘foundations of mathematics’ will be shown to be *computational*, it also predicts that, as the complexities pile up, mathematics will become *more and more the province of machines*, until we reach a point where *only* our machines (if the possessive even applies at this point) ‘understand’ what is being explored, and the imperial mathematician dwindles to the status of technician, someone charged with *translating* various machine discoveries for human consumption.

But I ain’t no mathematician, so I thought I would open it up to the crowd: Does this look like the beginning?

I am lightly trained in mathematics; I went as far as taking an undergraduate class on Gödel, actually. What I remember understanding was that mathematically there will always be something that is true in a closed system that can’t be proven using just that closed system. (At least that is what I remember; I may have it off a little. What I recall is that we don’t know what this true thing is, we just know that it exists; the proof really doesn’t say much.) So to the claim that we will understand mathematics better, or even like a machine, and that it will be so machine-like that only machines will ‘understand’ it, I would say that makes sense only so far as math programs are written using one’s preferred language. So the question is: do you believe that language can be understood mathematically, and by mathematically I mean as a closed system (though perhaps it won’t be, but doesn’t that make this a moot question)? And if so, won’t there always be something in this closed system that we can’t prove to be true, yet is? (I might end up feeling like a fool because of a faulty memory.)

I just had a chance to look up what I was saying, and besides my poor grammar, the proof says that within any consistent system, one where the axioms can be enumerated by a computer, there will be something that is true yet cannot be proven within the system. So there will always be something, we don’t know what, but something true that the system can never establish. My general question still mostly stands, I believe; at least my interpretation of Wikipedia aligns well enough…

Have you read any of Chaitin’s stuff, gears? He’s big on the limits of mathematization. The thing that blows me away is just how well these problems of recursion map onto the mechanical limits of computation.

Ok, but no, I haven’t read Chaitin; I have never heard of him. This area was always an interest that never got fulfilled. I would rather like to know more; does Chaitin have a blog? I don’t really know, anymore, what the problems of recursion are. You spoke magic to me, but I would like to find out more. I once had an interest in looking at the connections between AI and Gödel’s problem; I was going to do a second undergrad thesis on it, but life got in the way.
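For the curious, the recursion/computation connection mentioned above can be made concrete. Here is a minimal Python sketch (purely illustrative, not anyone’s actual argument in the thread) of the diagonal construction behind Turing’s halting theorem, the mechanical twin of Gödel’s result: given *any* candidate ‘halting decider,’ we can build a program it must misjudge.

```python
def diagonal(halts):
    """Given any candidate halting decider `halts` (a function that
    claims to predict whether calling f() terminates), construct a
    program d that the decider necessarily gets wrong."""
    def d():
        if halts(d):         # decider says d halts...
            while True:      # ...so d loops forever
                pass
        # decider says d loops forever, so d halts immediately
    return d

# Any concrete decider is defeated. Take one that always answers "loops":
d = diagonal(lambda f: False)
d()  # returns immediately: d halts, contradicting the decider
# A decider that always answers "halts" fails symmetrically: the
# resulting d would loop forever (so we don't call that one here).
```

Whatever rule `halts` implements, `diagonal` turns the decider’s own verdict against it, the same self-referential twist that powers Gödel’s unprovable sentence.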

Chaitin is the founder of algorithmic information theory, and has his own take on incompleteness/halting that informs his version of pancomputationalism. He’s an outlier insofar as he thinks that mathematics is ‘quasi-empirical.’
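Chaitin’s basic measure, the algorithmic information of a string as the length of the shortest program that generates it, can be gestured at with an ordinary compressor. A rough illustration only: `zlib` is a crude stand-in for true Kolmogorov complexity, which is uncomputable.

```python
import random
import zlib

# A highly patterned string: a short program ("repeat 'ab' 5000 times")
# generates it, so its algorithmic information is tiny.
patterned = b"ab" * 5000

# A pseudorandom string of the same length: no obviously shorter
# description than the string itself.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10000))

print(len(zlib.compress(patterned)))  # far smaller than 10000
print(len(zlib.compress(noisy)))      # close to 10000
```

The patterned string compresses to a few dozen bytes; the pseudorandom one barely compresses at all, which is the intuition behind calling random strings ‘algorithmically incompressible.’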

That sounds fascinating, to tell the truth. I will get one of his works sometime; sadly I have neither the time nor the cash to do that now. I like the idea that math is ‘quasi-empirical’. It probably is, not in the way some things can be quasi-empirical, but in the sense that it is a mind-matter sort of thing. Anyways, fascinating stuff.

I looked all this up on Wikipedia, and I think I see how recursion would map onto the mechanical limits of computation; but what blows you away about it? I just don’t see what you are getting at…

Now I get it, basically because these problems of computation are recursive they are most likely predictable, or related. Interesting….

I don’t understand the link – they don’t test their theorems with computers even when they have computers just sitting all around them? I’m not sure I’m understanding that – though math which seemed to contain what is in a programming language a loop, like a geometric function, always freaked me out: they seemed to be able to render a loop in a singular, static format – something done many times, yet as if only done once. I never quite understood how they did that – but hey, maybe they didn’t, and they didn’t realise what they were writing was triggering a loop (in them)? Heh.

I don’t really believe in computers investigating anything, unless they are AI that have agendas. And in a way, like ourselves, their agendas largely determine the extent of their exploration. ‘Understanding’ is a reflection of agenda fulfillment. Ain’t no understanding anything, sans an agenda, as far as I know. That sounds like a reference to ‘the true knowledge’ or some kind of god knowledge. Of course maybe, even by chance, a new cognitive model comes out (like the old one basically came out by chance). But the odds of it occurring are few!

You might get that the things being explored become more and more complex, so that the machine dumbly maps it and the human works from a lower-res, likely visualised version of that map. People trying to wrangle advantage from powers they don’t understand, more and more economy tottering on top of such ‘understanding’, and of course the final grand resource to exploit (so that a handful of individuals can become like warlords of an old era) are other people, en masse. Possibly such theorems would be used to manage people: people are told they ‘don’t understand’ when they argue with being told not to sow their fields (heh!), which is true, because even the math technicians don’t understand it, so how could regular folk understand the good word?

Well, as far as I know, the problem with proof checkers is that they can only check formal steps (and even if Coq has a much easier formalisation of mathematics than set theory does, that is not likely to be any different there), and formal steps are essentially small steps.

But proofs in higher mathematics are, contrary to popular belief, not very formal. They are full of huge intuitive leaps, based on experience and understanding i.e. certain images in the mind of the mathematician and the knowledge that this particular gap in the proof could be filled with arbitrary formality (though it might take a (long) while to do it).

So, Coq might work like a grammar checker (wouldn’t that be nice to have …), but no grammar checker will ever turn you into an author.
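The point about the size of formal steps can be seen in any proof assistant. A minimal sketch in Lean rather than Coq (the contrast is the same in both): a concrete computation closes by pure mechanical reduction, while even the most ‘obvious’ general fact requires an explicit appeal to a lemma, itself proved by an induction no working mathematician would bother to write out.

```lean
-- A concrete fact: closed by mechanical evaluation alone.
example : 2 + 2 = 4 := rfl

-- A general fact any mathematician treats as immediate, but which
-- formally requires citing a proved lemma (itself an induction):
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The ‘huge intuitive leaps’ of real proofs are exactly the gaps between statements like the second one, gaps a mathematician trusts could be filled with arbitrary formality but almost never fills.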

Anyway, as a set theoretician recently turned computer scientist I basically have to take a look at it … maybe this weekend.

The question is one of what’s going on inside the black box of these ‘intuitive leaps’ you mention. If it’s not mechanical, then it’s… something else? For my money it’s far more prudent to assume it is mechanical and that we lack the metacognitive access to see as much, rather than something supernatural.

There are already several instances of computers writing articles that I’ve read about, at least, so things might change more quickly than anyone might think. Definitely keep us up to date with anything you find, Phille!

Of course it is mechanical, it just isn’t formal. In other words: A mathematical proof is rarely an unbroken string of logical conclusions between axioms and result. To accept a proof as correct mathematicians just have to be convinced that this unbroken string of logical conclusion does indeed exist, even if it might take millions of lines and tens of years to write it down.

To make the same kind of jumps with the same certainty as a good mathematician, a machine would need to have the same kind of computing power and memory that lies behind the human intuition.

It might take a while till this kind of computing power exists, and it will be a while longer till it exists on the PC of every mathematician. 😉

The power problem is definitely a big hurdle to cross… an insuperable one I hope! But it’s a slender hope. They’ll be powering supercomputers with refrigerator lightbulbs soon enough.

Lacking the syntax means lacking the blue-print. This brings me back to Chaitin again, and his notion that some mathematical facts just exist (or something along those lines), without any syntactic umbilical tethering them to the whole. Do you know any of his stuff Phille? I’ve always found it fascinating and incomprehensible.

BTW, the Second Apocalypse art thread is getting extra bad ass at the moment!

Thanks for this, Callan. Way, way fucking cool!

If all human intellectual activity, including creativity (mathematical, musical, literary etc.) is mechanical in the way Scott says then it seems reasonable to assume that like any other mechanical computation, creativity can be run on various types of hardware. The fact that human brains can be creative suggests that human-made brains can be creative, since both are machines. If a human brain has 100 billion neurons and each neuron has 1000 connections to other neurons, and each connection can have 1000 possible states (instead of the 1 or 0 of a simple logic gate) it should be possible to build a computer with 100 quadrillion logic gates that exhibits human-brain-level creativity. Let’s get to work.
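The back-of-envelope above can be run explicitly. A sketch of the arithmetic only; the neuron and connection counts are the commenter’s round figures, not established neuroscience:

```python
neurons = 100 * 10**9          # 100 billion neurons
connections_per_neuron = 1000  # ~1000 synapses each
states_per_connection = 1000   # 1000 states vs. a logic gate's 2

connections = neurons * connections_per_neuron  # 10**14: "100 trillion"
gates = connections * states_per_connection     # one gate per state

assert connections == 10**14
assert gates == 10**17  # 100 quadrillion, as in the comment
```

Note that counting one binary gate per possible state is generous: encoding 1,000 states needs only about 10 bits, so the 100-quadrillion figure is a loose upper bound rather than a requirement.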

For me this is one of the big questions raised by the singularity: the rates at which various possibility spaces can be explored. Everything we presently assume is circumscribed by our cognitive constraints. 100 billion artificial neurons with 100 trillion interconnections processing information at the speed of light as opposed to chemistry… If you look at intelligence differentially, you could say the AI programme is the project of transforming ourselves into the imbecilic children we always were.

And after we build that first 100 quadrillion gate computer we can build another one to give us metacognitive access to the first one.

Which would simply add to the load of untracked processing: there’s a sense in which BBT is an empirically necessary consequence of information processing, the difference being that we could design machines that, although they suffered medial neglect regarding the details of their operation, could nevertheless model those operations without distortion – with a kind of ‘enlightened unconscious.’

“could nevertheless model those operations without distortion”

You’d think. But how could you know?

In the same way human brains right now can already model the consequences of medial neglect in other human brains as environmental processes. We might suck at recognizing the distortions of our own metacognitive observations, but why would all possible brains be as bad as us at accounting for them?

Why wouldn’t you take their word for it, Frank?

Oh, new and fabulously sparkly unexpected feedback loop phantomia? I’d pay that. But the very end of uncertainty? Right there in an utterly complex machine? Even Hubble wears contacts.

It depends on whether the thing has to go by your definition of a brain. If so, like a child, it carries part of your semantics into it. Conforming to your idea of a brain means conforming to the ideas of your quite human brain. Constraint. Sorry to pitch it with melodrama, it’s the only way to be succinct.

And ‘we can model the consequences’ – sans any curtailing caveats, that’s omniscient talk. And with curtailing caveats added – well, such existing is part of the point.

I’m afraid your melodrama has made you quite incomprehensible to me. What are you even trying to say at this point?

Maybe I don’t want to be too attentive

There does seem to be an infinite regression problem.

You can say that aga…

Yes, I am a bad person…

Oh, from the SA forum art thread – when you see it (the hidden part), you will freak!

http://princeofnothing.wikia.com/wiki/File:Wracu_redux..png

And when you find out it was unintentionally put there, you will extra freak!!

“We worship…the space between dragons…”

Okay, mangled the quote, but check it out, cause it was worth it!

Crud, I broke the link in the above post: http://princeofnothing.wikia.com/wiki/File:Wracu_redux..png

Odd. It didn’t work for me either, Callan.

Yeah, for people who want to see it: you have to old-fashioned highlight the whole thing, copy it, and paste it into a new browser. It’s the two recurring full stops before the png at the end; they are freaking out the auto-link feature of WordPress. See, even when WordPress sees it, it freaks out! 😉

Possibly the file ought to be renamed with just a single full stop before the png – that might make it a lot easier to link to.

Callan, if you could, email me so I can send you a copy of it just in case. I’m laboring under some tech issues.

quintvoncanon@gmail.com

wracu

There, that’s the ticket!

http://www.newscientist.com/article/mg22029392.600-back-from-the-dead-reversing-walking-corpse-syndrome.html

I am deeply curious about what brings about the death conclusion (I mean, when your leg goes to sleep, that’s pretty damn numb, yet it doesn’t feel dead or anything). I wonder if it’s tied to some sort of adaptation to a changing morphology (changing over hundreds of thousands of years): part of the brain adapts to the current morphology in each individual, and this is screwing up that adaptation? I’m deeply curious, but the blunt strength of it also bothers me. Guess it’ll be the new party drug at some point. Maybe leading to a pseudo zombie apocalypse (maybe when a cheap version gets out that proves irreversible? Now there’s story fodder!)?
