From: Colin Hales (colin@versalog.com.au)
Date: Mon Sep 02 2002 - 18:58:30 MDT
Hal Finney wrote:
>
> What exactly is the relationship between a program, whether AI or
> something else, and a mathematical formal system? Is there a one to
> one correspondence between formal systems and computer programs?
This is an extremely good question. I have spent many hours mired in
confusion between the Gödelian theorem-set view of things and what the
mind is really doing. You know how I view it now? I have basically
categorised it as not playing any serious role in the AI process, other
than as an explanatory/orienting tool.
If you look in Gödel, Escher, Bach, Fig. 18, page 71, you'll see a tree
of Truths and a tree of Falsities standing in a sea of well-formed
formulae, which in turn sits in a sea of symbolic noise. It's a great
picture, but I have concluded that it does not represent the mind. Even
if it did, I would like you to think of both trees as continually
changing shape: small buds growing, leaves forming and then crinkling
and fading, branches bifurcating, growing and then shrinking (both
trees, not just the one). The trees wither and grow with knowledge and
ignorance, literally.
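To make the picture concrete - and to partly answer Hal's question
about the correspondence between formal systems and programs - here is
a rough Python sketch (mine, not from GEB beyond the rules themselves)
of Hofstadter's MIU system. The program *is* the formal system; the
derivable strings are the 'tree of truths' growing layer by layer
inside the noise of all possible M/I/U strings:

    # Hofstadter's MIU formal system, enumerated breadth-first.
    # Axiom: MI. The derivable strings form a growing "tree of truths"
    # inside the much larger space of all M/I/U strings.

    def successors(s):
        """Apply each of the four MIU inference rules everywhere it fits."""
        out = set()
        if s.endswith("I"):              # Rule 1: xI    -> xIU
            out.add(s + "U")
        if s.startswith("M"):            # Rule 2: Mx    -> Mxx
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):      # Rule 3: xIIIy -> xUy
            if s[i:i+3] == "III":
                out.add(s[:i] + "U" + s[i+3:])
        for i in range(len(s) - 1):      # Rule 4: xUUy  -> xy
            if s[i:i+2] == "UU":
                out.add(s[:i] + s[i+2:])
        return out

    def theorems(axiom="MI", depth=4):
        """Breadth-first enumeration: the tree grows one layer per step."""
        layer, seen = {axiom}, {axiom}
        for d in range(depth):
            layer = {t for s in layer for t in successors(s)} - seen
            seen |= layer
            print(f"depth {d+1}: {sorted(layer)}")
        return seen

    theorems()

Run it and the layers print out one by one; anything not reachable from
the axiom (famously, MU) stays forever out in the noise.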
If the mind were actually 'running' an underlying theorem set, then
that would be a good characterisation. That's not what happens, IMO.
Gödelian considerations are a nice way to think about the ideas of
self-referentiality, as Hofstadter so brilliantly shows, but the mind's
theorem set - its model of causality in the universe 'out there' - is a
pale shadow of any mathematical description of the universe. Indeed, as
I've posted before - "Insight is the serendipity born of failure to
make a mistake" - it is the capacity to really stuff it up that is the
source of human genius. The entire symbolic noise space is accessible
to the mind (see the tree picture) - leaps of intuition can get you
anywhere, simply because a 'theorem' - a proposition for a causal
link - can be construed arbitrarily by any of us, at any time.
This is because the causality models of the brain don't have to
describe anything accurately at all! They only have to be good enough
to let the brain survive. You can believe in fairies and flat earths if
you want, or that the Great Pumpkin is the spirit in every tree. The
real facts are irrelevant. Put another way: if the model for the belief
"I_am_never_a_tiger_dinner_ness" fails to trigger the behavioural
responses needed to keep it true whilst in the presence of a tiger,
then you get eaten. This characterises evolutionary brain development -
the reticular activating system at work. Think something useful, not
necessarily accurate!
Let's say you look at the entire neural/glial charge manipulation
structure and model it as a theorem set. You can think of sensing as a
kind of 'Gödel numbering' activity and create theorems that represent
beliefs. You get to the end of it and what have you got? A recipe for a
brain, not a brain. You have characterised the brain and can then
communicate that characterisation to a third person, but it will not
help you make a mind, and it cannot tell you what it is like to be that
mind. It is simply what the brain happened to look like just then.
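For what it's worth, here is a toy of the classic prime-power Gödel
numbering (the symbol table is arbitrary and the 'sensing' framing is
only an analogy, mine not Gödel's):

    # Classic Godel numbering: each symbol gets a small code, and a
    # string becomes a product of prime powers. The encoding is
    # invertible, so a single integer carries the whole structure.

    SYMBOLS = {"M": 1, "I": 2, "U": 3}      # arbitrary toy symbol table
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]   # enough for short strings

    def godel_number(s):
        """Encode s as 2^c1 * 3^c2 * 5^c3 * ... using symbol codes ci."""
        n = 1
        for p, ch in zip(PRIMES, s):
            n *= p ** SYMBOLS[ch]
        return n

    def decode(n):
        """Invert the encoding by dividing out each prime in turn."""
        inv = {v: k for k, v in SYMBOLS.items()}
        out = []
        for p in PRIMES:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if e == 0:
                break
            out.append(inv[e])
        return "".join(out)

    n = godel_number("MIU")   # 2^1 * 3^2 * 5^3 = 2250
    print(n, decode(n))       # -> 2250 MIU

The number characterises the string completely, and a third person can
decode it - but the number is not the string doing anything. A recipe,
not a brain.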
This is getting a bit too much like a new thread I was about to start when
this one turned up. I think I'll cut loose and move to the other thread.
There's work to be done! The extropians are going to re-do scientific method
for the world. Hopefully before lunch.
cheers
Colin