From: Hal Finney (hal@finney.org)
Date: Mon Sep 02 2002 - 12:16:17 MDT
Lee Corbin writes, regarding Rudy Rucker's comments on Godel:
> No, I think that this exaggerates what Godel's proof implies. We, because
> of our realism, or our belief in the existence of numbers, or by going to
> a higher formal system than the system in which some axioms are written,
> can see that certain things are true even though those things *cannot*
> be proven from those axioms (i.e., in the original formal system).
>
> Perhaps soon, (2040?), there will be a formal system based on a finite
> number of axioms, which will nonetheless proceed to make conjectures,
> and study our universe and us, and be altogether vastly wiser than
> we are. It too will understand Godel's theorem, and it will have a
> proof, as do we, of the theorem using metamathematics (like we do) to
> prove its results. So a formal system can be just as powerful as we are.
I am confused about how a formal system can do all of these things.
A formal system is passive; it is a set of rules which ultimately allow
you to determine whether a given proof of a theorem is valid. Lee's
description sounds like an AI program.
What exactly is the relationship between a program, whether AI or
something else, and a mathematical formal system? Is there a one-to-one
correspondence between formal systems and computer programs?
I can imagine how to write a computer program to implement a given
formal system. Armed with the axioms and rules of inference, and given a
proposed proof, it can go through each step of the proof and mechanically
check whether that step follows from all of the previous steps, using
the various axioms and inference rules. At the end it can produce output
indicating whether the proof, and hence the proposed theorem, is valid or
invalid. This is a mechanical, finite process and no doubt many programs
have been written over the years to implement such proof checkers.
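To make this concrete, here is the sort of proof checker I have in mind,
as a minimal sketch in Python. The toy system is my own invention for
illustration (formulas are plain strings, "->" is implication, and modus
ponens is the only inference rule); real proof checkers are of course
far more elaborate:

    # Minimal proof checker sketch: a toy system where formulas are
    # strings and modus ponens is the only rule. Illustrative only.

    def follows_by_modus_ponens(conclusion, earlier):
        # conclusion follows if some earlier line A and some earlier
        # line "A -> conclusion" are both already established
        return any(("%s -> %s" % (a, conclusion)) in earlier for a in earlier)

    def check_proof(axioms, proof):
        # every line must be an axiom or follow mechanically from
        # the lines before it
        for i, line in enumerate(proof):
            earlier = set(proof[:i])
            if line not in axioms and not follows_by_modus_ponens(line, earlier):
                return False   # this step does not follow; proof invalid
        return True            # every step checked; last line is a theorem

    axioms = {"P", "P -> Q", "Q -> R"}
    print(check_proof(axioms, ["P", "P -> Q", "Q", "Q -> R", "R"]))  # True
    print(check_proof(axioms, ["R"]))  # False: R is neither axiom nor derived

The checking is purely mechanical, which is the whole point: validity of
a given proof is decidable even when theoremhood in general is not.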
This shows that there is a mapping from formal systems to programs.
However, such programs are seemingly only a subset of all possible
programs. Most programs do not work by reading in axioms and then
checking a proposed theorem; the word processor I am typing this message
in doesn't seem to fit that structure at all.
The question is then whether there is a mapping in the other direction,
from programs to formal systems. Is it the case that for any program, we
can define a formal system which is somehow isomorphic to the program?
How would that work; would each step of the program be like adding
another line to a proof in the formal system? The axioms would somehow
be derived from the program source code, and the description of the
hardware architecture?
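One way to picture it, purely as a sketch of my own: take the program's
initial state as the sole axiom and the machine's step function as the
sole rule of inference, so that the theorems are exactly the states the
program can reach. For a tiny counter machine:

    # Sketch of the program-to-formal-system direction: machine states
    # play the role of formulas, the initial state is the one axiom,
    # and the step function is the one rule of inference.

    def step(state):
        # the "inference rule", read off from the program and hardware
        pc, x = state
        if pc == 0:                # instruction 0: x = x + 1
            return (1, x + 1)
        if pc == 1 and x < 3:      # instruction 1: if x < 3 goto 0
            return (0, x)
        return (2, x)              # instruction 2: halt

    axiom = (0, 0)                 # initial state = the sole axiom

    theorems, state = [axiom], axiom
    while state[0] != 2:           # close the axiom under the rule
        state = step(state)
        theorems.append(state)

    print(theorems)  # each reachable state is a "theorem" of this system

Under this reading, running the program for one step is exactly adding
one more line to a proof.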
If this is possible, so that in particular an AI program corresponds
to a formal system, and that AI program begins making mathematical
conjectures and proofs just as we do, it doesn't seem that the program
would be basing its proofs on its own formal system. If there is a
formal system defining this program, it would presumably be enormously
complex, at least as complex as the program itself, which for human-level
AI would probably be billions of bytes long at least.
But no mathematician works with a formal system whose axioms would take
an entire encyclopedia to list. They work with relatively small axiom
sets. The Zermelo-Fraenkel axioms of set theory take up fewer than a
dozen lines. The Peano axioms defining the natural numbers are only
slightly larger.
Human mathematics is done with axiom systems that will fit on a single
page. The corresponding axiom system which defines an AI's behavior, if
it exists, will take up dozens if not hundreds of volumes. Clearly the
AI is not doing mathematics with its own axiom system. It is bound
and constrained by that system, but it is not using that system to do
its mathematics.
Again I run into confusion when I try to apply Godel's reasoning
to the hypothetical, enormous formal system which defines the AI's
software operation. Godel's theorem essentially proves that any
sufficiently powerful formal system, one able to express basic
arithmetic, is either inconsistent or incomplete.
Usually we are dealing with small axiom sets like those I described
above, and we have a strong intuition that inconsistency is not possible.
(Although note that the original formulation of set theory was later
found to be inconsistent, via Russell's paradox, despite the mathematical
intuition of people who were just as smart as mathematicians today.)
Hence Godel's theorem is usually described as showing that formal systems
are incomplete - that there are truths that cannot be proven.
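There is also a computational way to see the force of the theorem (my
gloss, not Godel's original argument). If a sound formal system could
prove, for every program and input, either "it halts" or "it never
halts", then enumerating its proofs would decide the halting problem,
which Turing showed is impossible. Here is the shape of that impossible
decider, with the proof search left as a hypothetical stub:

    # Shape-only sketch: proves_within stands in for a proof search over
    # some fixed formal system, which we have not specified here. If the
    # system soundly proved all halting facts, this loop would decide
    # halting, a contradiction.

    def proves_within(system, statement, max_length):
        # hypothetical: does `system` prove `statement` with a proof of
        # length <= max_length? (Checking each candidate proof is
        # mechanical, so for a concrete system this is computable.)
        raise NotImplementedError("illustrative stub")

    def decide_halting(system, program, input_data):
        n = 0
        while True:
            if proves_within(system, "halts(%s, %s)" % (program, input_data), n):
                return True
            if proves_within(system, "~halts(%s, %s)" % (program, input_data), n):
                return False
            n += 1   # search ever-longer proofs; the completeness
                     # assumption would guarantee this loop terminates

So any sound system strong enough to talk about programs must leave some
true halting facts unprovable.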
It's one thing to look at a single page of axioms and see that it is
"obviously" consistent. But how can you do so when looking at a
million pages of axioms? That is far beyond the abilities of a human
mathematician, or those of a near-human-level AI.
And furthermore, what does it mean for the formal system specifying a
piece of software to be consistent or inconsistent? Here I come back
to the question of what exactly it means to say that a program "is" a
formal system. With mathematical formal systems, consistency is easy to
define: T and ~T cannot both be theorems, where ~ means "not". This is
a purely syntactic definition: there can be no two strings of characters,
each accepted as a valid proof by the formal system, whose last lines
are identical except that one has the ~ in front of it.
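The check itself is trivial to state in code, as a sketch, given the set
of theorems; the rub is that the real set of theorems is infinite:

    # Syntactic consistency check as just described: no theorem T may
    # appear alongside ~T. Since any real theorem set is infinite, a
    # scan of a finite snapshot like this can only ever refute
    # consistency, never confirm it.

    def is_consistent(theorems):
        return not any(("~" + t) in theorems for t in theorems)

    print(is_consistent({"P", "P -> Q", "Q"}))  # True (so far)
    print(is_consistent({"P", "~P"}))           # False: both P and ~P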
How would you apply this to a program? What does it mean for it to
be consistent or inconsistent? Somehow the theorems or proofs of the
formal system must be identified with the program state; and then what
does the "not" symbol "~" mean?
Summing up, it seems to me that Godel's theorem does not apply in quite
the same way to an AI program as it does to the Peano axioms that
describe the natural numbers. The whole question of consistency versus
completeness has
to be judged on very different terms when dealing with such a complex
system, and it's not clear what inconsistency even means in this case.
I think there is a tendency to confuse the formal system which describes
the AI with the one that the AI is using to make its mathematical proofs,
as though an inconsistent AI is one that starts with the Peano axioms
and manages to prove that 1=2. But I don't believe that is the case at
all; an AI which did this would simply be mistaken, just like a human
who managed it (and people make such mistakes all the time). Yet such
behavior would be completely consistent under any hypothetical formal
system which defines the AI, since that system, if it exists, would be
in some sense isomorphic to the AI software program itself.
Hal