From: Hal Finney (hal@rain.org)
Date: Mon Feb 10 1997 - 10:05:45 MST
From: John K Clark <johnkc@well.com>
> In general, anything that could make the slightest claim to be intelligent,
> even a simple computer program, can not predict what it will do.
There are some things about John's definition of free will which are unclear
to me. First, would simple computer programs really be said to have free
will?
I think John's original definition referred to "beings" with free
will, and he explained that before asking whether a program had free
will we should first ask if it is a being, which I gather would
imply consciousness at a minimum. Since simple computer programs
aren't conscious, then this would mean that they don't have free will.
(Perhaps John in the above meant a program which was relatively "simple"
for a conscious computer program, something which would be very complex
by our standards.)
I am also not clear on how to define what it means to make a prediction.
Suppose we ask a conscious program, "What will you do if X happens?" The
program runs a simulation of itself where X happens, and comes up with
the result, which it reports (perhaps with a probabilistic component).
Now we might say that this is not an accurate prediction, because if X
actually does happen, the program won't be in the state it thought it
would be in when it made the prediction, simply by virtue of having made
the prediction. So when X does happen, the program might do something
else. This would illustrate the program's inability to predict what it
would do.
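To make this concrete, here is a toy sketch (in Python; the Program class
and its arbitrary "behavior rule" are invented purely for illustration and
are not meant as a model of a conscious program). The program predicts its
response to X by simulating a copy of itself, but remembering that it was
asked is itself a change of state, so the copy it simulated is not the
state it is in when X actually happens:

    import copy

    class Program:
        def __init__(self):
            self.memory = []   # everything the program has experienced

        def respond_to(self, stimulus):
            # Behavior depends on the whole remembered history.
            self.memory.append(("happened", stimulus))
            return "calm" if len(self.memory) % 2 == 0 else "startled"

        def predict_response_to(self, stimulus):
            # Simulate a copy of the current self and report what it does.
            prediction = copy.deepcopy(self).respond_to(stimulus)
            # But making (and remembering) the prediction changes the real state.
            self.memory.append(("was asked about", stimulus))
            return prediction

    p = Program()
    predicted = p.predict_response_to("X")   # the simulated copy says "startled"
    actual = p.respond_to("X")               # the real program says "calm"

The prediction was faithful to the state the program was in when it was
asked; it just no longer describes the program that finally meets X.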
But I don't think this is fair to the program. It was asked to predict
what it would do when X happened. We should have asked it, "What will
you do if you are asked to predict what you will do when X happens, and
then X actually happens?" In that case the computer could have accurately
predicted what it would do in these circumstances.
Now of course if X happens after the computer is asked _this_ question,
again it might do something different from the prediction. But once
again the problem is not that the computer program is unpredictable
(to itself or anyone else), but rather that the question being asked
does not correspond to the circumstances. We would have had to ask it,
"What will you do if you are asked to predict what will happen under the
circumstances where you predict what you will do when X happens, and then
X happens, and then after making that prediction, X actually happens?"
This is obviously an infinite regress. But again I would not conclude
from this that the computer is unable to predict its future actions, but
rather that it is impossible to usefully word certain types of questions
about future actions to a program with a memory of the questions it
was asked. If the wording is supposed to imply that the program had
never heard the question, then the circumstances being asked about are
never going to occur.
You could ask it this: "What would you do if X happens after your memory
has been reset to the state it was in before I asked this question?"
In that case the program could accurately and reliably predict its
behavior. In principle, a person with access to an accurate neural-level
description of his brain could do so as well (although obviously the
memory resetting would be more difficult in that case).
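Continuing the toy sketch from earlier (again only an illustration, reusing
the invented Program class), the memory-reset question can be asked and
answered reliably: snapshot the memory, let the program make its
prediction, restore the snapshot, and then let X happen:

    import copy

    q = Program()                           # same toy class as before
    snapshot = copy.deepcopy(q.memory)      # state before the question is asked
    predicted = q.predict_response_to("X")  # asking still adds to its memory
    q.memory = snapshot                     # reset memory to the earlier state
    actual = q.respond_to("X")
    print(predicted == actual)              # True: the prediction holds

Here the circumstances the program was asked about are exactly the
circumstances that occur, so its self-simulation is a reliable guide to its
behavior.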
Would people's ability to answer such questions imply that they don't
have free will? Do we really want to rely on the fact that the memory of
questions changes the state of the system to show that it has free will?
To me this is a somewhat incidental characteristic of conscious (and many
unconscious) systems and does not seem fundamental enough to distinguish
which ones have free will.
Hal