From: John K Clark (johnkc@well.com)
Date: Mon Feb 10 1997 - 21:36:52 MST
On Mon, 10 Feb 1997 Hal Finney <hal@rain.org> wrote:
>I think John's original definition referred to "beings" with free
>will, and he explained that before asking whether a program had free
>will we should first ask if it is a being, which I gather would
>imply consciousness at a minimum.
Actually, I think it would imply consciousness at a maximum. I like my
definition of free will, but I don't have a definition of consciousness
that's worth a damn, even though I experience it. Also, unlike free will,
I don't have a surefire test for it. The Turing Test is the best anybody has
come up with; it's not perfect, but it's all we have, and I think it's all
we'll ever have.
>I also am not clear on how to define what it means to make a
>prediction.
I ask the computer program: "Suppose you decided to search for the
smallest even number greater than 4 that is not the sum of two primes
(ignoring 1 and 2) and then stop. Would you ever stop?"
Not only will the computer program be unable to answer that question,
I can't answer it either; nobody can.
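For concreteness, here is a little Python sketch (the code and names are
mine, purely illustrative) of the search the question describes. The loop
halts only if there is an even number greater than 4 that is not the sum
of two odd primes, and nobody knows whether such a number exists:

    def is_odd_prime(k):
        # Primality test that excludes 1 and 2, as the question asks.
        if k < 3 or k % 2 == 0:
            return False
        i = 3
        while i * i <= k:
            if k % i == 0:
                return False
            i += 2
        return True

    def is_sum_of_two_odd_primes(n):
        # True if the even number n can be written as p + q, both odd primes.
        return any(is_odd_prime(p) and is_odd_prime(n - p)
                   for p in range(3, n // 2 + 1, 2))

    n = 6
    # Whether this loop ever exits is exactly the open question.
    while is_sum_of_two_odd_primes(n):
        n += 2
    print(n)  # reached only if a counterexample exists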
>The program runs a simulation of itself where X happens, and comes
>up with the result, which it reports (perhaps with a probabilistic
>component). Now we might say that this is not an accurate prediction,
>because if X then actually does happen, the program won't be in the
>state it thought it was going to be in when it made the prediction,
>simply by virtue of having made the prediction.
Yes, that is another problem. For the mind to understand itself totally it
must form a perfect internal model of itself, but that is impossible,
because the brain as a whole must have more members than the part of it
that is just the model.
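A toy Python sketch (mine, offered only to illustrate the regress): the
perfect model is a part of the brain, so it must contain a model of itself,
which must contain a model of itself, and so on without end:

    import sys
    sys.setrecursionlimit(50)  # cut the regress off so the demo fails fast

    def build_model(brain_parts):
        # A "perfect" model reproduces every part of the brain, including
        # the model itself -- which forces another full model inside it.
        return {"parts": brain_parts, "model": build_model(brain_parts)}

    try:
        brain = build_model(["memory", "perception", "planning"])
    except RecursionError:
        print("the perfect self-model never finishes building")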
>But I don't think this is fair to the program. It was asked to
>predict what it would do when X happened.
It's not important whether an outside party asks us to make a prediction,
because we constantly ask ourselves the question "What am I going to do
next?" The answer we receive is often wrong: we change our mind and surprise
ourselves, or the answer is "I don't know, I'll have to think about it."
We know for sure what we're going to do only when we do it, and not before.
>You could ask it this: "What would you do if X happens after your
>memory has been reset to the state it was in before I asked this
>question?" In that case the program could accurately and reliably
>predict its behavior.
But it would still take the computer 10 minutes to figure out what it will do
10 minutes from now. Besides, if its memory is erased, what's the point?
>In principle, a person with access to an accurate neural-level
>description of his brain could do so as well [...] Would people's
>ability to answer such questions imply that they don't have free
>will?
The electronic computer is much faster than my old steam-powered biological
brain, so it figures out in 10 seconds what I'm going to do in 10 minutes.
But if the computer then tells me its prediction about me, and my
personality is such that out of pure meanness I always do the opposite of
what somebody tells me I'll do, then the prediction is wrong.
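That diagonal move is easy to make concrete. A small Python sketch (all
names hypothetical, mine not Hal's) of a contrarian who falsifies any
prediction that is announced to him:

    def contrarian(announced_prediction):
        # Out of pure meanness, do the opposite of whatever was predicted.
        return "stay" if announced_prediction == "go" else "go"

    def predictor(perfect_model_of_contrarian):
        # Suppose the predictor simulates the contrarian flawlessly *before*
        # the announcement. The announcement itself then becomes new input
        # that the simulation never saw.
        return perfect_model_of_contrarian(None)  # None = nothing announced yet

    prediction = predictor(contrarian)
    actual = contrarian(prediction)
    print(prediction, actual, prediction == actual)  # never equal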
>Do we really want to rely on the fact that the memory of questions
>changes the state of the system to show that it has free will?
I think we do, because we're the ones asking ourselves the questions.
John K Clark johnkc@well.com