From: John K Clark (johnkc@well.com)
Date: Thu Dec 12 1996 - 10:03:09 MST
-----BEGIN PGP SIGNED MESSAGE-----
On Wed 11 Dec 1996 Eliezer Yudkowsky <sentience@pobox.com> Wrote:
>what is this whole business where every time I give a
>rational reply to an objection (rational or otherwise),
>people respond with a personal attack?
Eliezer, I think you're a fine fellow, and if you have interpreted anything I
said as a personal attack I apologize; I assure you I did not intend it that
way. I think some people are a bit put off by your arrogance, but not me.
I like arrogant people; I'm an arrogant bastard myself, and it's always nice
to talk to a fellow member of the brotherhood. I look at it this way:
if I have superior knowledge or intelligence, then I've earned the right to be
arrogant, and if I've demonstrated exceptional stupidity or uncommon
ignorance, then my arrogance performs an altruistic service by generating
amusement and mirth in all I come in contact with.
>I am not claiming that cognitive causality will not run on a
>Turing machine.
Good, I'm glad you cleared that up.
>It is not particularly relevant whether our causal-analysis
>modules will run on a Turing Machine
There is nothing more relevant. If it were proved that we cannot be run on a
Turing Machine, then I would abandon my interest in Science and Mathematics
and become a Holy Roller. It would be the only logical thing to do.
>I am saying that, when discussing "causality", one must be
>aware of how one's own cognitive architecture affects one's
>viewpoint.
Not possible. We have only a hazy understanding of our motives and almost no
understanding of our individual cognitive architecture. There is reason to
think that this ignorance about ourselves will not improve dramatically,
regardless of how intelligent we become. You may be able to obtain a deep
understanding of my mind, and I could do the same with you, but we will never
understand ourselves.
In 1936 Turing proved that the Halting Problem is unsolvable: there is no
general way to determine what a program will do other than run it and see.
A program does not know what it will do next, and to me that means it has
free will.
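To sketch the idea (my own illustration, not Turing's notation) in Python-ish
pseudocode, where halts() is a purely hypothetical oracle assumed only for the
sake of contradiction:

    # Hypothetical oracle: returns True if program(argument) would
    # eventually halt. Turing's argument shows it cannot exist.
    def halts(program, argument):
        ...

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about
        # the program applied to its own text.
        if halts(program, program):
            while True:      # loop forever when the oracle says "halts"
                pass
        else:
            return           # halt when the oracle says "loops forever"

    # contrary(contrary) halts exactly when the oracle says it does not,
    # so no general halts() oracle can exist.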
If your brain hardware were faster than mine, you could form a mental model of
me, run it, and predict everything I would do. However, you could not do the
same thing with yourself. For the mind to totally understand itself, it must
form a perfect internal model of itself. The model must not only describe the
rest of the mind in every detail, it must also depict the model itself
with a micro model. This micro model must represent the rest of the brain and
the micro model itself with a micro-micro model. This path leads to an
impossible infinite regress. Both the brain and the model must be made up of
a finite number of elements. If we are not to lose accuracy, the components of
the brain must have a one-to-one correspondence with the elements of the
model. But this is impossible, because the brain as a whole must have more
members than the part that is just the model.
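To make the counting explicit (my own back-of-the-envelope restatement, not a
formal proof): suppose the brain has N components and the model occupies M of
them, with M < N because the model is a proper part. A lossless model needs at
least one element for each of the N things it must represent, namely the N - M
components outside it plus its own M elements, so M >= N. That contradicts
M < N for every finite N.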
This argument does not hold if the mind has an infinite number of components,
because an infinite set can be put into one-to-one correspondence with a
proper subset of itself (for example, n -> 2n pairs every natural number with
an even number). From this I conclude that we have free will but God does not.
In a different post, on a different subject, you indicated that you were
certain that a Quantum Computer will be built in the next 15 years. I don't
see how you can be so certain. I think there is a 30% chance, and only a 60%
chance that such a machine would not violate the laws of Physics; this area
is at the very frontier of knowledge. The rapid progress made in just the
past year is very encouraging, particularly the work in Quantum Error
Correction, but I think it would be overstating it to say that they have a
proof of concept. I still think there is a 40% chance we'll have to settle
for old-fashioned Drexler-style Nanotechnology.
You also said you were certain that faster-than-light travel and
Picotechnology were possible. Care to elaborate?
John K Clark johnkc@well.com
-----BEGIN PGP SIGNATURE-----
Version: 2.6.i
iQCzAgUBMrA/8H03wfSpid95AQHxrgTvRLixLxTQYO3oMdrbkrDl/8ChqBxyNP3Z
GoETMU4Bz5nCcjqo7Mvx5seaBSDf1Ou1e0IbvYIZ1XzrpWWN/u4qdAbHTAUcR1S0
EyhNtBZorFdZApEaTWdqoF28/sMOMSRvMW0Ve44BbExBjbf4V4RPFtGHNdmOjj5j
eG1J9iAjbAxz+YAT8Pe+YvKlPnOAnQuywSp0NCSefzrvJRHGmZM=
=PtOT
-----END PGP SIGNATURE-----