From: John Clark (jonkc@worldnet.att.net)
Date: Sun Nov 26 2000 - 22:46:16 MST
Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> The Turing diagonalization argument proves that absolute self-knowledge is
> impossible,
True.
> Nonetheless, if a transhuman can have "effectively perfect" self-knowledge
I don't see how. Tomorrow you might find a proof of the Goldbach Conjecture
and show it's true, or you might find a counterexample and show it's false,
or it might be Turing unprovable. For a statement like Goldbach, unprovable
means true: any counterexample could be verified by a finite computation, so
if the conjecture were false that fact would eventually be discoverable; so
you'll never find a counterexample to prove it wrong, but no finite proof
exists either, so you'll never find a way to show it's correct. You might not
find a proof or a counterexample, not in a year, not in a million years, not
in 10^9^9^9 years, not ever. You won't even know your task is hopeless, so
you might just keep plugging away at the problem for eternity and make
absolutely zero progress. We don't know even approximately how this might
turn out, because we can't assign meaningful probabilities to the various
possible outcomes; we don't know and can't know what, if anything, our mind
will come up with. I just don't know what I'm going to do tomorrow because I
don't understand myself very well, and the same would be true if I were a
chimp or a human or a Transhuman.
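To make the "plugging away forever" point concrete, here's a minimal Python
sketch of the brute-force counterexample search (my illustration only, not
part of the argument itself); if Goldbach is true, this loop runs forever and
never tells you so:

    def is_prime(n):
        # Trial division: slow but good enough for an illustration.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def goldbach_holds(n):
        # True if the even number n is the sum of two primes.
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not goldbach_holds(n):
            print("Counterexample:", n)  # never reached, if the conjecture is true
            break
        n += 2  # move on to the next even number

No finite amount of watching it grind through even numbers ever distinguishes
"true" from "still searching".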
John K Clark jonkc@att.net