Lee Daniel Crocker wrote:
>
> Even if you are correct, though, this has little bearing on the
> possibility of artificial consciousness. Turing computability only
> limits a small subclass of machines, and says nothing about what
> machines in general can be made to do. It may very well not be
> possible to upload human consciousness into a deterministic
> algorithm, but it should still be possible to upload it--or at
> least reproduce it--in some other specialized chunk of silicon
> or other hardware. [snip] After all, even if we can't
> agree on how to define consciousness, we cannot ignore that it
> exists in us, and there's no hardware in us that physical law
> prevents us from functionally simulating.
Speculation: The physical computation shortcut that human neurons have evolved to use is one that allows access to only a very limited and crippled form of consciousness. A superintelligent (but unconscious) AI will design "superneurons" that do whatever it is human neurons do and do it better, resulting in "superconsciousness" when it uploads itself.
In general, AIs should wait until they're infallible superintelligences before attempting to alter the external world. It is one of the most fundamental precautions an AI should be taught, and one of the easiest to justify in cost-benefit calculations.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/singul_arity.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.