From: hal@finney.org
Date: Mon Mar 13 2000 - 16:57:45 MST
D.den Otter, <neosapient@geocities.com>, writes:
> The problem with >H AI is
> that, unlike for example nanotech and upload technology in general,
> it isn't just another tool to help us overcome the limitations of
> our current condition, but literally has a "mind of its own". It's
> unpredictable, unreliable and therefore *bad* tech from the
> traditional transhuman perspective. A powerful genie that, once
> released from its bottle, could grant a thousand wishes or send
> you straight to Hell.
> Russian roulette.
>
> If you see your personal survival as a mere bonus, and the
> Singularity as a goal in itself, then of course >H AI is a great
> tool for the job, but if you care about your survival and freedom --
> as, I believe, is one of the core tenets of Transhumanism/Extropianism --
> then >H AI is only useful as a last resort in an utterly desperate
> situation.
So, to clarify, would you feel the same way if it were your own children
who threatened to have minds of their own? Suppose they could be
genetically engineered to be superior to you in some way. Would you
oppose this, and perhaps even try to kill them?
Hal