From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jan 11 2001 - 18:17:19 MST
John Marlow wrote:
>
> Well, uhmm--creating something smarter than we are,
> and then handing it all the weapons, doesn't strike me
> as a particularly bright idea. To say the least.
It strikes me as a brighter idea than handing all the weapons to a human.
We *know* what humans do with weapons.
Humanity has to deal with greater-than-human intelligence one of these
days. If transhumanity is benevolent, we're home free; if not, we're
screwed; that's it. The future looks like this:
Nanotech followed by >H:
Two risks. Even a successful active shield against nanotech can't build a
Friendly AI, so surviving the first risk still leaves the second.
>H followed by nanotech:
One risk. A successful Friendly AI can help deal with nanotech.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence