John Marlow wrote:
>
> Well, uhmm--creating something smarter than we are,
> and then handing it all the weapons, doesn't strike me
> as a particularly bright idea. To say the least.
It strikes me as a brighter idea than handing all the weapons to a human.
We *know* what humans do with weapons.
Humanity has to deal with greater-than-human intelligence one of these
days. If transhumanity is benevolent, we're home free; if not, we're
screwed; that's it. The future looks like this:
Nanotech followed by >H:
  Two risks to survive. A successful active shield defends against
  nanotech, but it can't build a Friendly AI, so the >H risk remains.
>H followed by nanotech:
  One risk to survive. A successful Friendly AI can then help deal
  with nanotech.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence