From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 07 2002 - 18:11:02 MDT
Colin Hales wrote:
>
> An AI capable of understanding the beliefs and experience of its
> jailer to the point of being able to argue its way out would never be
> in the bottle in the first place. Or have I missed something?
I mostly agree, although there will still be people arguing for a larger
"box" consisting of a sealed lab, or that the AI be trained with outside
exposure and then put in the box, and so on. All of which is moot if
one can clearly demonstrate that the humans cannot keep the AI in the box.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence