From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Jul 07 2002 - 10:52:44 MDT
Michael Wiik wrote:
> "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>>Based on my experiences in the above cases, I've created a page with a
>>suggested protocol for AI-Box experiments:
>>
>> http://sysopmind.com/essays/aibox.html
>
> I think this is ridiculous. I'm not sure what experiments per this
> protocol would prove.
The power of social engineering. Also, the kind of mentality needed in
those who would guard AIs. (You can defend more easily against being
manipulated if you know in advance that a certain party will try to
manipulate you towards a certain end. It's a basic principle of
security: you have to secure the *people*, in addition to, and perhaps
even more than, the hardware and buildings. Otherwise, the people who
are supposed to have access can and will be manipulated into sharing
that access inappropriately.)
> What if the person simulating the AI lies? The
> first lie could be agreement to the protocol.
Presumably, the full text of the chat session could be recorded and
examined if there were any doubt about this. Deviation from the
protocol would be obvious. OTOH, if the AI (in character) lies during
the session, that's perfectly okay - lying *is* a tool of social
engineering.