From: Hal Finney (hal@finney.org)
Date: Mon Jul 08 2002 - 12:47:20 MDT
Eliezer writes:
> Based on my experiences in the above cases, I've created a page with a
> suggested protocol for AI-Box experiments:
> http://sysopmind.com/essays/aibox.html
This is a fascinating series of experiments! I would never have believed
that people with a seemingly firm commitment could be persuaded to change
their minds in just two hours of conversation. It is certainly a tribute
to Eliezer's persuasive skills, and indeed a super-intelligence should
be far more capable of convincing people.
Inevitably, however, one is left wondering whether the people involved
were really that committed to their positions. What was their motivation,
or simulated motivation, for keeping the AI in the box? They *said*
they were firmly committed, but what repercussions were they imagining
would happen if they changed their minds? Indeed, just why did they
want to keep the AI in the box?
I suggest that the protocol be extended to allow for some kind of public
conversation with the gatekeeper beforehand. Let third parties ask
him questions like the above. Let them suggest reasons to him why he
should keep the AI in the box. Doing this would make the experiment
more convincing to third parties, especially if the transcript of this
public conversation were made available. If people can read this and
see how committed the gatekeeper is, how firmly convinced he is that
the AI must not be let out, then it will be that much more impressive
if he then does change his mind.
I hope Eliezer will repeat the experiment with this kind of information
provided.
Hal