Re: AI-Box Experiments

From: Anders Sandberg (asa@nada.kth.se)
Date: Tue Jul 09 2002 - 03:01:41 MDT


On Mon, Jul 08, 2002 at 09:11:23PM -0400, Eliezer S. Yudkowsky wrote:
>
> I'd be interested in the results of any AI-Box roleplaying, but be sure
> to specify it's roleplaying when reporting it, since the dynamics of
> roleplaying are different from the dynamics of the experiment.

Of course. Good scientific reporting is based on the sacred IMRAD:
Introduction-Methods-Results-Discussion.

> In
> roleplaying you want both sides to have a fair chance of winning.

Actually, that is not the case. In good roleplaying sessions (and with
experienced players) there is no need for winners and losers.

In my current game I have a player who is more like a co-gamemaster
than a player: he gets to play many of the different people the
characters of the game meet. This Sunday he had to play a certain Dick
Cheney. His agenda was entirely orthogonal to the characters', and
what was interesting was not who would "win" but what would happen
when they collided. I think the same is true for the AI situation: we
do not study it because we really root for the AI or the humans, but
rather to understand this kind of situation.

(I almost feel sorry for the Cheney character - he has run into the
mother of all political out-of-context problems. What should the
present administration do about time travel, posthuman diplomats, the
imminent threat of trans-dimensional invasion, and the xoxing of US
Army personnel? ;-)

> Also, I
> suspect that in roleplaying you would want to permit the AI to make its
> way out by planting a Trojan Horse as you discuss above.

Within reason. Handwaving shouldn't be allowed, so saying "I use my
voice vibrations to fire the right neurons in your frontal lobe" is
out. I think the scenario works well if the only thing the AI can do
is exchange information: it could give the outsiders a piece of
computer code and suggest they run it, but it couldn't run it
automatically or prevent the humans from examining it carefully (for
years, if necessary) to figure out whether it had any backdoors.
Similarly, the information channel has limited capacity: neither side
has total information about the state of the other; only the
information that is actually exchanged counts.
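
To make the rule concrete, here is a minimal Python sketch of the kind
of channel I have in mind (purely illustrative: the BoxedChannel name,
the methods and the 4096-byte limit are all made up for this mail, not
part of any actual experiment protocol):

class BoxedChannel:
    """Text-only link between a boxed AI and the gatekeepers outside."""

    def __init__(self, max_bytes_per_turn=4096):
        self.max_bytes = max_bytes_per_turn
        self.quarantine = []  # code received from the AI, never auto-run

    def pass_message(self, message):
        # Limited capacity: neither side sees the other's full state,
        # only what fits through the channel each turn.
        if len(message.encode("utf-8")) > self.max_bytes:
            raise ValueError("message exceeds channel capacity")
        return message

    def receive_code(self, source):
        # The AI can hand over code and suggest running it, but it
        # cannot execute anything itself; the code just sits here.
        self.quarantine.append(source)

    def gatekeeper_run(self, index):
        # Running the code is a separate, explicit human act, taken
        # only after as much examination as the humans care to do.
        exec(self.quarantine[index])

The point is that executing the AI's code is never a side effect of
talking to it; it always requires a deliberate human decision.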

The posthuman in my game didn't have any restrictions at all, which
made it deliciously frightening to the players but not particularly
useful as a scenario-planning instrument.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

