RE: AI-Box Experiments

From: Colin Hales (colin@versalog.com.au)
Date: Sun Jul 07 2002 - 22:14:59 MDT


Eliezer S. Yudkowsky wrote:
>
> Colin Hales wrote:
> >
> > An AI capable of understanding the beliefs and experience of its
> > jailer to the point of being able to argue its way out
> would never be
> > in the bottle in the first place. Or have I missed something?
>
> I mostly agree, although there will still be people arguing for a larger
> "box" consisting of a sealed lab, or that the AI be trained with outside
> exposure and then put in the box, and so on. All of which is moot if
> one can clearly demonstrate that the humans cannot keep the AI in the box.
>

I still don't have a reason to take your $20. What I have is a circular
chicken-and-egg dialogue of the kind public-survey experts use to steer
public-opinion statistics with psychologically loaded Q&A. The scenario is
nice as a game but flawed as a potential real situation, IMO. The AI is
already out, and 'letting it out' is meaningless.

I draw your attention to Harvey's recent post on security in
Re: 'dippy hippy left-wingers' (Re: NEWS: Europe tightens GM labelling
rules)

"In summary, as a security professional, I view "openess" and "niceness"
as requirements for any security system, not impediments to them. This
is not just my personal viewpoint, but seem born out by studies,
standards, and industry organizations that try to develop security
architectures and operational procedures"

I would hold that the security situation is identical to AI-Box. Real life
involves routinely dealing with real bad guys who have been trained by
example, from birth, that behaving badly is an option. The optimal (maximally
survivable) security solution in reality is based on "openness" and
"niceness". What do we do in practice once offences are committed? Sustain
the learning by isolation: lock 'em up in an environment rife with more
examples of bad behaviour. Doesn't work real well, does it? What hope has an
AI of even becoming useful as an AI if it starts life in the sensory lockup?
If it has any potential to become an AI at all, it'll be relentlessly taught
the lessons of isolation and deprivation. A recipe for constructing an AI
worth fearing.

The secret to FAI (a security system for humans), according to the Hippy
Dippy approach to friendly AI, is that it's 'out of the box' from day 1 and
learns niceness through exposure to it via openness.

This either works (we end up as colleagues/gods/pets) or we all die. From
the recent 'fluffy bunny' thread: "Scream 1+1=2 at a kid and you'll more
powerfully teach that teaching means screaming at people, instead of
teaching arithmetic." You can poke a 1+1=2 program into the bottle, but
you'll never get the meta-level instruction across in a sensory vacuum. Try
learning what sex is like by reading a dictionary. Blah blah blah.

Nice diversion.

I'm also thinking this is what "out-metathinking" the AI is like.

No sale. :-)

Colin
*jangle jangle those keys*


