[SL4] Re: AI testing and containment Re: Programmed morality
From: Dale Johnstone (dalejohnstone@email.com)
Date: Tue Jul 18 2000 - 15:19:57 MDT
Apologies for the delay in replying to these threads; I don't seem to have received Eliezer's SL4 posts on the 9th. Curiously, though, I *did* get Eli's post from the Sing list that same day.
>From: Eliezer S. Yudkowsky <eliezertemporarily@s...>
>Date: Sun Jul 9, 2000 8:58pm
>Subject: Re: AI testing and containment Re: Programmed morality
>Dale Johnstone wrote:
>>
>> Brian Atkins wrote:
>> >
>> >However to really test what an AI will do once it is
>> >"loose" you would have to provide it with a quite awesome
>> >simulation of the real world (Matrix-like) and then see what it
>> >does to the humans. I don't think we will be able to do that even
>> >if we had the hardware. So I would be interested to know of other
>> >possible ways to test what the AI would do.
>>
>> What if the grass is pink, not green? Will it matter?
>
>Sure, because then it would be obvious that it was a test. Our world
>hangs together. It started with the Big Bang, evolved, and wound up
>with us. Aside from the Fermi Paradox and possibly qualia, there
>are no major holes in the picture. Now, you put an AI in The
>Village and it's gonna be pretty obvious that the whole thing is a
>simulation. Maybe a stupid AI would fall for it, but not any AI
>smart enough for us to worry about. A superintelligence could
>probably look at its surroundings and not only deduce that the whole
>thing was a simulation, but deduce ab initio the nature of evolution
>and that the most likely explanation for the simulation was a group
>of evolved beings worried about the motives of superintelligence...
I'm assuming an AI will be stupid before it's smart. Details like the colour of the grass are not important at this stage. If we see that the AI is turning into something 'bad', we can do something about it well before it gets out of hand.
Obviously, I agree, a superintelligence will quickly figure out the nature of its reality.
As for the Fermi paradox, I don't think it's much of a paradox. Maximally compressed information sounds like noise. Or we're part of someone else's simulation. Actually, there are probably more simulations of the Universe running than Universes. I expect we'll create millions trying to figure out our own. It should even be possible to travel up & down meta-universes. Hofstadter would love that!
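To put a rough number on "sounds like noise" (just a toy Python sketch, nothing rigorous): well-compressed data has near-maximal byte entropy, so a simple statistical look can't tell it apart from random bytes.

import os
import zlib
from collections import Counter
from math import log2

def byte_entropy(data):
    # Shannon entropy of the byte distribution, in bits per byte (max 8).
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Highly redundant source: the same phrase repeated many times.
text = b"the quick brown fox jumps over the lazy dog " * 20000

compressed = zlib.compress(text, 9)   # squeeze out the redundancy
noise = os.urandom(len(compressed))   # genuinely random bytes for comparison

print("original:   %.2f bits/byte" % byte_entropy(text))
print("compressed: %.2f bits/byte" % byte_entropy(compressed))
print("random:     %.2f bits/byte" % byte_entropy(noise))

The compressed and random streams score nearly the same, both close to the 8 bits/byte maximum, while the original text sits well below. That's roughly what I mean: a maximally efficient civilisation's signals would look statistically like static to us.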
Regards,
Dale Johnstone.