From: Joseph Sterlynne (vxs@mailandnews.com)
Date: Tue Oct 26 1999 - 16:05:32 MDT
> Billy Brown
>I think this whole discussion is wandering off into left field. The
>question here is whether you can control sentient AI (and eventually SI) by
>running it in a virtual world instead of letting it interact with the real
>one.
Thank you for attempting to maintain focus; it can be easy for main points
and assumptions in discussions (here and elsewhere) to simply get lost.
That leads to a related problem: if an idea is run in a virtual discussion, can it find its way out?
>1) [. . .] The question of whether the AI could think its way out of a
>perfect simulation is irrelevant - the real issue is whether you can get
>the defect count low enough to be even halfway convincing.
If it were raised entirely within even a defective environment, it might never
know what would constitute evidence that it was constructed.
>2) In order to get useful work out of an AI you need to tell it about the
>real world. That means that for any commercial application the AI will know
>all about its real situation, because you'll have to tell it.
Not necessarily. You could (assuming adequate resources) generate a universe
for it that is not dissimilar to ours. The simulation contains the same
target problem as the higher-level universe. The difference is that the AI is
simply not connected directly to our world, which means that it may not know
"real" people, places, et cetera; therefore it will not, and cannot, attempt
to interfere with those things. The lines out are blocked.
>3) [. . .Y]ou are either going to build things that it designs for you, or
>follow advice that it gives you, or maybe even (god forbid!) let it write
>software for you. Whichever way you go, this means that the AIs will get
>lots and lots of chances to try to break out.
If we are the gods of the simulation, we should be more or less omniscient.
We could observe the AI's thinking and results, and place agents (uploaded
humans, VR-mediated humans, ostensibly inanimate objects, and so on) within
the simulation to guide the AI's projects. That is in addition to more direct
manipulation, which is what should be available when you have someone's code
right in front of you. But I suppose that if a very clever AI suspects that
it is in a simulation and tries to sneak something into a design, there could
be undesirable effects.
>4) Suppose that VR containment works great for AI 1.0 [. . . and] years
>after that you have millions of copies of AI 6.0 (IQ 300, x10,000 time
>rate) running on desktop computers. The longer containment works the
>harder it is to maintain, and the worse it will be when it finally gets
>breached.
I'm not sure I understand the concern in this context. Why is it the case
that "[t]he longer containment works the harder it is to maintain"? You
seem to imply that the AIs have the capability to outthink the containment
technology. And that is in a way just what we were originally debating.
It could be that, regardless of its intelligence, an AI will never realize
its situation, or never be able to do anything about it if it did.