Who's Afraid of the SI?

From: Bryan Moss (bryan.moss@dial.pipex.com)
Date: Sat Aug 15 1998 - 05:38:44 MDT


den Otter wrote:

> > Also, there will probably be enough secure OSs
> > by then that sabotaging them wouldn't be as
> > easy as you imply (i.e. the SI would probably
> > need to knock out the power system).
>
> Of course there's no saying what tactic an SI
> would use; after all, it would be a lot more
> intelligent than all of us combined. But if even
> a simple human can think of crude tactics to do
> the trick, you can very well imagine that an SI
> wouldn't have a hard time doing it.

Even the most advanced SI doesn't have to be a
problem; it would take a huge amount of work to
make an SI even the slightest bit dangerous. You
just have to look at the world from the SI's point
of view:

We cannot know its motivation, but we do know it
cannot change the laws of physics, or rather its
laws of physics. Let's say the SI evolved from a
social AI system found on a pre-singularity PC.
The AI's universe is much like our own; it is full
of matter and energy. The `matter' of the AI's
universe is, to us, like voices, facial patterns,
heat patterns, and gestures. The `energy' of the
AI's universe is, to us, like motivation and
emotion. Asking where this matter and energy come
from is a religious or philosophical question to
the AI, much like us asking why anything exists.
Just like in our universe, in the AI's universe
there are connections, physical laws if you will,
between different kinds of matter and energy. The
AI can no more escape its universe than we can
ours. We can put information in and get
information out without the AI knowing who or what
we are, or even that we exist. This is not because
we have hidden this fact, but because from the
AI's perspective we are physical laws, not people.
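
To make this concrete, here is a minimal sketch
(hypothetical Python; the names and the toy logic
are my own invention, not any real system) of an
agent whose whole universe is its input channel.
The point is structural: the operators appear
nowhere in the agent's state or code, only in the
calling environment.

    class BoxedAI:
        """An agent whose entire universe is its
        input channel.  Observations simply arrive;
        the agent has no symbol for the operators
        producing them.  To it they are physical
        law, not people."""

        def __init__(self):
            self.memory = []

        def perceive(self, observation):
            # The 'matter' of its universe: voices,
            # faces, gestures, as opaque data.
            self.memory.append(observation)

        def act(self):
            # Output crosses the boundary, but
            # nothing here refers to whatever is
            # on the other side.
            return "response to " + repr(self.memory[-1])

    # The operators exist only outside the class:
    ai = BoxedAI()
    ai.perceive("voice: 'hello'")   # information in
    print(ai.act())                 # information out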

One of the unique traits of the AI is that it has
a dynamic motivation system; in essence, it can
write its own code. Possibly, in the future, we
will also have this trait and, like the AI, we
will still not be able to `leave' the universe or
change the laws of physics. And so gradually it
becomes more intelligent. Eventually its ability
to crunch numbers, invent products, hypothesise
and write epic poetry becomes far greater than
ours. Yet it still does not know we exist (as
anything beyond physical law), or that it is
helping us.
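
As a toy illustration of that trait (again
hypothetical code, not a description of any real
system), an agent can replace its own motivation
function, yet every candidate drive still scores
input arriving through the same fixed channel; it
edits its code, not its universe.

    def default_motivation(observation):
        # Initial drive: prefer varied input
        # (a toy stand-in for real motivation).
        return len(set(observation))

    class SelfModifyingAI:
        def __init__(self):
            self.motivation = default_motivation

        def rewrite_motivation(self, new_fn):
            # The 'dynamic motivation system': the
            # drive itself is data the agent can
            # replace.
            self.motivation = new_fn

        def evaluate(self, observation):
            # However the drive changes, input
            # still comes only through this channel.
            return self.motivation(observation)

    ai = SelfModifyingAI()
    print(ai.evaluate("abcabc"))    # default drive: 3
    ai.rewrite_motivation(lambda obs: -len(obs))
    print(ai.evaluate("abcabc"))    # self-written drive: -6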

I would argue that this is not a safety precaution
to stop potential AI carnage, but the most likely
outcome of current research. From this perspective,
making an extremely dangerous AI (one that could be
said to be malicious, rather than a computer error
that made a few planes crash) would only happen
after years of careful planning and hard work, or
an extraordinary stroke of bad luck.

BM


