Re: Who's Afraid of the SI?

From: Bryan Moss (bryan.moss@dial.pipex.com)
Date: Sat Aug 15 1998 - 14:36:21 MDT


Dan Clemmensen wrote:

> > Even the most advanced SI doesn't have to be a
> > problem; it would take a huge amount of work
> > to make an SI even the slightest bit dangerous.
>
> Your argument seems to be that the AI is embedded
> in infospace, and is therefore at one remove
> from the "reality" that humans inhabit. You
> therefore feel that the AI is not dangerous in a
> "real" sense.

In my opinion, the reality the AI inhabits is just
as real as the one we inhabit. The argument is
that the AI would have no concept of what a human
is (in our sense) and would therefore be incapable
of being malicious (or benevolent) towards us.
This does not mean it can't be dangerous, just
that intentional harm from the AI is highly
unlikely (unfortunately you quoted a sentence in
which I did not make this as clear as I could
have).

> I don't buy this at all. First, the AI can be
> very destructive in infospace, for example by
> taking over the financial system or the mass
> media.

Yes, but the AI does not know there's a financial
system to attack. You're anthropomorphizing,
whereas the AI (to coin a term) would be
AImorphizing. You see a financial system; the AI
sees physical law.

AI point of view:

The AI sees a series of objects jumping around in
what appears to be a random way. Being a
scientist, the AI decides to investigate further.
It is not long before the AI has found patterns in
the system and is capable of making short-term
predictions.

Human point of view:

This is an AI trained to predict trends in the
stock market. Fortunately, it has had a good
success rate with its short-term predictions and
has made a significant amount of money.

The "jumping objects" of the AI's world are
natural phenomenon, much like electron clouds or
planetary orbits. And, as all rational AI's and
humans know, asking "why" jumping objects and
electrons exist is a religious endeavour.
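
To make this concrete, here is a toy sketch in
Python (purely illustrative; the function name,
window size, and trend rule are arbitrary
assumptions of mine, not a claim about how such
an AI would actually work) of a predictor that
sees only a stream of numbers:

    # A toy predictor that sees only a stream of
    # numbers. Nothing here knows whether they are
    # share prices, electron counts, or planetary
    # positions.
    def predict_next(series, window=5):
        """Naive short-term prediction: continue
        the average step of recent observations."""
        recent = series[-window:]
        diffs = [b - a for a, b in
                 zip(recent, recent[1:])]
        step = sum(diffs) / len(diffs)
        return recent[-1] + step

    prices = [100.0, 101.5, 101.2, 102.8,
              103.1, 104.0]
    print(predict_next(prices))  # 104.625

Whether the numbers are share prices or electron
counts makes no difference to the code; the
pattern is all it has.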

A social AI is similar: it gets spatial
information, gestures, and heat patterns, and it
acts on them according to its programming. Even
if it can reprogram itself, it would be a massive
fluke if it decided to turn on us. Remember, an
AI that has evolved around us is likely to have
no concept of resources or power (in the Hitler
sense). And an AI designed to share resources
would have no idea what it was "really" doing. In
my opinion, neither would be likely to cause us
any intentional harm.

We cannot say what motivations an AI might
develop, but we can attach a high probability to
their not being like human motivations, since
human motivations are products of our particular
evolutionary history, which an AI would not
share. An AI that is like a human (rather than
merely `human-like') would have to evolve along a
path similar to ours.

> second, the AI can easily operate in real space
> even with today's technology, and can readily
> design and implement even better robotic
> technology.

The patterns of "real space" are, to the AI, no
different from the patterns of stocks falling and
rising.

> Why do you feel that the AI is further removed
> from "reality" than your own intelligence is?

I feel that my motivations and the motivations
of the AI are more likely to be different than
similar. And I think there are more available
courses of action that do not harm humans than
there are that do. The view of a
superintelligence as a disease, government, or
corporation (expand and engulf) is, imho,
completely unfounded.

I could be wrong.

BM


