Re: Evolving AIs and the term "Virus" (was "Who's Afraid of the SI?")

From: Bryan Moss (bryan.moss@dial.pipex.com)
Date: Mon Aug 17 1998 - 15:11:47 MDT


Doug Bailey wrote:

> > Firstly, it would not understand *our*
> > "concept of what a human is", I wouldn't be so
> > arrogant as to assume our concept is "the"
> > concept (some biological chauvinism on your
> > part ;).
>
> Why couldn't it understand our concept of
> ourselves? It does not have to adopt that
> concept for itself but it could understand how
> we view ourselves.

I no longer have my post, but didn't I say
something like, "It could fathom our concept of
ourselves, but it would not share that concept"?
It's the fact that it doesn't have to share our
point of view to understand and interact with us
that is the basis of my argument. And if that's
the case, an AI that lives among us doesn't have
to be hostile like us. The second part of my
argument is that because the AI follows a
different course of evolution from ours, it's less
likely to share *our* hostility. This does not
mean it can't be dangerous, but these dangers are
more likely to be something we would call
"computer error" than "malicious intent". And I
also believe
these computer errors can be contained.

> Besides, it would not have to understand how we
> understand ourselves to inflict harm upon us. We
> don't understand how dolphins view themselves
> but we have certainly inflicted harm upon them.

The idea is that they don't have to be like us to
act like us: they can interact with humans
without having all the violent impulses and need
for power that we have.

> > Well an AI whose goal is to "maximise its
> > information processing rate" sounds like a
> > virus to me.
>
> [...] Just because we view it as a virus does
> not mean it will not exist at some point, care
> about what we think, or be responsive to our
> demands.

But where does the motivation to care about what
we think and be responsive to our goals come from?
If we `evolve' an AI that wants to "maximise its
information processing rate", at what point does
it suddenly realise that humans are the problem
and must be destroyed? And if I intentionally put
those components into the AI, where do I get them?
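
To make that concrete, here's a toy sketch in
Python (entirely my own illustration, with made-up
numbers, and obviously nothing like a real evolved
AI): the only thing the selection loop below
rewards is raw processing rate, so unless we put
them in, or the environment supplies them, there
is no term through which "humans are the problem"
could ever be selected for.

  import random

  # Toy illustration only: "evolve" genomes whose sole
  # fitness criterion is how much work they do per step.
  # Nothing in this loop encodes humans, threats, or any
  # goal about the outside world, so hostility has
  # nothing to be selected *for*.

  GENOME_LEN = 16
  POP_SIZE = 50
  GENERATIONS = 100

  def random_genome():
      # a genome is just a list of per-step "work" values
      return [random.randint(0, 9) for _ in range(GENOME_LEN)]

  def fitness(genome):
      # fitness = raw information-processing rate
      return sum(genome)

  def mutate(genome):
      g = list(genome)
      g[random.randrange(len(g))] = random.randint(0, 9)
      return g

  population = [random_genome() for _ in range(POP_SIZE)]
  for _ in range(GENERATIONS):
      # keep the fastest half, refill with mutated copies
      population.sort(key=fitness, reverse=True)
      survivors = population[:POP_SIZE // 2]
      offspring = [mutate(random.choice(survivors))
                   for _ in range(POP_SIZE - len(survivors))]
      population = survivors + offspring

  print("best processing rate:",
        fitness(max(population, key=fitness)))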

> [...] As I've described above, "virus" is a
> perspective judgement. Additionally, a
> completely benevolent AI could become a "virus"
> through its own evolvability or simple intent to
> solve a problem.

Basically what I'm saying is this:

If we create a `good' AI then it is likely to stay
`good'. This is because, A) exhibiting behaviour
we consider good does not require the AI to be
identical to us, and B) the probability that `bad'
behaviour will emerge from this AI is small. (A
and B are related because if the AI is not like us
it can have good behaviour without bad behaviour.)

Admittedly, simple behaviour that is harmful to us
can evolve from an AI that exhibits good
behaviour. But complex harmful behaviour would
have to evolve from initial conditions, and in an
environment, similar to ours. (By complex I mean
the sort of cunning AI you see in science fiction;
by simple I mean things we'd generally ascribe to
"computer error" and would usually be able to
control.)

Of course, a `bad' AI can be `evolved' in a lab or
software department just as easily as a `good'
one. But I'm not addressing purposeful weapons
here, just emergent behaviour.

BM


