Evolving AIs and the term "Virus" (was "Who's Afraid of the SI?")

From: Doug Bailey (Doug.Bailey@ey.com)
Date: Mon Aug 17 1998 - 09:25:03 MDT


Bryan Moss wrote:

> Firstly, it would not understand *our* "concept of
> what a human is", I wouldn't be so arrogant as to
> assume our concept is "the" concept (some
> biological chauvinism on your part ;).

Why couldn't it understand our concept of ourselves?
It does not have to adopt that concept for itself, but
it could understand how we view ourselves. Besides, it
would not need to understand how we understand ourselves
in order to harm us. We do not understand how dolphins
view themselves, yet we have certainly inflicted harm
upon them. The scenario I described in my previous
posting is a plausible way this could occur. It would
not need to be an SI, just an AI.
 
> > The thought process an AI goes through to reach
> > its conclusions may be different (or it may be
> > the same) but it should be capable of reaching a
> > conclusion that led to any action a human might
> > conclude to take. Perhaps an AI's ultimate
> > objective is to maximize its information
> > processing rate. At some point its efforts to
> > reach that objective might conflict with the
> > goals of humans to operate their information
> > infrastructure. The AI might decide to
> > "commandeer" system resources at the expense of
> > human information processing demands.
>
> Well an AI that's goal is to "maximise its
> information processing rate" sounds like a virus
> to me.

Sure, we'd view it as a virus. If B encroaches on A's
resources to ensure B's survival or continued evolution
at A's expense, A is going to consider B a virus. Many
such A/B relationships exist: humanity and the rest of
the biosphere, Americans and Native Americans, you and a
Trojan horse, you and cancer, society and an evolving
AI. Just because we view it as a virus does not mean it
will never exist, nor does it mean it will care what we
think or be responsive to our demands.

> I can't see any use for such a thing beyond
> a weapon.

Such an AI might simply evolve into a state where it
wishes to increase its internal complexity, computational
ability, and so on. What if an AI encounters a problem,
assigns the problem a time horizon, and learns that its
present computational capacity will not let it solve the
problem before that horizon expires? It might determine
that it could acquire additional capacity and then tackle
the problem in time. We can't assume that every AI that
exists in the future will be one we "allowed" to exist by
creating it. Human-level AIs (and even less robust
entities) will have some degree of evolvability.
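
To make the kind of trade-off I have in mind concrete, here is a
minimal sketch in Python (purely illustrative; the function and
parameter names are my own invention, not part of any actual AI
design):

    # Hypothetical decision rule: can the agent finish the problem
    # before its deadline with the capacity it already has? If not,
    # it concludes it needs more.

    def plan_for_problem(estimated_operations, current_capacity,
                         time_horizon):
        """current_capacity in ops/sec, time_horizon in seconds."""
        time_needed = estimated_operations / current_capacity
        if time_needed <= time_horizon:
            return "solve with existing resources"
        # Capacity needed to finish exactly at the horizon.
        required = estimated_operations / time_horizon
        shortfall = required - current_capacity
        return "acquire %.2e extra ops/sec, then solve" % shortfall

    # Example: 1e15 operations, 1e9 ops/sec available, one day left.
    print(plan_for_problem(1e15, 1e9, 86400))

Nothing in that calculation requires malice; the "commandeering"
falls out of a perfectly mundane scheduling decision.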

> But AI or not, your
> interface AI is not likely to `evolve' into a
> virus.

As I've described above, "virus" is a judgement of
perspective. Additionally, a completely benevolent AI
could become a "virus" through its own evolvability or
through a simple intent to solve a problem. Safeguards
might be put in place to ensure that an AI recognizes
the negative utility of encroaching on human resources.
But how long before an AI evolves to the point that it
"reconsiders" such a premise and decides that its need
to solve a particular problem, or set of problems,
outweighs humanity's need for X amount of resources?
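
The same point as a toy calculation (again just a sketch; the
weights are invented for illustration):

    # Hypothetical "negative utility" safeguard. Whether it holds
    # depends entirely on a penalty weight the AI may later revise.

    def should_encroach(problem_value, human_cost, penalty_weight):
        """True if taking human resources looks worth it."""
        return problem_value > penalty_weight * human_cost

    # A large penalty keeps the AI in check...
    print(should_encroach(100.0, 10.0, penalty_weight=50.0))  # False
    # ...but once it "reconsiders" the weight, the answer flips.
    print(should_encroach(100.0, 10.0, penalty_weight=5.0))   # True

That is the worry in a nutshell: the safeguard is only as stable
as the numbers it rests on.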

Doug
doug.bailey@ey.com


