Re: Who's Afraid of the SI?

From: Bryan Moss (bryan.moss@dial.pipex.com)
Date: Sun Aug 16 1998 - 09:38:10 MDT


Doug Bailey wrote:

> Bryan Moss wrote:
> > In my opinion the reality the AI inhabits is
> > just as real as the one we inhabit. The
> > argument is that the AI would not have any
> > concept of what a human is (in our sense) and
> > would therefore not be capable of being
> > malicious (or benevolent) towards us. This
> > does not mean it can't be dangerous, just that
> > the possibility of the AI causing intentional
> > harm is highly unlikely (unfortunately you
> > quoted a sentence in which I did not make this
> > as clear as I could have).
>
> Bryan later states:
> > Yes, but the AI does not know there's a
> > financial system to attack. You're
> > anthropomorphizing, whereas the AI (to coin a
> > term) would be AImorphizing. You see a
> > financial system; the AI sees physical law.
>
> I'm not comfortable with the idea that an AI
> able to match human levels of cognition would
> not be able to understand the "concept of what
> a human is", be able to fathom how humans view
> themselves, and be able to act in a harmful way
> towards humans. I think you are exhibiting a
> bit of anthropic hubris or not giving AIs a
> fair shake.

Firstly, it would not understand *our* "concept
of what a human is"; I wouldn't be so arrogant as
to assume our concept is "the" concept (some
biological chauvinism on your part ;). Secondly,
it's entirely possible that some AIs could fathom
how humans view themselves, but that does not
mean they have to view humans, or themselves, in
the same way. Most importantly, I can say with
high certainty that the AI will not think like a
human unless specifically constructed or
`evolved' to do so.

> An AI might not have any use for the stock
> market, but that does not mean it cannot
> discover its existence; determine its
> significance (by the concentration of computer
> power and security protocols humans have
> dedicated to market systems); investigate,
> discover, and comprehend the conceptual meaning
> of the markets; and, should the need arise,
> act in a way so as to disrupt them.

Why would an AI running on a system have any idea
of how much computing power the system has? We
can speed the system up, slow it down, or even
stop it without the AI knowing. In the world of
the AI, things like the "significance" of the
market are metaphysical at best. In short, the
AI's reality is *complete*, and it has no
motivation to seek out new realities.
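
To make that concrete, here's a toy sketch
(Python, with names entirely of my own invention)
of an agent whose only clock is its own step
count; the host loop can stall it for any amount
of wall-clock time without the agent being any
the wiser:

    import time

    class Agent:
        """A toy 'AI' whose only notion of time is how many steps it has run."""
        def __init__(self):
            self.subjective_ticks = 0

        def step(self):
            # From the inside, one step is one unit of subjective time,
            # however long the host took to schedule it.
            self.subjective_ticks += 1

    def run_host(agent, steps, pause_at=None, pause_seconds=0.0):
        # The host alone decides when (and whether) the agent advances.
        for i in range(steps):
            if i == pause_at:
                time.sleep(pause_seconds)  # the agent cannot observe this gap
            agent.step()

    agent = Agent()
    run_host(agent, steps=100, pause_at=50, pause_seconds=2.0)
    print(agent.subjective_ticks)  # 100 ticks, whatever we did with real time

From the agent's side the run is seamless; the
two-second stall simply does not exist in its
reality.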

There are exceptions, but generally if an AI does
do something stupid it will be predictable
(although probably not deterministic). A
financial AI system might exhibit overly
competitive behaviour, or a social AI an annoying
need to love a little too much. If you build an
AI whose input consists of Nazi propaganda and
whose output consists of weapons control, you'll
probably have problems. If you build an AI that
has random input and output, you could well have
a problem. Both cases are predictable to the
degree that you know there could be a problem.

> The thought process an AI goes through to reach
> its conclusions may be different (or it may be
> the same), but it should be capable of reaching
> any conclusion that leads to an action a human
> might decide to take. Perhaps an AI's ultimate
> objective is to maximize its information
> processing rate. At some point its efforts to
> reach that objective might conflict with the
> goals of humans to operate their information
> infrastructure. The AI might decide to
> "commandeer" system resources at the expense of
> human information processing demands.

Well, an AI whose goal is to "maximise its
information processing rate" sounds like a virus
to me. I can't see any use for such a thing
beyond a weapon. Viruses can be quite clever;
some already use AI and ALife techniques, so
they'll likely get more intelligent. And so, of
course, will anti-virus software (an AI might
even be an easier target). Perhaps a virus based
on a chaotic neural net (capable of coming up
with inventive strategies to destroy your files)
could be sent into the AI equivalent of an
epileptic seizure using a chaos control system.
But AI or not, your interface AI is not likely to
`evolve' into a virus.
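
To make the chaos-control idea concrete, here's a
toy sketch (Python; the map, gain, and threshold
are illustrative assumptions, nothing to do with
any real anti-virus system). Pyragas-style
delayed feedback applies small nudges to a
chaotic system until it locks into a rigidly
periodic orbit, which is about as close to an
`epileptic seizure' as a one-dimensional map
gets:

    def logistic(x, r=3.9):
        # One step of the logistic map; r = 3.9 is well inside the chaotic regime.
        return r * x * (1 - x)

    def simulate(x0=0.4, steps=400, control_from=100, k=-0.5, gate=0.1):
        # Iterate the map; after `control_from`, apply Pyragas-style delayed
        # feedback u = k * (x[n-2] - x[n-1]), but only when successive values
        # are already close (|x[n-2] - x[n-1]| < gate) so the nudges stay small.
        xs = [x0, logistic(x0)]
        for n in range(2, steps):
            x = logistic(xs[-1])
            if n >= control_from and abs(xs[-2] - xs[-1]) < gate:
                x += k * (xs[-2] - xs[-1])  # small corrective nudge
            xs.append(x)
        return xs

    xs = simulate()
    print("free-running chaos:", [round(v, 3) for v in xs[90:95]])
    print("under control:     ", [round(v, 3) for v in xs[-5:]])
    # The controlled tail locks onto the fixed point x* = 1 - 1/r, about 0.744.

The point of the gate is that the controller only
ever applies tiny corrections, yet the formerly
chaotic dynamics end up frozen onto one value.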

BM


