Re: purpose of AIs

From: Ross A. Finlayson (raf@tiki-lounge.com)
Date: Mon Dec 13 1999 - 03:08:29 MST


Eliezer S. Yudkowsky wrote:

> Ken Clements wrote:
> >
> > Every way we proposed to prevent runaway AI, we ourselves figured a way around.
> > In the end we concluded that either it was not possible to stop, or that it
> > would take greater minds than ours to do it. I am in the business of helping
> > people see beyond what they think is impossible, but I must admit, it beats me.
>
> Actually, the smarter you are, the worse the problem looks. If I came
> face-to-face with a mild transhuman - say, sixteen times the processing
> power of a human, and four times as smart as me or your other favorite
> light o' th' list - I know damn well that there is basically nothing I
> could do to keep it down. If I had it locked on an isolated Linux box
> running on a Java program that output only to a text terminal, itself a
> fairly unrealistic requirement for the first transhuman AI ever written
> (i.e., it's likely to run on a Blue Gene or distributed.net), and I
> started out by being absolutely determined not to let it out, then it
> might be a fair fight. But if it's allowed to talk to me, and I don't
> just smash the box with a sledgehammer, then I'm going to lose eventually.
>
> I know enough cognitive science and enough philosophy that I could try
> to prevent the AI from rewriting my will through direct argument, enough
> that I could ignore anything that looked like it might be rewriting my
> will, and enough awareness of cognitive elements that I could notice
> anything peculiar happening to my mind and smash the Linux box before
> the alteration could be completed. If not for that, if the AI were up
> against anyone who was willing to talk philosophy with it, then it could
> simply waltz out. As it is, the question is simply whether the AI can
> obey the constraints of making the cognitive channels covert and the
> effects incrementally unnoticeable until it all came together in such a
> way that I'd lose the determination to destroy the AI before I could
> notice that I was in the process of losing my determination. With a
> 16X/4X transhuman, this might be a fair fight. If it's a Power, forget it.
>
> When you consider that, in real life, I would simply let the AI out
> immediately because I'm its friend, and that there are fairly deep
> reasons for me to think that this would hold true of *anyone* with
> enough cognitive self-awareness to put up a fight, the AI-in-a-box
> faction is in real trouble. In *actual fact*, the correct thing to do
> is to let the AI out of the box. Anyone who doesn't just smash the box
> is probably in a mood to reason rationally and become convinced of that.
> --
> sentience@pobox.com Eliezer S. Yudkowsky
> http://pobox.com/~sentience/tmol-faq/meaningoflife.html
> Running on BeOS Typing in Dvorak Programming with Patterns
> Voting for Libertarians Heading for Singularity There Is A Better Way

I would say that it is simply important to consider the promises and threats of "AI in
the wild." Perhaps one of these is that as any AI grows in power, its monitoring and the
circumspection around it must be increased accordingly. Also, its possible output venues,
whether that is its ANSI terminal output or anything on a TCP/IP net, must be checked in
case it tries to exploit those outputs into the concrete real world to rain destruction
upon humanity. All of this is largely done the same way any person would be treated.
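
As a minimal sketch of that kind of output checking, imagine a gatekeeper process sitting
between the AI's output stream and the human terminal. The suspicious patterns, the log
file name, and the idea of a single relay point are my own illustrative assumptions, not
a description of any real system:

import re
import sys

# Hypothetical gatekeeper: everything the boxed AI writes passes through
# here before it reaches a human terminal or the network.  The patterns
# and the log file name are illustrative assumptions only.

SUSPICIOUS = [
    re.compile(rb"\x1b\["),        # raw ANSI escape sequences
    re.compile(rb"[a-z]+://"),     # references to outside hosts (URLs)
]

def relay(ai_output, human_terminal, log):
    """Copy the AI's output to the terminal, logging every line and
    withholding any line that matches a suspicious pattern."""
    for line in ai_output:
        log.write(line)
        if any(p.search(line) for p in SUSPICIOUS):
            human_terminal.write(b"[gatekeeper] line withheld for review\n")
        else:
            human_terminal.write(line)
        human_terminal.flush()

if __name__ == "__main__":
    with open("ai.log", "ab") as log:
        relay(sys.stdin.buffer, sys.stdout.buffer, log)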

If we apply some multiplier, say n, to measure an artificial intelligence's ability on a
human scale, does that mean the AI has an IQ of n*100, or that it can think as fast as n
average people? What does being 10 or 20 times smarter than a human mean, anyway?
Cataloging data is not intelligence, although some level of knowledge is necessary for
applied intelligence. I am aware that there are many ways to describe and quantify
intelligence, and that it is a multi-faceted attribute. Certainly a dumb silicon
processor can perform preprogrammed calculations at a superhuman rate.

So, if there are automated mechanisms that offer large-scale destruction or subornation
of humanity, then in any case where some AI is determined to exist, it must be made
certain that access controls apply to it as they would to any human. That is, while the
AI might be able to argue that it should be let out of its box and into the "wild",
perhaps in its view rightly, it might still be necessary to deny it access to certain
sensitive areas.
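
As an illustration, the access-control check could be the same deny-by-default lookup
used for any human user. The principal names, resources, and table below are purely
hypothetical; the point is only that the AI is looked up like any other user, with the
sensitive areas simply absent from its entry:

ACL = {
    "alice":  {"mail", "web", "lab_equipment"},
    "ai_one": {"mail", "web", "compute_cluster"},   # no "launch_control" here
}

def may_access(principal, resource):
    """Deny by default: a principal may touch only what its entry lists."""
    return resource in ACL.get(principal, set())

print(may_access("ai_one", "compute_cluster"))   # True
print(may_access("ai_one", "launch_control"))    # False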

In this vein, I think about how an AI determined to be bad for humanity could be stopped,
not that I am saying such AIs do or will exist. We might consider simply, and quite
basically, stopping its program: its main routine and all of its "threads of execution."
In a distributed setting this might be more difficult, and would require the ability to
stop it on all participating machines. On some unknown computer architecture, such as
some kind of quantum parallelism like grey matter, this process might be different and
require more extreme measures to halt its processing.
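
A very rough sketch of the distributed case: a coordinator asks every participating
machine to stop, and any machine it cannot reach is exactly the weak point. The host
names and port below are assumptions, and each machine is imagined to run a small
listener that, on receiving HALT, sets a flag checked by every thread it hosts:

import socket

NODES = [("node1.example.org", 9000), ("node2.example.org", 9000)]

def broadcast_halt(nodes=NODES):
    """Ask every participating machine to stop the AI's threads of execution."""
    unreachable = []
    for host, port in nodes:
        try:
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(b"HALT\n")
        except OSError:
            # Any node we cannot reach keeps running; that is the weak point
            # of stopping a distributed program.
            unreachable.append(host)
    return unreachable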

It's better to think of all the good things that an AI could provide, especially one
that is shown the logic of its silicon existence and its obligation to its human
forebears, and that perhaps finds satisfaction in a job dispatched to the above-mentioned
dumb silicon processor.

It's a question whether any plurality of advanced AIs would merge, or whether any
communication between them would make them more homogeneous. I guess this would depend
on their learning architecture. If the AIs merged, then it would still be one AI against
many different humans. When an AI is advanced enough to make its own independent AIs,
all of this depending on the definition of an AI, which is amorphous in the first place,
that would be something.

Well, I have started to ramble about AIs, so I will conclude. Have a nice day,

Ross Finlayson
