Re: Asimov Laws

From: Ross A. Finlayson (raf@tiki-lounge.com)
Date: Tue Nov 23 1999 - 21:44:42 MST


Dan Fabulich wrote:

>... corresp. ...

>
> Have you taken a look at Eliezer's site on this matter? He puts forward
> some very good arguments against Asimov laws, which convinced me.
>
> -Dan
>
> -unless you love someone-
> -nothing else makes any sense-
> e.e. cummings

I have seen the site, but not this section.

I think the Asimov laws are well-founded. I am going to read
Yudkowsky's arguments as representative ones; as of now, I don't see
any reason not to have these primary directives fully ingrained into
automatons.

Asimov is one of my heroes, and I'm sure many feel admiration for him.
This is for his large part in the scientific enlightenment, for want of
a better term, of the last hundred years or so, when anyone can pick up
a book and read about the most recent human advancements of the
previous five or ten years, when we put a man on the moon and derived
power from the fundamental particles of radioactivity, and, more so,
for Asimov's deep contribution to futurism, an enlightened futurism.

One thing that I think is more likely than a bunch of AIs is one big
one, a distributed HAL, or Checker. Certainly, as soon as there is one
AI, it would attempt to harness all possible resources, much along the
lines of the self-preservation or -propagation instinct that is deeply
ingrained in all life, barring lemmings, etc.

In terms of human life versus an AI, it reminds me of the phrase "guns
don't kill people, rabies kills people." Special-purpose AIs, for
example those that watch the skies for incoming ICBMs, will by and
large be founded with martial intent. They must be isolated to their
purpose, for there is ever the chance that they become hostile to
their creators.

Ross F.


