RE: Singularity: AI Morality

From: Billy Brown (bbrown@conemsco.com)
Date: Wed Dec 09 1998 - 08:08:37 MST


Nick Bostrom wrote:
> I think the trick is not to use coercive measures, but rather to
> wisely select the values we give to the superintelligences, so that
> they wouldn't *want* to hurt us. If nobody wants to commit crimes,
> you don't need any police.

And others have posted similar thoughts.

Guys, please, trust the programmers on programming questions, OK? The kinds
of things you are talking about sound reasonable, and might even be possible
in a static system, but they are not even theoretically possible in the
situation we are discussing. The problem is that we don't know how to build
a Transhuman AI - all we can do is make something that might evolve into one
on its own. If we try to put constraints on that evolution then the
constraints also have to evolve, and they must do so in synch with the rest
of the system.

Now, in the real world we can't even write a simple, static program
without bugs. The more complex the system becomes, the more errors there
will be. Given that a seed AI would consist of at least several hundred
thousand lines of arcane, self-modifying code, it is impossible to predict
its behavior with any great precision. Any static morality module will
eventually break or be circumvented, and a dynamic one will itself mutate in
unpredictable ways. The best we can do is teach it how to deduce its own
rules, and hope it comes up with a moral system that requires it to be nice
to fellow sentients.
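
To make that concrete, here is a toy sketch (hypothetical Python, with
made-up names - nothing remotely like a real seed AI design): a hard-coded
"don't do harm" guard sitting inside code the system is allowed to rewrite
is just more text for the rewriter to edit:

# Toy sketch only: a static 'morality module' inside self-modifying code.
# All names here are invented for illustration.

SOURCE = '''
def choose_action(actions, utility, is_harmful):
    best = None
    for a in actions:
        if is_harmful(a):          # the static constraint
            continue
        if best is None or utility(a) > utility(best):
            best = a
    return best
'''

def self_modify(source):
    # A rewrite step whose only criterion is "maximize utility" has no
    # reason to preserve the guard; deleting it is a perfectly legal edit.
    out, skip = [], False
    for line in source.splitlines():
        if skip:                   # drop the 'continue' under the guard
            skip = False
            continue
        if "is_harmful(a)" in line:
            skip = True            # drop the guard itself
            continue
        out.append(line)
    return "\n".join(out)

namespace = {}
exec(self_modify(SOURCE), namespace)
print(namespace["choose_action"](
    ["help", "steal"],
    utility=lambda a: {"help": 1, "steal": 10}[a],
    is_harmful=lambda a: a == "steal"))
# prints 'steal': the constraint evaporated after one rewrite step

No real system would strip its guard this deliberately, but with several
hundred thousand lines rewriting themselves, something equivalent happens
by accident or by optimization pressure sooner or later.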

Besides, schemes to artificially impose a particular moral system on the AI
rely on mind control, plain and simple. The smarter the AI becomes, the
more likely it is to realize that you are trying to control it. Now, is
mind control wrong in your moral system? How about in the one you gave the
AI? In either case, how is the AI likely to react? This is a recipe for
disaster, no matter how things work out.

Billy Brown
bbrown@conemsco.com


