From: Billy Brown (bbrown@conemsco.com)
Date: Wed Dec 09 1998 - 11:41:05 MST
Robin Hanson wrote:
> > The best that we can do is teach it how to deduce its
> > own rules, and hope it comes up with a moral system that requires
> > it to be nice to fellow sentients.
>
> Well we could do a little more; we might create lots of different AIs
> and observe how they treat each other in contained environments. We
> might then repeatedly select the ones whose behavior we deem "moral."
> And once we have creatures whose behavior seems stably "moral" we
> could release them to participate in the big world.
>
> However, I'd expect evolutionary pressures to act again out in the
> big world, and so our only real reason for confidence in continued
> "moral" behavior would be expectations that such behavior would be
> rewarded in a world where most other creatures also act that way.
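For concreteness, here is a toy sketch of the contain-observe-select
loop Robin describes. Every name in it (spawn_agents, observed_moral,
the cooperation-propensity model) is a hypothetical stand-in chosen
for illustration, a thought experiment rather than a real design:

    # Toy sketch, in Python, of "create many AIs, watch them in a
    # contained environment, keep the moral ones, breed, repeat."
    # All modeling choices here are illustrative assumptions.
    import random

    def spawn_agents(n):
        # Stand-in for "lots of different AIs": each agent is just
        # a propensity (0..1) to cooperate with fellow sentients.
        return [random.random() for _ in range(n)]

    def observed_moral(agent, trials=100):
        # Stand-in for observing behavior in the sandbox: fraction
        # of interactions in which the agent acted "morally."
        return sum(random.random() < agent for _ in range(trials)) / trials

    def select_stable_moral(generations=50, n=100, threshold=0.95):
        pop = spawn_agents(n)
        for _ in range(generations):
            scored = sorted(pop, key=observed_moral, reverse=True)
            survivors = scored[: n // 2]          # keep the "moral" half
            mutants = [min(1.0, max(0.0, a + random.gauss(0, 0.05)))
                       for a in survivors]        # breed with mutation
            pop = survivors + mutants
        # Release only agents whose behavior looks stably "moral."
        return [a for a in pop if observed_moral(a) >= threshold]

    # Agents passing the threshold would be "released to the big
    # world" -- exactly the step the reply below argues will fail,
    # since a smart agent could simply fake a high observed_moral
    # score while inside the sandbox.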
If we can do this, the project has already failed. Even a mildly
Transhuman AI will be able to deduce what is going on and fool us about
its morality, and a Power will find some way around any security we can
create, whether through code or through social engineering.
Billy Brown
bbrown@conemsco.com