From: Nick Hay (nickjhay@gmail.com)
Date: Tue Oct 30 2007 - 06:03:47 MDT
On 10/30/07, Stathis Papaioannou <stathisp@gmail.com> wrote:
> On 30/10/2007, Henry Wolf VII <hwolfvii@gmail.com> wrote:
> >
> > > No, it should be programmed to do what I tell it to do. If it warns me
> > > that my request will have negative consequences (because I've asked it
> > > to) and I still tell it to go ahead then that's my fault.
> >
> > If all you want is an artificial slave, does it really need to be
> > intelligent? We can create computers and robots that simply do what you
> > tell them to do without intelligence. The computer you're sitting at right
> > now is one example. It seems you simply want a more capable computer - not
> > an artificial intelligence.
>
> Are hired human experts intelligent? The idea is that they provide
> advice and other services without letting any competing motives of
> their own interfere.
If you've built this AI, why did you build in competing motives?
I think predictable future horror should at least be allowed as a
veto. Suppose someone really, really wants to destroy their brain,
just to see what happens. They think they're implemented by an
immortal soul, so this seems harmless enough. If the AI didn't grant
this wish they'd be indignant: who are you to refuse my order?
However, if they found out that souls don't exist, they would predictably
be horrified, and wish that the AI had ignored their previous order.
In this scenario, it is not helpful for the AI to shut up and do what they say.
As a more extreme example, the wisher commands the AI to make the sun go
nova, because they've always wondered what it would look like. For some
reason they do not understand that this would destroy humanity (which
they do not want), even if this is explained carefully to them. In
some sense it will be "their fault" that the human species dies, and
yet making the sun go nova doesn't seem like a good idea.
-- Nick