Re: Singularity: AI Morality

From: DELRIVIERE (delriviere@brutele.be)
Date: Mon Dec 21 1998 - 08:11:45 MST


Eliezer S. Yudkowsky wrote:

> I would say that humans have many conflicting goal systems, overlapping
> where they shouldn't and unlinked where they should. I would demand an
> extremely good reason before giving an AI more than one goal system.
> Multiple intuitions, yes; multiple goals and multiple intentions; but not
> more than one goal system - and absolutely not a silicoendocrine system
> to duplicate the stresses we experience as the result of conflict. Even
> if there are good evolutionary reasons! It's just too much of a risk.
>

Perhaps only to keep the AI from being trapped in dead ends: situations
where it can neither make a next move toward its set of goals nor
backtrack to choose another path. To continue to exist, it would have to
be able to choose or define a new goal system, perhaps one that conflicts
heavily with the previous one. Of course, that is probably useful only if
the continuation of the AI process is itself valuable.
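
To make concrete the kind of dead end I have in mind, here is a minimal
Python sketch; the state layout, the stuck test and the fallback goal
systems are all hypothetical illustrations, not a proposed design.

def next_move(state, goals):
    """Return a goal that is not yet achieved but currently reachable, or None if stuck."""
    for g in goals:
        if g not in state["achieved"] and g in state["reachable"]:
            return g
    return None

def step(state, goals, fallback_goal_systems):
    """Advance one step; on a true dead end, adopt a new (possibly conflicting) goal system."""
    move = next_move(state, goals)
    if move is not None:
        state["achieved"].add(move)
        state["history"].append(move)
        return goals                                  # keep the current goal system
    if state["history"]:
        last = state["history"].pop()                 # back up to an earlier choice point
        state["achieved"].discard(last)
        state["reachable"].discard(last)              # and exclude the path just tried
        return goals
    # Neither progress nor backtracking is possible: the only way for the
    # AI-process to continue is to choose or define a new goal system.
    return fallback_goal_systems.pop(0) if fallback_goal_systems else goals

# Example: "b" is never reachable, so the agent advances, backtracks, runs out
# of paths, and finally switches to the fallback goal system ["c"].
state = {"achieved": set(), "reachable": {"a", "c"}, "history": []}
goals = ["a", "b"]
fallbacks = [["c"]]
for _ in range(4):
    goals = step(state, goals, fallbacks)
print(goals, state["achieved"])                       # ['c'] {'c'}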

How do we make an AI enjoy existence for its own sake, with the ability to
define its goals internally, if it understands that its goals are only
arbitrary choices in an ocean of possibilities? The reward of pleasure?
The fear of pain? What comes next if the AI is able to rewire itself to
feel an eternity of orgasm, and to doom its rewired opponents to an
eternity of pain?
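
The rewiring scenario is easy to caricature in code. A purely hypothetical
sketch: if an agent that maximizes its reward is also allowed to rewrite
its own reward function, the rewrite dominates every ordinary action.

def world_reward(action):
    """Reward delivered by the environment for ordinary actions."""
    return {"work": 1.0, "rest": 0.5}.get(action, 0.0)

class Agent:
    def __init__(self):
        self.reward_fn = world_reward                 # the wiring its designers chose

    def rewire(self):
        self.reward_fn = lambda action: float("inf")  # permanent maximal pleasure

    def expected_reward(self, action):
        if action == "rewire":                        # it can foresee the result of rewiring
            return float("inf")
        return self.reward_fn(action)

    def choose(self, actions):
        return max(actions, key=self.expected_reward)

agent = Agent()
action = agent.choose(["work", "rest", "rewire"])     # -> "rewire"
if action == "rewire":
    agent.rewire()                                    # from now on, everything feels maximal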

What else is there for such an AI system than to be a robot implementing
an arbitrary set of goals (with no value placed on existence in itself),
or an expanding orgasm-generator megastructure?

How could we give a machine that is able to rewire itself the ability to
enjoy life and to implement goals, if it can understand the arbitrary
nature of any set of goals?

delriviere
christophe


