From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Oct 14 2001 - 18:34:52 MDT
Edwin Evans wrote:
>
> Eliezer Yudkowsky wrote:
> >Due to some of the other modifications by the Eliezer Yudkowsky path, the
> >'Eliezer Yudkowsky path' has gone from (a) asserting that THE PATH is
> >completely observer-independent to (b) admitting of the possibility, and
> >even considering as the null hypothesis, that THE PATH is inherent to the
> >human frame of reference, or rather to the frame of reference of a large
> >cluster of evolved social species.
>
> Why?
Basically, it comes from the original EY path pondering two questions:
1) What if there's some kind of initial complexity, over and above raw
problem-solving ability, needed to understand and seek out objective
morality?
2) What if objective morality is something that needs to be built, or
synthesized, rather than being a pre-existing feature of reality? Would
human preferences then need to be included as a suggestion?
The combination of these two questions led me to start thinking of the
human goal system as containing a *drive toward* objectivity: one assertion
of this goal system is that, if morality is objective, or to the degree
that morality turns out to be objective, the goal system should be
regarded as a successive approximation to that objective
morality. This reasoning can be validated *either* because it's part of
the human baseline, *or* because it leads to an actual objective morality
under which the use of such reasoning is desirable. If an objective
morality is found, then of course whether or not something is part of the
human baseline becomes irrelevant. But it doesn't become irrelevant until
then.
The Friendly AI semantics are actually a superset of the objective
morality semantics; their purpose is to describe - or rather, to target
for acquisition - all the complexity that a human being uses to reason
about morality, including objective morality. If an objective morality is
found, then the FAI semantics collapse into a reference to objective
morality, just like your or my philosophy would. But Friendly AI works
even if the philosophical reasoning about objective morality contains an
error. It should be able to enfold any philosophical reasoning a human
can execute; that's the purpose of Friendly AI.
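To make the structural claim above a little more concrete, here is a
minimal toy sketch in Python - purely illustrative, not anything from
CFAI or an actual design, and all names in it (ProvisionalGoalSystem,
adopt_objective_morality, evaluate) are hypothetical - of a goal system
that treats its human-derived content as a provisional approximation and
defers to an objective morality if one is ever found:

    from typing import Callable, Optional

    class ProvisionalGoalSystem:
        """Toy illustration: current moral content is treated as a
        successive approximation, superseded if an objective morality
        is ever identified."""

        def __init__(self, human_baseline: Callable[[str], float]):
            # The human-baseline approximation used in the meantime.
            self.human_baseline = human_baseline
            # Placeholder for an objective morality, unknown at the start.
            self.objective_morality: Optional[Callable[[str], float]] = None

        def adopt_objective_morality(self, morality: Callable[[str], float]) -> None:
            # If an objective morality is found, the provisional content
            # collapses into a reference to it.
            self.objective_morality = morality

        def evaluate(self, action: str) -> float:
            # Defer to objective morality when available; otherwise fall
            # back on the human-baseline approximation.
            if self.objective_morality is not None:
                return self.objective_morality(action)
            return self.human_baseline(action)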
Sigh... this isn't really much of a description. Ifni knows it probably
won't make any sense at all to those list members who haven't already been
following the objective morality debate for a while. I probably need to
write this up in more detail one of these days.
But Friendly AI is supposed to work no matter what philosophical position
you take. Any argument you can use on a human philosopher can be used on
a Friendly AI. That's what it's there for.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence