From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 10 2001 - 18:18:01 MDT

Anders Sandberg wrote:
>
> On Mon, Jul 09, 2001 at 04:56:09PM -0400, Eliezer S. Yudkowsky wrote:
> > Anders Sandberg wrote:
> > >
> > > intended to prevent certain actions from being taken (like encryption, locks
> > > and pre-programmed friendliness),
> >
> > I wish people would stop using the word "pre-programmed", or for that
> > matter, "programmed", in connection with Friendliness. (No offense.)
>
> So what word would you use to describe the fact that friendliness is
> deliberately set as the supergoal of the AI upon creation?

It is a bit more complicated than that, as you know. A Friendly AI is
supposed to absorb the cognition humans use to grow and error-correct
their supergoals, not just the cognition bound up in a single human's
supergoal as it exists at a single time.

What word would I use? I'd say "Friendly AI", or if I wanted to get
technical, "CFAI-architecture Friendly AI". I mean, in the example
sentence above, do you really need to say "pre-programmed friendliness"
rather than just "Friendliness"?

I suppose you could have said "designed Friendly AI" instead of
"pre-programmed friendliness". But be careful; I'd still say that
"designed-in" misrepresents CFAI-architecture FAI.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence