From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri May 09 2003 - 18:24:19 MDT
Ben Goertzel wrote:
>
> However, Bill is correct that Eliezer's plans do not give much detail on the
> crucial early stages of AI-moral-instruction. Without more explicit detail
> in this regard, one is left relying on the FAI programmer/teacher's
> judgment, and Bill's point is that he doesn't have that much faith in
> anyone's personal judgment, so he would rather see a much more explicit
> moral-education programme spelled out.
>
> I'm not sure Eliezer would disagree with this, even though he has not found
> time to provide such a thing yet.
You are correct. It is indeed a very serious difficulty to figure out how
to explain
(a) volitionist altruism
(b) humane renormalization
(c) Friendliness structure
to a mind that understands only
(a) billiard balls
(b) other minds that understand billiard balls.
But the strange thing is, by golly, I think it can be done.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence