From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Nov 06 2000 - 10:56:52 MST
I'm currently working on a semi-independent section of "Coding a Transhuman
AI" entitled "Friendly AI", which deals with these issues.
Spike, I realize that I used to be somewhat cavalier about the issue of
Friendly AI, mostly because I took objective morality as the default
assumption and was still thinking about Friendly AI in morally valent terms.
I have since gotten over this, and I believe my thinking about Friendliness
has worked its way down to the level where everything can be phrased strictly
in terms of cause and effect. I'm spending as much time thinking about
Friendly AI as you could possibly wish for.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence