From: J. R. Molloy (jr@shasta.com)
Date: Mon Oct 02 2000 - 18:44:53 MDT
Eliezer S. Yudkowsky wrote:
> ... if
> the AI successfully comes up with reasonable and friendly answers for
> ambiguous and unspecified use-cases;
Who gets to decide if they're "friendly" answers?
Why limit it to one AI? Take a poll of the entire community of AIs?
> and ... if the AI goes through at least one
> unaided change of personal philosophy or cognitive architecture while
> preserving Friendliness - if it gets to the point where the AI is clearly more
> altruistic than the programmers *and* smarter about what constitutes altruism
> - then why not go for it?
Because that would make AIs diametrically opposed to normal human behavior?
--J. R.
"The man who does not vex anti-Singularitarians has no advantage over the man
who
can't vex them."
--Alligator Grundy