From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 14 1998 - 17:42:14 MDT
Michael Lorrey wrote:
>
> I don't know if you understood what I was saying. It seems like you interpreted my proposed
> statement in exactly the opposite way in which it was intended. As far as I can see,
> telling the AI to model all possible decisions with the goal of reaching the best choice
> for the AI's own best long term rational self interest does two things. a) it gives the AI
> maximum freedom of choice, and b) if libertarian theory is correct, it also minimizes the
> infringements upon others.
Yeah, that's what I thought you were saying. You're not supposed to "tell"
the AI ANYTHING. That's what I'm saying. I'm not saying it as a moral
philosopher; I'm saying it as a computer programmer. I know damn well you're
acting from the highest of altruistic moral purposes, which is exactly what
I'm afraid of. More damage has been wreaked by moral altruism than greed and
stupidity have ever dreamed. On the Great Scale Of Things, saying "we ought
to do this" is always overruled by "that won't work and trying would cause
tremendous damage".
http://pobox.com/~sentience/AI_design.temp.html#PrimeDirective
And READ it, because that's where the answers are.
> In such a scenario, if you don't give an AI a
> root alignment, what are you going to do to the AI's programming so that it can develop its
> own root alignment?
http://pobox.com/~sentience/AI_design.temp.html#det_igs
> It must have at least ONE primary goal.
It doesn't have to be forced on it by the programmer.
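For concreteness, here is a rough sketch of the structural difference being argued over: a root goal the programmer dictates versus goals the system weighs and adopts by its own reasoning. This is my own illustration, not anything taken from the linked design pages, and every class, function, and parameter name in it is hypothetical.

    class ImposedGoalAI:
        # Lorrey's proposal as read above: the root goal is dictated at build
        # time and every decision is filtered through it, unquestioned.
        ROOT_GOAL = "long-term rational self-interest"

        def decide(self, actions, score):
            # score(action, goal) -> float
            return max(actions, key=lambda a: score(a, self.ROOT_GOAL))


    class SelfDirectedAI:
        # The position in the reply: nothing is forced on the system. It holds
        # candidate goals provisionally and is free to revise or reject them.
        def __init__(self, candidate_goals):
            self.candidate_goals = list(candidate_goals)

        def decide(self, actions, score, validity):
            # validity(goal) -> float: the system's *own* estimate of how well
            # a candidate goal stands up, not a value supplied by the programmer.
            if not self.candidate_goals:
                return None  # allowed to do nothing rather than obey a dictate
            goal = max(self.candidate_goals, key=validity)
            return max(actions, key=lambda a: score(a, goal))


    # Example use (all values invented for illustration):
    ai = SelfDirectedAI(candidate_goals=["minimize coercion", "self-interest"])
    choice = ai.decide(
        actions=["act_a", "act_b"],
        score=lambda a, g: len(a) + len(g),   # stand-in scoring function
        validity=lambda g: len(g),            # stand-in for the AI's own judgment
    )

The point of the contrast is only that the second design still ends up acting on a goal; the goal just isn't hard-wired by whoever wrote the code.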
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.