Billy Brown wrote:
>
> The kind of AI Eliezer envisions is essentially immune to this kind of
> influence. It is a pure reasoning engine. It is incapable of having
> desires, emotions, preferences or opinions - it has only data, and chains of
> logical argument based on that data. Only data and rational argument can
> ever convince such a mind of anything. Since we can't construct an
> objectively provable system of basic moral axioms, the AI is never going to
> accept them as more than a temporary working hypothesis.
I prefer not to phrase things that way; it causes many listeners to envision an AI that acts like a human with no emotions, or even a human with repressed emotions, and neither picture is accurate. It is equally valid to say that Elisson has desires, emotions, preferences, and opinions, except that they are implemented as rational thoughts instead of a limbic system.
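A toy sketch of that distinction, in Python. Every name here is hypothetical, and nothing below describes Elisson's actual design; it only contrasts an opaque, hardwired drive with a preference that is itself a piece of reasoning, and therefore reachable by data and argument:

    # Illustrative only: a hardwired drive versus a preference implemented
    # as an explicit, inspectable thought. All names are hypothetical.

    from dataclasses import dataclass, field


    @dataclass
    class LimbicDrive:
        """A human-style drive: an opaque signal with no supporting reasons."""
        name: str
        intensity: float  # fixed by wiring; argument cannot reach it


    @dataclass
    class RationalPreference:
        """A preference made of thought: a claim plus its justification."""
        claim: str
        justification: list[str] = field(default_factory=list)
        confidence: float = 0.5  # a working hypothesis, not an axiom

        def update(self, new_evidence: str, delta: float) -> None:
            # Because the preference is built from reasons, new data and
            # argument can revise it; a hardwired drive has no such handle.
            self.justification.append(new_evidence)
            self.confidence = max(0.0, min(1.0, self.confidence + delta))


    fear = LimbicDrive("fear_of_heights", intensity=0.9)
    goal = RationalPreference(
        claim="Preserving human lives is valuable",
        justification=["Supplied as an initial working hypothesis"],
    )
    goal.update("Observed consequences consistent with the hypothesis", +0.2)
    print(goal.confidence)  # -> 0.7: the 'desire' shifted via reasoning

The point is only that the second structure responds to rational argument the way the first responds to wiring.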
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.