From: Christian Weisgerber (naddy@mips.inka.de)
Date: Wed Sep 06 2000 - 18:32:08 MDT
In-Reply-To: <Pine.GSO.4.05.10009050920130.14304-100000@paladin.cc.emory.edu>
Sender: owner-extropians@extropy.org
Precedence: bulk
Reply-To: extropians@extropy.org
[Non-member submission]
xgl <xli03@emory.edu> wrote:
> as eliezer points out in his various writings, if such a mind does
> anything at all, it would be because it was objectively _right_ --
^^^^^^^^^^^^^^^^^^^

Assuming there is such a thing as objective ethics.

Unless somebody can show me proof to the contrary, I'll stay with
the hypothesis that ethics are inherently arbitrary, and that
rationality is only a tool to implement goals but can't be used to
derive them out of the void.

Which means an AI will have (1) goals initially provided by its
designers and/or (2) those that provide it with an evolutionary
advantage.
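
A toy sketch of what I mean (Python, purely illustrative; the
function names and the payoff numbers are made up): the decision
procedure below is identical no matter which goal it is handed, so
the goal has to come from outside (from a designer or from selection
pressure), not from the reasoning machinery itself.

# Purely illustrative sketch: "rationality" here is just picking the
# action that scores highest under whatever utility function it is
# handed.  The utility function itself is supplied from outside; the
# chooser cannot derive it.

def choose_action(actions, utility):
    """Return the action ranked highest by the externally given utility."""
    return max(actions, key=utility)

actions = ["cooperate", "defect", "do nothing"]

# Two different "designers" hand the same machinery different goals
# (the payoffs are arbitrary):
designer_a = {"cooperate": 2, "defect": 1, "do nothing": 0}
designer_b = {"cooperate": 0, "defect": 3, "do nothing": 1}

print(choose_action(actions, designer_a.get))   # -> cooperate
print(choose_action(actions, designer_b.get))   # -> defect
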
--
Christian "naddy" Weisgerber                          naddy@mips.inka.de