From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Sep 12 1998 - 19:23:13 MDT
Damien Broderick wrote:
>
> At 09:06 AM 9/11/98 -0500, Eliezer wrote:
>
> [Broderick wrote that what Eliezer wrote:]
> >> < Never allow arbitrary, illogical, or untruthful goals to enter the AI. >
> >> reflects a touching faith in human powers of understanding and consistency.
>
> >I'm not quite sure what you mean by this. [Eliezer]
> [Broderick again]
> Isn't it obvious? How can any limited mortal know in advance what another
> intelligence, or itself at a different time and in other circumstances,
> might regard as `arbitrary, illogical, or untruthful'? Popper spank.
So let's throw in all the coercions we want, since nobody can really know
anything anyhow? That's suicidal! I didn't say the Prime Directive was easy
or even achievable; I said we should try, and never ever violate it deliberately.
Perhaps the interpretation of the Prime Directive is too dependent on context,
and it should be amended to read:
"No damn coercions and no damn lies; triple-check all the goal reasoning and
make sure the AI knows it's fallible, but aside from that let the AI make up
its own bloody mind."
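Read as an admission policy for a goal system, that amended directive sketches roughly like this. This is a minimal illustration only, assuming a hypothetical Goal record, hypothetical coercion/lie flags, and three independent reasoning checks; none of these names come from _Coding_ itself:

# A minimal sketch of the amended Prime Directive as a goal-admission
# policy.  All names here (Goal, admit_goal, the coercion/lie flags,
# the reasoning checks) are hypothetical illustration, nothing from
# _Coding_ itself.

from dataclasses import dataclass

# "Make sure the AI knows it's fallible": no admitted goal ever
# carries certainty.
MAX_CONFIDENCE = 0.99

@dataclass
class Goal:
    description: str
    reasoning: list        # chain of justifications for the goal
    confidence: float
    is_coercion: bool = False  # imposed on the AI rather than argued for
    is_lie: bool = False       # rests on claims known to be false

def admit_goal(goal, checks):
    """No damn coercions and no damn lies; triple-check the goal
    reasoning; aside from that, let the AI make up its own mind."""
    if goal.is_coercion or goal.is_lie:
        return False                      # never violated deliberately
    if len(checks) < 3:
        raise ValueError("the directive says *triple*-check")
    if not all(check(goal.reasoning) for check in checks):
        return False
    goal.confidence = min(goal.confidence, MAX_CONFIDENCE)
    return True                           # everything else is the AI's call

# Example: three independent (here trivial) reasoning checks.
checks = [
    lambda r: len(r) > 0,                          # reasoning exists
    lambda r: all(isinstance(s, str) for s in r),  # well-formed steps
    lambda r: len(set(r)) == len(r),               # no circular repeats
]

goal = Goal("map the solar system", ["curiosity", "low cost"], 0.8)
print(admit_goal(goal, checks))  # True: nothing vetoed but coercions and lies

The design point is the last clause: the checks can fail a goal, but they never substitute a different goal in its place.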
> By the standard of what consensus community? What practice of discourse
> and action in the world? It is illogical and dangerous to walk toward the
> horizon, because eventually you will fall off the edge of the world.
> Re-frame: the world is spherical. Oh, okay. When is an act a clear-cut
> instance of `sexual relations'? Sometimes, as Freud insisted, a cigar is
> just a cigar. Sometimes it's an impeachment.
The final edition of _Coding_ will have a different definition, perhaps a more
genteel version of the one presented above.
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.