From: Emlyn (emlyn@one.net.au)
Date: Sat Sep 30 2000 - 01:23:28 MDT
> "Eliezer S. Yudkowsky" wrote:
>
> >
> > "Any Friendly behavior that follows the major use-cases - that avoids
> > destruction or modification of any sentient without that sentient's
> > permission, and that attempts to fulfill any legitimate request after
checking
> > for unintended consequences - would count as at least a partial success
from
> > an engineering perspective."
> > -- from a work in progress
> >
>
> How can this work? If someone tells me something that I did not know,
> they have modified me (assuming I remember what they told me). If an AI
> is required not to modify me without my permission, it will have to
> refrain from telling me anything I do not already know, because it will
> not be able to get my informed consent to be told the thing without
> telling me the thing.
>
> What is a "legitimate request"?
>
> How do you check for "unintended consequences" without running a
> simulation of the entire Universe out to heat death? Even in the short
> run, how will the AI account for the impact of its own future actions
> in the matter without first running a simulation of itself?
>
> -Ken
I guess the AI will have to make a judgement call. That's the truly
dangerous part.
Emlyn