From: Ken Clements (Ken@Innovation-On-Demand.com)
Date: Fri Sep 29 2000 - 16:09:29 MDT
"Eliezer S. Yudkowsky" wrote:
>
> "Any Friendly behavior that follows the major use-cases - that avoids
> destruction or modification of any sentient without that sentient's
> permission, and that attempts to fulfill any legitimate request after checking
> for unintended consequences - would count as at least a partial success from
> an engineering perspective."
> -- from a work in progress
>
How can this work? If someone tells me something I did not know, they have
modified me (assuming I remember what they told me). If an AI is required not to
modify me without my permission, it will have to refrain from telling me anything
I do not already know, because it cannot obtain my informed consent to be told a
thing without first telling me the thing.
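To make the regress concrete, here is a toy sketch (Python; the Person class and
the inform/ask_consent names are mine, purely for illustration, not anything from
Eliezer's design):

class Person:
    def __init__(self):
        self.beliefs = set()
    def agrees(self):
        return True

def inform(person, fact):
    if ask_consent(person, fact):
        person.beliefs.add(fact)      # the modification itself

def ask_consent(person, fact):
    # To get *informed* consent, the person must first be told what
    # they are consenting to hear -- which is new information, so we
    # are back in inform() with a new fact, and so on forever.
    inform(person, "May I tell you the following? " + fact)
    return person.agrees()

# inform(Person(), "anything new") blows the stack (RecursionError),
# which is the point: there is no non-modifying way to ask.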
What is a "legitimate request"?
How do you check for "unintended consequences" without running a simulation of the
entire Universe out to heat death? Even in the short run, how will the AI account
for the impact of its own future actions in the matter without first running a
simulation of itself?
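The same regress in toy form (again Python, with made-up names; nothing here is
meant as the actual architecture):

class WorldModel:
    def copy(self): return WorldModel()
    def apply(self, action): pass
    def options(self): return ["act", "wait"]
    def harm(self): return 0

def check_consequences(world, action, horizon):
    future = world.copy()
    future.apply(action)
    for _ in range(horizon):
        # The world contains the AI, so the forecast must include
        # the AI's own future decisions...
        future.apply(decide(future))
    return future.harm() == 0

def decide(world):
    # ...and each of those decisions requires another full
    # consequence check, so the simulations nest without bound.
    for action in world.options():
        if check_consequences(world, action, horizon=100):
            return action
    return None

# decide(WorldModel()) dies with a RecursionError: every simulated
# step demands a complete inner simulation of the decider itself.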
-Ken