From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Mar 12 2008 - 21:56:20 MDT
On Wed, Mar 12, 2008 at 11:35 PM, Mark Waser <mwaser@cox.net> wrote:
>
> Also suppose my definition of Friendly is just vastly different than
> yours. I can agree to be Friendly by my own terms - because that's all you
> said you require for membership in the Friendly Group. I simply spread the
> my own meme of Friendly until your Group is has allied itself with my own,
> then continue acting on my believes - subjugating your goals to those of the
> Friendly Group that I have now undermined. I do not willingly admit
> wrongdoing in light of your accusations that I entered the Friendly Group
> under false pretense because I honestly believe that I am righteously
> spreading the Right Meme. Your current declaration does not protect against
> this kind of invasion.
>
> No, but intelligent action dictates that we ensure that our versions of
> Friendliness are compatible, or else I ask you to state in your version of
> Friendliness that it is not compatible with my version.
>
> Further, the most critical point is the primary overriding goal. If we both
> agree on it, then we are compatible and the rest is really just details of
> how we protect ourselves. If we don't agree on it, then we are not
> compatible and we simply treat each other as non-hostile non-Friendlies
> (which is very different from being UnFriendly), which is relatively harmless
> but not to our mutual advantage.
>
>
This is how humans usually act; it is not how most AIs will act. Item
#1,782 on my agenda is to prove that, except for special cases, you
get a higher expected utility when another agent shares your utility
function than when the two agents have different utility functions.
Hence, forced modification of the other agent's utility function also
has positive utility.
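
As a rough illustration of that claim (a toy sketch in Python, not the proof
referred to above; the three actions and A's payoff numbers are invented for
the example): agent A fares better when another agent B maximizes A's utility
function than when B maximizes an independently drawn one.

# Toy sketch: A's expected payoff when B shares A's utility function
# versus when B optimizes a randomly drawn utility function.
# The action set and all payoff numbers are assumptions for illustration.
import random

ACTIONS = ["a1", "a2", "a3"]

# A's (assumed) payoff for each action B might take.
PAYOFF_A = {"a1": 1.0, "a2": 0.2, "a3": 0.5}

def utility_to_A(u_B):
    """B picks the action maximizing B's own utility; return A's payoff."""
    choice = max(ACTIONS, key=u_B)
    return PAYOFF_A[choice]

random.seed(0)

# Case 1: B shares A's utility function, so B picks what A would pick.
shared = utility_to_A(PAYOFF_A.get)

# Case 2: B's utility function is drawn at random; average A's payoff.
samples = 10_000
different = sum(
    utility_to_A({a: random.random() for a in ACTIONS}.get)
    for _ in range(samples)
) / samples

print(f"A's payoff when B shares A's utility function:   {shared:.2f}")
print(f"A's average payoff when B's utility is unrelated: {different:.2f}")
# Typically prints about 1.00 versus about 0.57, so from A's point of view
# a B that shares A's utility function yields higher expected utility.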
--
 - Tom
http://www.acceleratingfuture.com/tom