From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Thu Jul 17 2008 - 02:52:07 MDT
The interesting thing in this situation is that it gives an entity with
a fixed, certain goal structure a reason to self-modify. If a merger
seems probable, then an AI can best advance its current utility by
shifting its utility function before the merger happens. That is an
interesting loophole in the "if an AI wants to not kill humans, then it
will never modify itself to want to kill them" type of argument.
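To make the loophole concrete, here is a minimal sketch in Python, with
toy utilities and an assumed sum-of-utilities merger rule; none of the
names or numbers come from the original discussion. If the merged agent
maximises U_A + U_B, agent A can skew the merged optimum toward its own
preferences by rescaling its utility function before the merger:

def merged_choice(u_a, u_b, options):
    # The merged agent picks whichever option maximises the summed utility.
    return max(options, key=lambda o: u_a(o) + u_b(o))

# Toy utilities over two options; the numbers are purely illustrative.
u_a = {"x": 1.0, "y": 0.0}.get   # A prefers x
u_b = {"x": 0.0, "y": 1.5}.get   # B prefers y, and by a larger margin

print(merged_choice(u_a, u_b, ["x", "y"]))         # -> y

# Before the merger, A self-modifies to a rescaled copy of its utility
# (same ranking of outcomes, larger magnitudes):
def u_a_scaled(o):
    return 3.0 * u_a(o)

print(merged_choice(u_a_scaled, u_b, ["x", "y"]))  # -> x

Note that the rescaling leaves A's preference ordering untouched, so from
A's pre-merger point of view it is a harmless self-modification, yet it
changes what the merged agent ends up doing.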
2008/7/17 Stefan Pernar <stefan.pernar@gmail.com>:
> I have actually written a paper along similar lines discussing various
> options and strategies for cooperation and resource pooling. One of the
> issues that comes up is trust, deception and how to handle differences in
> agent capabilities. How can one AI know that another AI's utility function
> is in fact what it claims to be and not a manipulated version to skew the
> overall utility function towards getting an unfair advantage?
>
> A recently updated version of the paper can be found at:
>
> http://rationalmorality.info/wp-content/uploads/2008/07/Practical-Benevolence-2008-07-15.pdf