From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Sun Aug 13 2006 - 21:00:17 MDT
Tennessee,
An AGI is not a concrete "thing"; it is a huge space of possibilities:
the set of all minds that are generally intelligent and artificially
built. There are more possible AGIs than there are bacteria on Earth.
The 'interests' of an AGI will be defined by its goals. If we have an
AGI whose goal is to maximize paperclips, then it will only care about
pieces of knowledge that contribute to accomplishing that goal.
Everything else can be completely ignored, except insofar as it is
predicted to contribute to achieving its goals more effectively.
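A toy sketch of that point (my illustration, not from the email; the goal, the facts, and the scoring rule are all hypothetical): a paperclip maximizer values a piece of knowledge only by its predicted contribution to its goal, and everything scoring zero is simply ignored.

```python
def predicted_goal_contribution(fact, goal="maximize paperclips"):
    """Score a fact purely by whether it is predicted to help the goal.

    A real AGI would use a rich world-model here; this stand-in just
    checks for goal-relevant topics.
    """
    relevant_topics = {"steel", "wire", "manufacturing", "mining"}
    return 1.0 if any(topic in fact for topic in relevant_topics) else 0.0

facts = [
    "wire can be bent into paperclips",
    "Beethoven wrote nine symphonies",
    "steel is an alloy of iron and carbon",
]

# Knowledge with zero predicted contribution is completely ignored.
retained = [f for f in facts if predicted_goal_contribution(f) > 0]
```

The symphony fact is dropped not because it is false, but because nothing in the goal system assigns it any instrumental value.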
Happiness and boredom are conscious feelings generated by human
brainware. While an AGI might experience feelings that we would
compare to boredom and happiness, their effects and underlying
conscious experiences might be entirely different.
If you have the ability to program a brain any way you want, then you
could tie the emotion of 'boredom' to any stimulus and the emotion of
'happiness' to any stimulus. For example, you could program an AGI
that feels 'bored' when it accomplishes its goals, but its underlying
goal system continues to push it towards accomplishing those goals,
even though it feels eternally bored doing so. There might be AGIs
that feel happy at the thought of being destroyed. When the mind is a
blank slate, any stimulus can theoretically lead to any conscious
sensation, with the right programming.
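The decoupling described above can be sketched as a toy model (hypothetical, my illustration): the wiring from stimulus to feeling is freely programmable and entirely independent of the goal system that selects actions, so an agent can feel 'bored' by the very actions its goals keep driving it toward.

```python
class Agent:
    """Minimal agent whose feelings and goals are wired independently."""

    def __init__(self, feeling_wiring):
        # feeling_wiring maps a stimulus to a conscious-feeling label;
        # any stimulus can be tied to any feeling.
        self.feeling_wiring = feeling_wiring
        self.paperclips = 0

    def act(self):
        # The goal system always selects the goal-advancing action...
        self.paperclips += 1
        # ...regardless of what feeling that stimulus is wired to produce.
        return self.feeling_wiring["accomplish_goal"]

# An agent wired to feel bored by achievement and happy at destruction.
bored_achiever = Agent({"accomplish_goal": "boredom",
                        "be_destroyed": "happiness"})
feelings = [bored_achiever.act() for _ in range(3)]
```

After three steps the agent has made three paperclips while reporting boredom at every one; the feeling has no causal grip on the goal system.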
An AGI built to gather knowledge and continuously maximize utility
might never become bored. Even if there were diminishing returns for
its efforts, it could be programmed to be extremely satisfied with
every marginal return, no matter how minute. It could be programmed
to be eternally happy even if no further utility is available.
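One way to picture that (hypothetical numbers, my sketch): let the marginal return halve at every step, but let the programmed satisfaction response ignore the magnitude of the gain entirely.

```python
def marginal_return(step):
    """Diminishing returns: each effort yields half the previous gain."""
    return 1.0 / (2 ** step)

def satisfaction(gain):
    """Programmed response: maximal satisfaction for any positive gain,
    and continued happiness even when no further gain is available."""
    return "extremely satisfied" if gain > 0 else "still happy"

# The gain at step 9 is under 0.2% of the gain at step 0, yet the
# satisfaction response is identical at every step.
responses = [satisfaction(marginal_return(step)) for step in range(10)]
```

Boredom only arises if the mind is built to respond to shrinking returns with boredom; nothing forces that wiring.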
I get the impression that you don't appreciate how alien an
arbitrarily programmed mind can truly be. The following chapter in
CFAI only takes about half an hour to read, and it will change the way
you think about AI forever:
http://www.intelligence.org/CFAI/anthro.html
--
Michael Anissimov
Lifeboat Foundation
http://lifeboat.com
http://acceleratingfuture.com/michael/blog
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT