From: pdugan (pdugan@vt.edu)
Date: Fri Jul 15 2005 - 14:56:12 MDT
Here is a funny idea: what if we launch an AGI that recursively self-improves
out the wazoo, and nothing changes at all? Say the AGI keeps a consistent
supergoal of minding its own business and letting the world continue to
operate without its direct intervention. Or maybe the initial supergoals renormalize
into an attitude of going with the flow, letting the wind blow as it may.
Would such a transhuman mind count as friendly or unfriendly?
- Patrick