From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Fri Jan 11 2002 - 10:28:24 MST
From: Robert J. Bradbury [mailto:bradbury@aeiveos.com]
An advanced intelligence is only going to create an agent
to do the exploring *if* it can guarantee that that agent
will never decide to consume the resources of the creator.
Can it do that? Only if it significantly handicaps the
evolutionary paths the agent may pursue (IMO). (Of course
if you want to give birth to children that consume you, so
be it (there are some models in nature for this), but it isn't
a very good survival strategy IMO).
### I wouldn't be so sure about that. If the agent is more efficient than you, it
might also be more efficient than your enemies. If there is any deadly
strife going on between the advanced intelligences, the losing side might be
willing to develop and release a destructive, self-propagating entity (a la the
Blight), just to cripple the other side, even if doing so would seal its own fate.
-----
*So*, one may become an essentially existentialist actor --
"I might as well live today, because sooner or later
my children, agents, seeds, etc. will eliminate me."
*or* one says, "I am not going to produce anything
that may compete with me for future resources."
### A single civilization with a different attitude is enough to change the
universe.
-----
(Thus
one only creates sub-minds running on one's own hardware
and deletes these programs once they have outlived
their usefulness.)
### This severely limits your abilities. Competitors willing to take
some risks would swamp you with armies of their spawned copies.
------
What is the most "trustable" agent? One that you can
squash like a bug if it threatens you in any way.
As recent events have shown -- if you give an agent
a certain amount of power, then ignore it, it may
grow to a level at which it can inflict significant harm
upon you. In which case you have to apply significant
resources to eliminating such agents and may have relatively
little confidence that you have been successful in doing so.
I do not believe that is the way that "advanced" civilizations
(or more probably SIs) will behave.
### Only one exception is needed. Are you saying that 100% of conceivable
SIs will limit themselves?
----
It seems to me that for an ATC to produce an expansionary
perspective it must give up a survival perspective.
### Why not? This trick, when applied to individuals, has been quite successful, as
ants and Wahhabis show. With large numbers of civilizations generated by
random drift, some will reach this spot in the behavior configuration space,
and then radiate, even if it means losing most of their primitive
characteristics.
Rafal