From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Thu Jan 10 2002 - 12:55:10 MST
Sigh!
On Thu, 10 Jan 2002, Eugene Leitl wrote:
> Yes. The population of agents randomly exploring behaviour space, with
> iterative selection for the most expansive ones radiating from the nucleus,
> applies.
An advanced intelligence is only going to create an agent
to do the exploring *if* it can guarantee that that agent
will never decide to consume the resources of the creator.
Can it do that? Only if it significantly handicaps the
evolutionary paths the agent may pursue (IMO). (Of course,
if you want to give birth to children that consume you, so
be it -- there are models for this in nature -- but it isn't
a very good survival strategy, IMO.)
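(To make that dynamic concrete, here is a toy simulation -- my
own sketch, every number invented -- of a population under
iterative selection for expansiveness. The point is how few
generations pass before the lineage's cumulative resource claim
rivals its creator's, unless the mutation step itself is
handicapped:

    import random

    CREATOR_RESOURCES = 1000.0   # hypothetical resource base of the creator

    def evolve(generations=40, pop_size=50):
        # each agent is reduced to one heritable trait:
        # resources claimed per generation
        population = [random.uniform(0.1, 1.0) for _ in range(pop_size)]
        claimed = 0.0
        for gen in range(generations):
            # each agent spawns two mutated children
            children = [max(0.0, a + random.gauss(0.0, 0.2))
                        for a in population for _ in range(2)]
            # iterative selection for the most expansive ones
            population = sorted(children, reverse=True)[:pop_size]
            claimed += sum(population)
            if claimed > CREATOR_RESOURCES:
                return gen   # the lineage now out-consumes its creator
        return None

    print(evolve())   # typically a dozen or two generations

Cap the mutation step -- handicap the evolutionary paths -- and
the crossover recedes; leave it free and it is merely a matter
of time.)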
> Not only is this engineer-morphic but also biomorphic. A lot of
> assumptions are necessary to explain the visible universe while being
> compatible with the existence of advanced cultures. No assumptions are
> required if you assume that they're not there.
No assumptions are required if you assume the singularity
rapidly drives such cultures to optimality.
I'm going to repeat this every time this discussion comes up.
WE DO *NOT* HAVE THE TECHNOLOGY TO DETECT THAT THE UNIVERSE
IS NOT IN LARGE PART ALREADY ENGINEERED.
The faulty assumption is that if the engineering could
be done, everything would appear "engineered" (to us).
*We* can engineer, yet there are parts of the Earth and
solar system that appear very un-engineered -- whether
by intent or simply because we haven't gotten around
to engineering them: there is no economic benefit to
doing so at this point (or we lack the matter and
energy resources).
One can make a strong assertion that in the beginning there
was no engineering. One may make an assertion that in
the end (after all the protons decay, if they do so)
there will be no surviving engineering. But given that
we and our engineering exist -- the simplest assumption
is that we are someplace in the middle of the development
of engineering the universe.
The historic arguments have been that "seeds" are cheap
or "independent actors" will explore and colonize.
But you have to get this (I've thought about it a
lot): once you know you have trumped the hazard
function of the galaxy, "seeds" and "independent
actors" are significant threats to your long-term
survival!
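(By "hazard function" I mean the standard survival-analysis
notion: h(t) = f(t)/S(t), the probability per unit time that a
civilization which has survived to time t is destroyed in the
next instant. Once you have engineered the external h(t) down
to effectively zero, your expected lifetime is dominated by
whatever residual risks you create for yourself -- and seeds
and independent actors are exactly such risks.)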
*So*, one may become an essentially existentialist actor --
"I might as well live today, because sooner or later
my children, agents, seeds, etc. will eliminate me."
*Or* one says, "I am not going to produce anything
that may compete with me for future resources." (Thus
one only creates sub-minds running on one's own hardware,
and one deletes those programs once they have outlived
their usefulness.)
What is the most "trustable" agent? One that you can
squash like a bug if it threatens you in any way.
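(A toy illustration of that policy, with hypothetical names and
numbers -- obviously no SI runs on Python's multiprocessing: the
sub-mind runs as a child process on hardware you own outright,
and is terminated unconditionally the moment it exceeds its
mandate:

    import multiprocessing
    import queue

    def sub_mind(task, results):
        # the agent does its exploring here, on the creator's own
        # hardware, with no channel outward except this queue
        results.put("explored: " + task)

    def run_squashable(task, deadline_s=5.0):
        results = multiprocessing.Queue()
        agent = multiprocessing.Process(target=sub_mind, args=(task, results))
        agent.start()
        agent.join(timeout=deadline_s)
        if agent.is_alive():          # outlived its usefulness: squash it
            agent.terminate()
            agent.join()
            return None
        try:
            return results.get(timeout=1.0)
        except queue.Empty:
            return None

    if __name__ == "__main__":
        print(run_squashable("behaviour space, sector 7"))

The design point is that trust never enters into it: the agent's
continued existence is a unilateral decision of the creator.)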
As recent events have shown, if you give an agent
a certain amount of power and then ignore it, it may
grow to a level where it can inflict significant harm
upon you. In that case you must apply significant
resources to eliminating such agents, and you may have
relatively little confidence that you have succeeded.
I do not believe that is the way that "advanced" civilizations
(or more probably SIs) will behave.
Robert