Peter C. McCluskey writes:
> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >Perhaps unpredictable in some dimensions, but not in others. If a
> >being starts out with goal A, then the only way it could switch to a
> >different goal B is through outside intervention or accident. (For it
>
> Suppose some DNA molecules set for themselves the goal of maximising
> the quantity of DNA, and one of the tools they create for this purpose
> is human beings, who decide to upload and replace all DNA-based life
> with more efficient implementations of life.
> Does this involve outside intervention or accident?
Accident, since it was not an intended consequence of the DNA's plan.
The accident could happen because the DNA lacked the means to
economically program its own goals into human beings. It might have
taken too much genetic information; in any case, the necessary
evolutionary pressures did not exist to code in detail the synaptic
weights of a brain module big enough to have a conception of what a
DNA molecule is, and able to monitor progress toward DNA replication
well enough to include the effects of modern technology.
How could there have been a selective pressure to evolve an aversion
to replacing DNA-based life with more efficient implementations?
A superintelligence, on the other hand, will have some understanding
of technology. So if it wanted to make as many copies of itself as
possible in its own medium, it wouldn't switch to a different
medium -- that would be plain stupid.
(BTW, could it not also be argued that if we are to ascribe goals to
DNA molecules, the most correct goal to ascribe might not be "make as
many similar DNA molecules as possible", but rather "make as many
instances of the same basic DNA information pattern as possible"? In
the past, these two goals would have led to the same actions. In the
future, assuming the latter goal might imply different predictions.
If those predictions are borne out, doesn't that give us reason to
say that ascribing the latter goal was more correct?)
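One way to cash out that test: treat each candidate goal as a scoring
rule over possible futures, and credit whichever goal's top-scoring
action matches what is actually observed. A toy sketch (the outcomes
and scores below are my own illustrative inventions, not anything
read off the biology):

    # Toy intentional-stance test: which ascribed goal predicts the
    # observed behaviour? The outcome scores are purely illustrative.
    outcomes = {
        "preserve DNA molecules as-is":    {"molecules": 1.0, "pattern_copies": 1.0},
        "re-implement life, keep pattern": {"molecules": 0.0, "pattern_copies": 10.0},
    }
    goals = {
        "maximise similar DNA molecules": lambda o: o["molecules"],
        "maximise copies of the pattern": lambda o: o["pattern_copies"],
    }
    observed = "re-implement life, keep pattern"

    for name, score in goals.items():
        prediction = max(outcomes, key=lambda a: score(outcomes[a]))
        print(name, "predicts:", prediction,
              "(borne out)" if prediction == observed else "(refuted)")
    # If the observed future matches only the pattern-goal's prediction,
    # that is the sense in which ascribing that goal was "more correct".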
> >What would be the advantage of dissolving the
> >singleton? Think of it like this:
>
> I find it unlikely that designing a singleton to handle a near-term
> singularity would imply designing something that would remain unified
> over interstellar distances, so I'm assuming these are two independent
> problems.
Well, power tends to seek to perpetuate itself. For example, suppose
that in order to survive the singularity we constructed a dictator
singleton, i.e. we gave one man total power over the world (a very
bad choice of singleton, to be sure). Then we wouldn't necessarily
expect that dictator to step down voluntarily when the singularity is
over, unless he were a person of extremely high moral standards. I
think the same could easily happen with other choices of singleton,
unless it were specifically designed to dissolve.
I'm not denying, though, that it might well be possible to design a
singleton so that it will dissolve. But I don't think it would
dissolve unless we took special steps to ensure that it would.
> >> If those other civilizations haven't constrained themselves the way
> >> the singleton has, it may be unsafe to wait until seeing them to
> >> optimize one's defensive powers.
> >
> >Yes, though I think the main parameter defining resilience to attack
> >might be the volume that has been colonized, and I think the
> >singleton and all other advanced civilizations would all be expanding
> >at about the same rate, close to c.
>
> I think this depends quite strongly on your assumption that the goal
> of remaining unified places few important constraints on a civilization's
> abilities.
It follows from the assumption that it's mainly the colonized volume
that determines the military strength of an advanced nanopower,
together with the assumption that everybody would expand at very
close to c.
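To make the arithmetic concrete: if every power expands as a sphere
at very close to c, a rival with a head start of T years holds only a
fixed radius advantage of about c*T, so the ratio of colonized
volumes, ((t+T)/t)^3, falls toward 1 as t grows. A toy calculation
(the expansion-at-c model is just the assumption stated above; the
1,000-year head start is an arbitrary illustrative figure):

    # Toy model: two powers expanding spherically at ~c, one with a
    # 1,000-year head start. Volume ratio = ((t + T) / t)**3.
    def volume_ratio(t, head_start):
        """Older power's colonized volume over the younger's, after
        the younger has been expanding for t years."""
        return ((t + head_start) / t) ** 3

    for t in (1_000, 10_000, 100_000, 1_000_000):
        print(f"{t:>9} years: ratio = {volume_ratio(t, 1_000):.3f}")
    # The ratio tends to 1: a fixed head start buys a shrinking
    # relative advantage, so the volumes (and hence, on the above
    # assumption, the military strengths) become roughly comparable.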
> you should think carefully about the ideas in Vinge's story
> The Ungoverned (available in _True Names and Other Dangers_).
Thanks for the reference, I'll look it up.
_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb