Eliezer S. Yudkowsky wrote:
> Billy Brown wrote:
> >
> > Scenario 2 - Nanotech Doomsday
> > Assumptions:
> > Automated engineering is much easier than nanotech, and will thus be
> > implemented substantially sooner.
> >
> > Scenario 3 - The Hard Takeoff to Singularity
> > Assumptions:
> > Automated engineering and nanotech are problems of similar
> > difficulty, and will develop together.
>
> I'd reverse the outcomes. Primitive nanotech results in destabilizing
> competition and wars fought with half-baked weapons (sc. 3). Instant
> omnipotent drextech lets the winner take over the world without much
> fuss and even evacuate the planet in case of emergency. Problem is, I
> think nanotech will start primitive.
Hmm. It depends on what you mean by 'primitive' and 'advanced'. I don't think instant mature nanotech is probable - the computers you need to design it can't be built without primitive nanotech or many decades of top-down evolution.
On the high side of the danger zone, the leading power advances so quickly that nothing anyone else does can threaten it. On the low side, the rate of change is slow enough that the social order can adapt, or at least avoid being suicidally stupid.
The scenarios I listed would fall out like this:
         slow advance              danger zone           fast advance
<------------------------------|---------------------|-------------------->
          <----4---->             <----3---->          <---------2--------->
I suppose we could put in a '1.5' for very fast advance, but it seems very unlikely - if automated engineering is that easy, a seed AI should also be feasible, and we're back to scenario 1.
Billy Brown, MCSE+I
bbrown@conemsco.com