From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue May 14 2002 - 06:05:54 MDT
On Mon, 13 May 2002, Lee Corbin wrote:
> That could happen. But if the Singularity happens in an AI,
> then it'll almost surely have the cooperation with the humans
> who fed it from the beginning, and they'll be happy with it
> so long as they believe that it's not gone off the rails.
But that requires the strong assertion that you can know whether
or not it has gone off the rails, and that it is not inherently
unconstrainable.
This is the fundamental problem I have with the sharp takeoff
scenario. A transcendent AI has to be concentrated (the
speed-of-light delay presumably cannot be transcended). If it
is concentrated, it has to consume a lot of power and produce a
lot of heat (as well as be vulnerable to high doses of radiation).
This was the entire point of RF's Ecophagy paper -- so long
as you have detection systems in place (which we *already*
have, to a certain extent) it's impossible for this
situation to develop without attracting attention.
So the only situation in which you could avoid the unplugging,
radiation sterilization, etc. of rapidly expanding nano/AIs
is if they were in hyperstealth mode (e.g. occupying all
of the bacteria on the planet or taking 1% of the CPU time
of all the computers on the planet).
[As an aside, this discussion raises a very interesting
question of whether only 4000 nuclear weapons in the
hands of the U.S./Russian military are sufficient to
execute a rapid AI sterilization scenario, should that
be needed. My mind doesn't want to go there, but that
situation cost us 3000 lives last year.]
> And before long, they wouldn't be able to affect it much
> at all, anyway. (The Singularity Institute has written at
> length about this, of course.)
Precisely why the development of concentrated AIs is
something that needs a lot of careful open-source review.
That doesn't guarantee that we will get it right, but it
gives us a fighting chance.
> So, returning to your proposal, either by accident or design,
> some matter is projected into space:
>
> > It only requires getting a small amount of matter into space.
> > Beaming photons into space isn't more expensive than beaming
> > them in horizontal directions.
>
> But if this happens, maybe there would be an off-planet
> pinnacle of development.
Sure. You only have to hurl onto a nearby NEO a nanorobot
with the capacity to turn it into an increasing amount of
computronium (my variant of the term vs. the Margolus/Toffoli/
Yudkowsky variants). You can beam up the improved designs
until it becomes self-evolving.
Given our current space launch procedures, I doubt there
is any significant protection against this scenario.
> I have no idea how to estimate how much mass is needed
> for prolonged advancement. Does anyone?
A 1 km sized asteroid provides a *lot* of material.
A sphere 1 km in radius contains ~4.2x10^9 m^3 of material
= ~4.2x10^15 cm^3, which, given a ratio of ~10^5 human brain
equivalents per cm^3, yields ~4.2x10^20 human brain equivalents
per asteroid. Of course this needs to be reduced to allow for
power harvesting and heat radiation requirements, but it is a
scary figure to be sure.
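A minimal back-of-envelope sketch (Python) of the arithmetic above,
assuming a spherical asteroid of 1 km radius and the ~10^5 human
brain equivalents per cm^3 ratio used in this post -- both figures
are assumptions here, not established constants:

    import math

    radius_m = 1_000.0          # assumed spherical asteroid, 1 km radius
    brain_equiv_per_cm3 = 1e5   # ratio assumed in this post

    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3     # ~4.2e9 m^3
    volume_cm3 = volume_m3 * 1e6                          # ~4.2e15 cm^3
    brain_equivalents = volume_cm3 * brain_equiv_per_cm3  # ~4.2e20

    print(f"volume: {volume_m3:.2e} m^3 = {volume_cm3:.2e} cm^3")
    print(f"human brain equivalents: {brain_equivalents:.2e}")

Running it gives ~4.2x10^20 brain equivalents, before discounting
for the power and heat handling mentioned above.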
Robert