Re: Eugene's nuclear threat

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Sun Oct 01 2000 - 13:08:24 MDT


Eliezer S. Yudkowsky writes:

> Eugene, if you make a mistake, you've got a planet full of people who were
> wiped out by grey goo. As for the scenario where millions of lightyears of

Whoa! Wait, have you forgotten that SOP says we're to preemptively
nuke anybody who peaks out on the Armageddon scale? _Credible_ grey goo
developers certainly qualify (there goes that warhead).

Right now we're in the happy position of being able to discuss crisis
management strategies several decades before the End-Of-The-World
technology candidates step onto the stage. Let's not waste this
opportunity on pure rhetoric and mental wankery, and instead look for
possible solutions, if any. Thankfully, we're not the ones in charge,
so we're not hampered by all kinds of nasty constraints and selective
reality perceptions. Nothing is yet at stake if we fail here. So let's
keep an open mind. We might be able to produce something interesting
here, something which has validity outside this list.

> people get wiped out - if it were possible for one little planet to make that
> kind of mistake, and there are that many aliens in range, someone *else* would

We don't know whether someone already did. The wavefront may well be
on the way, but there is no way for us to know that in advance, as long
as we're not in the nucleus's light cone. By the time our expansion
wavefront hits a world, it may already be rife with intelligent life.
Have you ever seen the anime "Wings of Honneamise", or imagined Vinge's
universe without the Zones? Picture the wavefront piranhas breaking over
a universe like this. Whoosh.

> have already done it to *us*. It's just the Earth that's at stake.
 
About 6 billion eggs about to hatch in a single, not too large,
basket. The odds still do not seem to budge a single micron.

What options do we truly have in the face of Armageddon technology
candidates?

We can distribute the eggs over many baskets. This makes you pretty
much immune to grey goo and bioweapon extinction events, while still
leaving you vulnerable to SI and <some unspecified new-physics threat,
like something which lets you generate a desktop GRB, or supercritical
singularities near deep gravity wells>. These are excellent odds, so
getting as many people as possible into sustainable habitats off the
Earth's surface should be a priority. This boils down to cheap launches
to LEO, portable sustainable ecologies, and macroscopic autonomous
autoreplicators. Unfortunately, collectively we give very little
priority to this particular alternative reality branch. This one is a
real bummer, and might make us look mightily stupid (not to mention
very dead) in the long run. So let's start weaving these baskets.

What else can we do? We can look for ways to prevent egg-breaking
events, and we can look for ways to fortify the eggs, as many of
them as we can, as strongly as we can.

> And above all, if you make a mistake - you'll have made a mistake that
> involved deliberate murder.
 
I'm sufficiently not in denial of realpolitik to be cool with that. As
long as you're really sure you know what you're doing (including not
acting in haste and getting second and third and fourth opinions on
the matter, while walking the narrow line of implicitly deciding by
not deciding in time), are equally clear on the consequences of the
negative (not doing), and you're the one in the driver's seat, you have
to decide one way or the other. Errors will, regrettably, occur, but
that's strictly unavoidable.

If you haven't yet noticed, that's how the world works. (Though, in
most cases, further overlaid by multiple layers of incompetence and
generic confusion. A certain modicum of fatalism *does* help).

> According to your theories, we have a world where everyone is free to murder
> anyone who works on a technology that someone doesn't like. According to you,

Of course everybody is free to do anything they like; the question is
rather whether they can.

> the people destroying GM crops are not doing anything immoral; if they have a
> flaw, it's that they aren't bombing Monsanto's corporate headquarters. If you

Huh? I don't subscribe to the killer tomato theory. As long as you
don't start messing with modified rhinoviruses in a biological weapons
lab, the amount of damage you can possibly produce even in the worst
case is way below the Armageddon scale.

> work on an upload project, then anyone who distrusts uploads and thinks AI is
> safer has a perfect right to stop the uploading project by killing as many
> people as necessary. *I* would have the right to kill *you*, this instant,

Of course; the question is rather whether I'm sufficiently deterred by
the threat and whether compound security is adequate. Btw, I worked in
an animal research facility. I figured the odds were worth the risks.

> because you might slow down AI in favor of nanotechnology.
 
Runaway AI is of course worse than the grey goo scenario.

> Do you really think that kind of world, that systemic process for dealing with
> new technologies, would yield better results than the nonviolent version?
 
I don't know, but I prefer to keep an open mind. I'm not trying to
exclude certain solutions a priori, just because they're unpopular. I
will drop them if they're stupid, but so far I don't see why they're
stupid.

> Even Calvin and Hobbes know better than that.
 
Great, an imaginary tiger and a cartoon kid decide on world
policy. This only works in a cartoon.

> C: "I don't believe in ethics any more. As far as I am concerned, the
> end justifies the means. Get what you can while the getting's good –
> that's what I say. It's a dog-eat-dog world, so I'll do whatever I
> have to do, and let others argue about whether it's right or not."
>
> <Hobbes pushes Calvin into a mud hole.>
>
> C: "Why did you do THAT?"
> H: "You were in my way. Now you're not. The end justifies the means."
> C: "I didn't mean for everyone, you dolt! Just ME!

Where did I say that I'm exempt from extreme sanction? But,
then, if I ever wear a hat, it's lily white.


