Re: Eugene's nuclear threat

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Oct 01 2000 - 13:57:42 MDT


Eugene Leitl wrote:
>
> Eliezer S. Yudkowsky writes:
>
> > Are you so very, very sure that you know better than the people running the
> > facilities? If the people with enough understanding and experience to have
>
> No.
>
> But if I make a mistake, we've got a bunch of dead researchers who
> wouldn't listen. If they make a mistake, we've got millions of
> lightyears of planets full of dead people. (Aliens are also people, of
> course). Dunno, these odds sound good to me.
>

Wrong. The entire attitude is grounded in nothing but fear, rather than
in finding a positive approach to guide what is going to develop in any
case, and that attitude is more poisonous than what you fear. You (and
whoever else you can persuade to such a view) stand to one side and act
as if you can do the impossible: judge with certainty what is safe and
what is not, and keep the world safe by destroying everything you see
as unsafe. Yet you yourself have argued this does not work.

Those dead researchers were also a large part of humanity's hope of
transcending this mudball. Thank you very much.

> On a less galactic scale, there is this crazy molecular biologist down
> the street, who subscribes to the "Green Planet, Death to the People"
> church. (Man, these people are sure not Friendly). Funded by fellow
> billionaire church members, he has managed to engineer a
> long-symptomless-latency high-infectivity high-delayed-mortality
> bioweapon, consisting of a dozen diverse virus families. You know
> he has made successful tests on primates and people (bums snatched off
> the street), and intends to start releasing the stuff in all major
> airports and subways as well as in stratospheric bursts (properly
> packaged). All the numerical epidemic models they've run predict >99%
> infection and >95% mortality. In other words, the threat is
> rather believable. Because they're rather paranoid, they've got a
> device to reliably (they're very good engineers) self-destruct the
> entire facility, triggerable from a room you just managed to
> penetrate. (Well, I've seen an old James Bond movie yesterday).
>
> Would you press the big red button, instantly killing all people on
> property and safely destroying all virus cultures and the information
> on how to make them?
>

That is a little more immediate and more directly aimed at destruction.
I would suggest, though, that growing a positive future scenario, one
that gives bright babies something better to use their brains for than
figuring out how to act out their own particular fears, is probably the
most fruitful way to avoid, or at least minimize, such situations.

It is not too hard to think up a variety of ways to destroy and fuck up
on a massive scale. It is much harder to create, and especially hard to
create an overarching vision for our science and technology to bring
into reality, one that can get buy-in from more than just us nerds.

>
> I'm just smart enough to know no one can be smart enough to predict
> what a superhuman Power is going to do. At the same time, Turing,
> Goedel & footnotes to them say you can't, and game theory and
> evolution theory say it wouldn't be a smart thing to try, since they
> offer only some constraints on the behaviour of Powers, constraints
> which don't look too pretty if you happen to be on the human
> receiving end of them.
>

The best way to be reasonably sure that we won't create our own
destroyer is for humanity to become the Power, instead of trying to
create something separate from ourselves. Create SI as part of our own
being and growing edge. Learn to care for and evolve the entire body
instead of just certain parts of the head. Then we will have our best
chance.

 
>
> Thankfully, you will never have authority over enough resources for a
> SI project likely to succeed, but even so, just saying those words can
> make life less pleasant for all of us. You should know better.
>
> Seriously, who's playing the architect of humankind's future destiny
> here? You think you're smart enough for that?
>

I think that all of us together are as smart as we get and that we need
to learn to work together really well if we are to make a difference.

 
> Instead of trying to persuade people to pull over into a high enough
> fitness regime by dangling enough juicy carrots in front of their
> noses before embarking on a project to end all projects, or ending up
> there spontaneously, you say "people no good, I'm also only human, but
> I know what is good for the rest of them, so I'll just go ahead, and
> will do it, the faster, the better". That sounds smart, for sure.
>

This is a good point.
 
- samantha
