Re: [MURG] meets [POLITICS]

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Apr 07 2002 - 20:41:07 MDT


On Sun, 7 Apr 2002, Adrian Tymes wrote:

>
> Seriously: nuclear war could also be the end of humanity, yet many fewer
> people complain about military solutions when their country is not
> actively involved in a war.

Adrian, I think this is highly unlikely. Even an all-out nuclear war
would leave survivors: many submerged submarines with nuclear reactors,
ships at sea, remote outposts like the Antarctic, etc.

Sure, humanity might be knocked back a few decades, but I doubt it
would be more than a century.

[If you want to assert that nuclear war would be the end of humanity,
please point me to a peer-reviewed article (other than Sagan's nuclear
winter proposition, which was subsequently found to be deficient).]

> Not to mention various plagues (natural or
> manmade) whose cures could only be found by an AI - and, given when
> they could strike, a self-bootstrapping "rogue" AI free of procedural
> development constraints may be the only way to get such an AI in time.

Ah yes, the Nagata "you mean you allowed the nano to *evolve itself*?"
scenario (in response to alien "unstoppable" nano).

Given the pace of molecular biology, I doubt we are going to encounter
anything we can't understand and counter within a few years.

> In which case, *preventing* self-bootstrapping rogue AIs may mean the
> end of humanity.

Yes, that possibility does exist, but I consider it a case where
people would gladly make their systems available in national/world
emergencies.

> Frankly, the known harm that your solution would impose far exceeds the
> risk times possible danger of a self-bootstrapping rogue AI.

What? Known harm? All I am asking is (a) that all DC/P2P projects
be open source so the code can be reviewed (the trust-but-verify
approach can solve the problem of people hacking the code and
breaking things); and (b) that a very determined effort be made
by people to ensure they can "trust" the code (high-reputation
suppliers, independent security review, etc.), to make it really
hard for an AI to break into the system and use it for alternate
purposes.
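
To make (b) concrete, here is a minimal sketch (in Python) of the
"verify" half of trust-but-verify: checking a downloaded client
against a checksum published out-of-band by a high-reputation
supplier. The placeholder digest and file names are mine, purely
for illustration, not taken from any actual DC/P2P project.

import hashlib
import sys

# Hypothetical value for illustration: a real deployment would take the
# published digest from a high-reputation supplier over an independent,
# preferably signed, channel, not from the same server as the binary.
PUBLISHED_SHA256 = "0" * 64  # placeholder digest

def verify_client(path, expected_hex):
    # Hash the candidate client binary in chunks so large files are fine.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

if __name__ == "__main__":
    client_path = sys.argv[1]  # e.g. the downloaded DC/P2P client
    if verify_client(client_path, PUBLISHED_SHA256):
        print("Checksum matches the published value; OK to install.")
    else:
        print("Checksum mismatch; do NOT run this client.")

Of course the hash only proves you got the bits the supplier
intended; it is the independent review of the open source, per (a),
that gives the published digest any meaning.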

Regarding your comments and Samantha's on the problem of internodal
communication speed: I thought I had mentioned that initial AIs
might be sub-realtime; I apologize if I forgot to say so. The
question becomes how fast one could advance to real time (or faster)
on the networks that will be available in a few years. I refer
you to Eugene's comments regarding plausible development scenarios.
(Though from what he says, I suspect he has better ideas but is
reluctant to disclose them. That's fine with me.)

Keep in mind -- there ought to be 3 or 4 Eliezers out there in
the Islamic world. Not something to be taken lightly if they
choose the dark side. Just because Eliezer broke his mold
doesn't mean others will be so fortunate. The Pakistani
nuclear physicist who may have been working with bin Laden
comes to mind.

Robert


