Re: [MURG] meets [POLITICS]

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Apr 08 2002 - 04:14:48 MDT


Robert J. Bradbury wrote:

> On Sun, 7 Apr 2002, Adrian Tymes wrote:
>
>
>>Seriously: nuclear war could also be the end of humanity, yet many fewer
>>people complain about military solutions when their country is not
>>actively involved in a war.
>>
>
> Adrian, I think this is highly unlikely. Even in an all-out nuclear war,
> many submerged submarines with nuclear reactors, ships at sea, remote
> outposts like the Antarctic, etc., would survive.
>
> Sure, humanity might be knocked back a few decades, but I doubt it
> would be more than a century.
>
> [If you want to assert that nuclear war would be the end of humanity,
> please point me to a peer-reviewed article (other than Sagan's nuclear
> winter proposition which was subsequently found to be deficient).

Please point me to a peer-reviewed article that considers the Dr.
Strangelove position on the survivability of nuclear war and its
barely being a blip in the march of progress.

>
>
>>Not to mention various plagues (natural or
>>manmade) whose cures could only be found by an AI - and, given when
>>they could strike, a self-bootstrapping "rogue" AI free of procedural
>>development constraints may be the only way to get such an AI in time.
>>
>
> Ah yes, the Nagata "you mean you allowed the nano to *evolve itself*?"
> scenario (in response to alien "unstoppable" nano).
>
> Given the pace of molecular biology, I doubt we are going to encounter
> anything we can't relatively rapidly understand and counter within
> a few years.
>

I think you are quite aware that some plagues, not to mention a
nano weapon, act very rapidly, making a "few years" MUCH too late.

 
>
>>In which case, *preventing* self-bootstrapping rogue AIs may mean the
>>end of humanity.
>>
>
> Yes, that possibility does exist, but I consider it a case where
> people would gladly make their systems available in national/world
> emergencies.
>

Declared by whom?

>
>>Frankly, the known harm that your solution would impose far exceeds the
>>risk times possible danger of a self-bootstrapping rogue AI.
>>
>
> What? Known harm? All I am asking is that (a) all DC/P2P projects
> be open source so the code can be reviewed (the trust-but-verify
> approach can solve the problem of people hacking the code and
> breaking things); and (b) that a very determined effort be made
> by people to ensure they can "trust" the code (high-reputation
> suppliers, independent security review, etc.) to make it really
> hard for an AI to break into the system and use it for alternate
> purposes.
>

NO. Too easy for existing powers to maintain control and stifle
any innovation dangerous to their castles rather than dangerous
to humanity. Try something else.
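
To be clear, the mechanics of "trust but verify" are the easy part;
the fight is over who gets to do the verifying. A hypothetical sketch
(the file name and hash are made up, and REVIEWED_RELEASES stands in
for whatever independently audited manifest Robert has in mind) of a
DC/P2P client refusing to run a work unit whose code doesn't match a
reviewed release:

    import hashlib

    # Purely illustrative: maps a work-unit file to the sha256 of its
    # independently audited source. Who maintains this table is the
    # whole political question.
    REVIEWED_RELEASES = {
        "worker.py": "placeholder-sha256-of-the-audited-source",
    }

    def trusted(path):
        """Run nothing whose hash differs from the audited release."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return REVIEWED_RELEASES.get(path) == digest

The check itself is trivial; nothing in it answers who controls the
manifest, which is exactly my objection.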

> Regarding your comments and Samantha's on the problems of internodal
> communication speed: I thought I had mentioned that initial AIs
> might be sub-realtime; I apologize if I forgot to say that. The
> question becomes how fast it can advance to real time (or faster)
> on the networks that will be available in a few years. I refer
> you to Eugene's comments regarding plausible development scenarios.
> (Though from what he says, I suspect he has better ideas but is
> reluctant to disclose them. That's fine with me.)
>

It can't advance to faster than realtime on a world-wide net
unless it cracks lightspeed. This is a piss-poor and highly
unlikely scenario for which you seem willing to give up a lot of
freedom and allow a lot of control to be asserted over the one
place we can possibly keep much freedom. I will MUCH sooner take
my chances with an SI somehow coming into existence on
impossibly spread-out hardware than accept the certainty of
massive repression of all of us if the government gets more
excuses to clamp down on software and on the Net. Not that they
need any more excuses, apparently.
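
A back-of-the-envelope check, with round numbers of my own rather
than anything Robert cited:

    # Light in fiber covers ~200,000 km/s (about 2/3 c); the antipode
    # is ~20,000 km away along the surface (half of Earth's ~40,000 km
    # circumference).
    C_FIBER_KM_S = 200000
    ANTIPODE_KM = 20000

    one_way = float(ANTIPODE_KM) / C_FIBER_KM_S   # ~0.1 s
    round_trip = 2 * one_way                      # ~0.2 s

    print("one-way latency : %.0f ms" % (one_way * 1000))
    print("round trip      : %.0f ms" % (round_trip * 1000))
    print("global sync rate: %.0f Hz" % (1 / round_trip))
    # ~5 globally synchronized steps per second. Biological neurons
    # manage ~100 Hz; a single CPU, ~10^9 Hz.

Five global steps a second is not "faster than realtime"; it is
orders of magnitude slower than the hardware sitting in one room.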


