From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Apr 07 2002 - 22:32:41 MDT
Robert J. Bradbury wrote:
> On Sun, 7 Apr 2002, Adrian Tymes wrote:
>>Seriously: nuclear war could also be the end of humanity, yet many fewer
>>people complain about military solutions when their country is not
>>actively involved in a war.
>
> Adrian, I think this is highly unlikely. Even in an all-out nuclear war,
> many submerged submarines with nuclear reactors, ships at sea, remote
> outposts like the Antarctic, etc., would survive.
>
> Sure, humanity might be knocked back a few decades, but I doubt it
> would be more than a century.
>
> [If you want to assert that nuclear war would be the end of humanity,
> please point me to a peer-reviewed article (other than Sagan's nuclear
> winter proposition, which was subsequently found to be deficient).]
I was unaware of any proof that the nuclear winter proposition had been
found deficient. Even if it has been, there's still radioactive fallout
to consider. Not to mention that, if all urban areas were destroyed,
the resulting loss of the knowledge base would extend back at least to
the start of the Industrial Revolution - two and a half centuries, as
my history book put it.
>>Not to mention various plagues (natural or
>>manmade) whose cures could only be found by an AI - and, given when
>>they could strike, a self-bootstrapping "rogue" AI free of procedural
>>development constraints may be the only way to get such an AI in time.
>
> Ah yes, the Nagata "you mean you allowed the nano to *evolve itself*?"
> scenario (in response to alien "unstoppable" nano).
>
> Given the pace of molecular biology, I doubt we are going to encounter
> anything we can't relatively rapidly understand and counter within
> a few years.
Anthrax. Sure, we understand it: we *made* the stuff. But counter it?
Not on a meaningful scale before a massive anthrax attack inflicted
significant casualties on its target - which would take a few months,
not years. Repeat for other pathogens as they adapt, and...
>>In which case, *preventing* self-bootstrapping rogue AIs may mean the
>>end of humanity.
>
> Yes, that possibility does exist but I consider it a case where
> people would gladly make their systems available in national/world
> emergencies.
The time to develop AIs to head off a disaster - or to put any other
prevention in place - is before the disaster, not after.
>>Frankly, the known harm that your solution would impose far exceeds the
>>risk times possible danger of a self-bootstrapping rogue AI.
>
> What? Known harm? All I am asking is that (a) all DC/P2P projects
> be open source so the code can be reviewed (the trust but verify
> approach can solve the problem of people hacking the code and
> breaking things); and (b) that a very determined effort be made
> by people to ensure they can "trust" the code (high reputation
> suppliers, independent security review, etc.) to make it really
> hard for an AI to break into the system and use it for alternate
> purposes.
I recall earlier comments along the lines of untrusted code being made
illegal (in fact, or just in effect)... which would seem to strike
against any such project that lacked the resources or knowledge to gain
such trust but did gain wide popularity (ref: Kazaa). This might well
destroy all practical DC/P2P projects if none that focus on being
trusted happen to emerge - these projects draw some of their
popularity, in fact, from their "illegitimacy".
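(The cheapest slice of "trust but verify" is easy to mechanize, for
what that's worth - it only proves you received the bits a
high-reputation supplier actually published, not that the code is
safe; that still takes the human review you describe. A toy sketch in
Python, where the file name and digest are hypothetical placeholders:

    # Toy sketch: check a downloaded DC/P2P client against a digest
    # published by its supplier. "dc_client.bin" and PUBLISHED_SHA256
    # are made-up placeholders, not a real project's values.
    import hashlib

    PUBLISHED_SHA256 = "0" * 64  # stand-in for the supplier's digest

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("dc_client.bin") == PUBLISHED_SHA256:
        print("digest matches - same bits the supplier published")
    else:
        print("digest mismatch - do not run this binary")

Checking digests catches tampering in transit, nothing more.)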
> Regarding your comments and Samantha's on the problem of internodal
> communication speed: I thought I had mentioned that initial AIs
> might be sub-realtime; I apologize if I forgot to say so. The
> question becomes how fast one could advance to real time (or faster)
> on the networks that will be available in a few years. I refer
> you to Eugene's comments regarding plausible development scenarios.
> (Though from what he says, I suspect he has better ideas but is
> reluctant to disclose them. That's fine with me.)
Even on the networks that will be available in a few years, it would
still be *far* sub-realtime. Maybe once everyone has fiber (or
wireless optical) to their homes (which likely won't happen this side
of 2010), but maybe not even then.
> Keep in mind -- there ought to be 3 or 4 Eliezers out there in
> the Islamic world. Not something to be taken lightly if they
> choose the dark side. Just because Eliezer broke his mold
> doesn't mean others will be so fortunate. The Pakistani
> nuclear physicist who may have been working with bin Laden
> comes to mind.
3 or 4 Eliezers in the Islamic world, vs. *how* many in ours? Weighing
the risk of harm (times magnitude) that theirs might bring without
limitations against the chance of benefits (times magnitude) that ours
might bring without limitations, the balance falls very heavily in
favor of letting ours work unhindered, even if it means theirs work
unhindered too. Including, say, letting a very rich one buy up Kazaa's
makers and try inserting the rogue code you describe, just to see what
happens.
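To put rough structure on that weighing, here is a toy expected-value
comparison in Python. Every number below is a placeholder I made up
purely to show the shape of the argument, not an estimate:

    # Toy expected-value comparison: restrict all AI work, or let it
    # run unhindered? All four numbers are made-up placeholders.
    p_harm, harm_size = 0.1, 1.0        # theirs: risk times magnitude
    p_benefit, benefit_size = 0.5, 1.0  # ours: chance times magnitude

    expected_harm = p_harm * harm_size
    expected_benefit = p_benefit * benefit_size

    print("expected harm:   ", expected_harm)
    print("expected benefit:", expected_benefit)
    # With these placeholders the benefit side wins; the claim above
    # is that realistic numbers skew the same way.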