Re: [MURG] meets [POLITICS]

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon Apr 08 2002 - 07:06:09 MDT


On Sun, 7 Apr 2002, Adrian Tymes wrote:

> I was unaware of any proof that the nuclear winter proposition had been
> found deficient.

I believe that subsequent studies found Sagan's atmospheric
model, rates of dust clearance, etc. to be insufficient. [As a
group we really ought to put together a page on this, related
to "risks humanity faces".]

> Even if it was, there's still radioactive fallout to consider.

True, more deaths, but some will survive and the subsequent
population might be even more radiation-resistant. [Actually,
within a few years, if current efforts with Deinococcus radiodurans
reveal its tricks, we might have a 'don't bother with the clinical
trials' solution for radiation exposure.]

> Not to mention that, if all urban areas were to be destroyed,
> the resulting knowledge base loss would extend back at least to the
> start of the Industrial Revolution - two and a half centuries, as my
> history book stated it.

That isn't clear. I'm sure nuclear submarines come with complete
operating manuals. It's questionable whether you would lose every
chip fab. [A more interesting question is *where* are the chip
fabs of the CIA/NSA and other intelligence agencies...?]

You also probably need to consider whether various governments,
in light of the really paranoid days of the Cold War, didn't
start archiving a significant amount of the human "body of knowledge"
someplace where a nuclear war wouldn't touch it. I know I certainly
would have if I were convinced it might happen. [Yet another thing
we ought to know -- I suspect an FOI request could turn up the
answer to this.]

> Anthrax. Sure, we understand it: we *made* the stuff. But counter?
 [snip]

Actually, scientists at Harvard developed an anthrax anti-toxin
last year. I expect it's undergoing accelerated trials now.
I've got a paper written on how the technologies my new company
would develop would significantly accelerate the development of
novel anti-toxins (from months down to a week or two) if a plan
were in place for rapid response to novel bioweapons.

> The time to develop AIs to head off a disaster, or any prevention of a
> disaster, is not after a disaster, but before.

True, but this gets into the big debate about the difficulty
of developing defenses without first developing what they
are designed to defend against.

For responding to the bio- and chemico-weapons technologies,
one key element is robust methods for molecular modeling:
DC protein-folding projects, tightly integrated massively
parallel computers like Blue Gene, etc. Many of those we
already have or will have soon.
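The DC (distributed computing) model behind projects like Folding@Home
can be sketched in a few lines. This is an illustrative assumption on my
part, not any project's actual protocol: a server hands out work units
with simple redundancy, accepting a result only when two independent
clients report the same answer.

```python
import hashlib
from collections import defaultdict

class WorkServer:
    """Hands out work units and accepts results, with simple
    redundancy: each unit must be reported identically by two
    independent clients before it is accepted."""

    def __init__(self, units, redundancy=2):
        self.pending = list(units)           # units not yet verified
        self.redundancy = redundancy
        self.assignments = defaultdict(int)  # unit -> times handed out
        self.reports = defaultdict(list)     # unit -> results received
        self.accepted = {}                   # unit -> verified result

    def assign(self):
        for unit in self.pending:
            if self.assignments[unit] < self.redundancy:
                self.assignments[unit] += 1
                return unit
        return None  # nothing left to hand out

    def submit(self, unit, result):
        self.reports[unit].append(result)
        results = self.reports[unit]
        if len(results) >= self.redundancy and len(set(results)) == 1:
            self.accepted[unit] = results[0]
            self.pending.remove(unit)

def fold(unit):
    """Stand-in for a real protein-folding computation: here we
    just hash the unit so two honest clients always agree."""
    return hashlib.sha256(unit.encode()).hexdigest()[:8]

# Drive the "clients" until all units are redundantly verified.
server = WorkServer(["unit-1", "unit-2"])
while (unit := server.assign()) is not None:
    server.submit(unit, fold(unit))

print(sorted(server.accepted))  # both units accepted after two matching reports
```

The redundancy check is also the hook where "verification" matters: a
malicious client can lie about a result, which is exactly why the real
projects cross-check work units across unrelated machines.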

To solve the problem of nuclear bombs or dirty bombs we
are going to need autonomous insect-bots capable of detecting
radioactivity. And we are going to need *very* large numbers
of them. [I don't see having those very soon.]

> I recall earlier comments along the lines of untrusted code being made
> illegal (in fact, or just in effect)...which would seem to strike
> against any such projects that did not have the resources or knowledge
> to gain such trust, but did gain wide popularity (ref: Kazaa).

I may have said that. Given the risks that software like Kazaa might
hold, it isn't unreasonable to require that it be open source.
The government could also make people aware of the risks that
such software poses if it hasn't been "verified" (Underwriters
Laboratories or Consumer Reports models might work best here).
Then there is the ability, in a national emergency, to shut down
computers known to use such software. We almost got to this with
the ISPs with the viruses last summer/fall, without any intervention
by the government.
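At its most basic, the kind of "verification" an Underwriters
Laboratories of software might offer could amount to publishing a
digest of the known-good build for users to check downloads against.
A minimal sketch, with made-up file contents, purely for illustration:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def is_verified(package: bytes, published_digest: str) -> bool:
    """Accept the package only if its hash matches the digest
    published by the independent verifier."""
    return sha256_of(package) == published_digest

# Simulate a lab publishing a digest for a known-good build...
good_build = b"kazaa-client-v1.0"
published = sha256_of(good_build)

# ...and a user checking downloads against it.
print(is_verified(good_build, published))                # True
print(is_verified(b"kazaa-client-tampered", published))  # False
```

A hash only proves the bits match what the lab examined; the harder
part, as with UL, is the lab's audit of what the software actually does.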

> This
> might well destroy all practical DC/P2P projects if none that focus on
> being trusted just happen to emerge - these projects gain some
> popularity, in fact, from their "illegitimacy".

I would expect that "reasonable" people, corporations, institutions,
etc. that are concerned about security issues would gravitate towards
DC/P2P projects that were "verified". In turn, grant agencies such
as NSF or NIH (that support projects like Folding/SETI@Home) would
require verification to be part of the grant proposals because they
understand the risks involved. Stockholders wouldn't invest in
companies that were aware of the risks but didn't bother to do
the verification.

As Harvey says, it's a tough sell, but we have to start someplace.

> Even on networks that will be available in a few years, it would still
> be *far* sub-realtime. Maybe once everyone has fiber (or wireless
> optical) to their homes (which likely won't be this side of 2010), but
> maybe not even then.

It would only take an act of Congress in the U.S. to push this
forward *much* faster. I think there are bills being written
along these lines. Info-aware politicians are viewing this
as a competitive infrastructure, jobs, and economic-growth issue.

> 3 or 4 Eliezers in the Islamic world, vs. *how* many in ours?

I'm assuming that high IQs are a function of simple population size.
I was making the statement from the U.S. perspective. I suppose if
you add North America, Europe, Australia, and Japan, you might
roughly balance the Islamic population.

> the balance is very heavily in favor of letting ours work unhindered
> even if it means theirs work unhindered too.

The business costs of even *very* simple viruses doing nothing more
than scanning your hard drive for financial information (how do
we know the first application to hack Kazaa will not be to send
your Quicken records back to the Russian mob?) argue strongly
against allowing "unhindered access".

> Including, say, letting a very rich one buy up Kazaa's makers and
> try inserting the rogue code you describe, just to see what happens.

Fortunately, I think our fingers will probably get burnt a few
more times and then people and the government will "wake up".

Robert



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:13:19 MST