RE: The Major League Extinction Challenge

From: Billy Brown (ewbrownv@mindspring.com)
Date: Thu Jul 29 1999 - 10:11:49 MDT


Eliezer S. Yudkowsky [SMTP:sentience@pobox.com] wrote:
> Problem is, choosing to commit suicide is still a choice - and that's
> not what I'm hypothesizing. At that level, I don't have the vaguest
> notion of what would really happen if an SI's goal system collapsed.
> The whole lapse-to-quiescence thing in Elisson is a design feature that
> involves a deliberate tradeoff of optimization to achieve a graceful
> shutdown.

I know. But if they all have the same highly optimized cognitive
architecture by the time they reach this point, and it happens to produce a
similar result in each of them, then we could actually get the whole
civilization to shut down. Of course, that still leaves everyone's
automation running, and a lot of it is likely to be sentient...

No, I don't think it could really happen either. The whole scenario is
just too contrived, and it definitely requires that the target civilization
make mistakes (which isn't exactly a productive assumption when you're
talking about SIs). However, it beats anything else I've seen. Killing
off a civilization like this without sterilizing the universe is pretty
hard to do.

> Well, if you're interested in a not-so-known-laws-of-physics
> speculation: The various colonies achieve SI more or less
> simultaneously, or unavoidably. The first thing an SI does is leave our
> Universe. But, this requires a large-scale energetic event - like, say,
> a supernova.
>
> Still doesn't solve the Great Filter Paradox, though. Some hivemind
> races will have the willpower to avoid Singularity, period. This
> scenario takes mortals and Powers out of the picture during a
> Singularity, but it doesn't account for the deliberate hunting-down that
> would be needed.

Agreed.

> I think the most plausible argument is this: Every advance in
> technology has advanced the technology of offense over the technology of
> defense, while decreasing the cost required for global destruction.
> There are no shields against nuclear weapons - not right now, anyway -
> and we've certainly managed to concentrate that power more than it's
> ever been concentrated before. In fact, the more technology advances,
> the easier it becomes to cause mass destruction by *accident*. It holds
> true from nuclear weapons, to biological warfare, to the Y2K crash, to
> nanotechnology. All you really need to assume is that the trend
> continues. Eventually one guy with a basement lab can blow up the
> planet and there's nothing anyone can do about it.

Maybe. But I think your examples are artifacts of our recent historical
situation. Nuclear weapons are superweapons primarily because we can't
disperse ourselves properly (Earth being too small for that purpose), and
biological weapons have mass-destruction potential because we can't improve
our immune systems.

If we were going to have a future without AI/IA, I think it is clear that
these trends would reverse within a century. Spreading into interplanetary
space would give us plenty of room to build economical defenses against
nuclear weapons, and the combination of sealed environments and competent
genetic engineering would make bioweapons a very limited threat.
Evaluating the potential of nanotechnology is always tricky, but I think
it actually ends up giving the defender a substantial advantage once it
reaches a reasonable level of maturity.

IMO, the hard part is surviving the transition period we are currently in,
where we have increasingly advanced destructive technologies but there is
no room in which to deploy countermeasures.

Billy Brown, MCSE+I
ewbrownv@mindspring.com


