From: Hal Finney (hal@rain.org)
Date: Thu Sep 04 1997 - 15:12:20 MDT
Damien Sullivan writes:
> [Issue of nanotech power preemptively taking out its neighbors]
> I raise the possibility of disagreement precisely because our ethical
> systems militate against it so strongly. Our evolved morality may know
> more than you do.
This is an interesting question. Surely at many times in the past,
circumstances have arisen in which one power has an overwhelming but
temporary military advantage over another. It would be tempting to
engage in a preemptive strike and destroy the potential competitor.
Yet most ethical systems would condemn such behavior as wrong.
If we think of ethics as distilling our experience of the long-term
consequences of our actions, then this suggests that there is something
mistaken in the reasoning in favor of preemptive strikes.
A recent historical example would be the situation immediately after
WWII, when the U.S. had sole possession of the atomic bomb. There was
undoubtedly debate about using this power preemptively against the USSR,
our ally during the war but an inherent ideological adversary.
Indeed, the cost of not conquering the Soviet Union was considerable:
the Cold War; years of mistreatment of its population and its ecology
by the Soviet government; justification of American excesses as necessary
to stop the Red menace.
However, if the U.S. had preemptively attacked Russia after WWII and
destroyed it as a potential competitor for decades to come, things
might easily have been worse. Certainly the U.S. would have been a less
trusted ally and partner in the world, more a heavy-handed, feared tyrant.
And as things turned out, the USSR eventually fell apart, its constituent
republics haltingly moving towards democracy. This is a victory for
our ethical standards, as the USSR learned that its policies were wrong
in an absolute sense, that is, they were not in accordance with nature.
Even with all the years of suffering caused by the existence of the USSR,
the world is very likely a better place today than it would have been
after 50 years under a nuclear-enforced Pax Americana.
The question remains whether (to make the case most starkly) some power
in sole possession of vastly sophisticated nanotech would be making a
similar mistake by unilaterally destroying its competition.
What is the crux of the mistake in preemptive attack? What is it
about such actions which makes us instinctively feel that there will be
negative consequences?
One concern is pragmatic. Any attack is going to be imperfect. There
will always be some residual resentment and hatred, and the less justified
the attack, the worse that will be. This will fester for years, and
eventually the resistance may get enough power to strike back effectively.
It seems, though, that the nanotech case is different. It is hard
to imagine a takeover which allows cells of resistance to remain.
Consider a time- and space-limited gray goo which destroys some region
utterly, then decomposes into useful raw materials for new construction.
Nano is so much more powerful and thorough than any previous technology
that this pragmatic concern does not seem relevant.
Another issue is less tangible. It could be argued that by taking an
action which is evil, you make yourself more likely to take other evil
actions in the future. In the nuclear American empire scenario, we can
easily imagine that, having used nuclear weapons first on Japan and then
on Russia, the U.S. would go on to use them against China, Vietnam, Cuba,
or any other country that dared to resist. It might become necessary to
crack down on dissent at home as these outrageous actions provoked
protests. You could
end up with the worst tyranny imaginable.
Similarly, a nanotech power which is so paranoid and aggressive as to
take the step of eradicating everyone else on the planet may find it
difficult to survive on its own terms. Paranoia would rule, and lacking
external enemies it might either imagine them (potential alien species) or
create them internally (subsystems which seem to have too much autonomy).
The conflict between the need to extend power as rapidly as possible
across star systems (in order to be strong when alien enemies are met)
and the need to keep complete uniformity of purpose and will (to prevent
internal dissension) will be difficult to resolve. The power may be
forced into a rigid, simplified mental stance which can be replicated
reliably and applied uniformly, with multiple safeguards against evolution
and alteration.
The result would be a nightmare Borgism, a nearly mindless plague whose
only goal was conquest, spreading throughout the universe. This would
all flow from that first step of destruction.
Consider, in contrast, an entity which takes the harder road from the
beginning, seeking to embrace diversity and work with competitors who
are its equals. Its survival is less certain; the resources it will be
able to command directly will be more limited. But the diversity which
results will be a positive benefit in and of itself. And the need to
deal flexibly and creatively with competitors will arguably make it
better prepared to deal with surprises which the universe throws at it
in the future.
Granted, this is a pretty fuzzy argument. In particular, the notion
of being tainted by evil actions sounds melodramatic and old-fashioned.
But as Damien says, our ethical systems do embody a considerable history
of experience, even when wrapped in mystical or religious trappings.
An ethical meme which kills its host is less likely to survive. So we
should not set aside our ethical views too lightly. Evil actions may have
subtle negative consequences which we tend to overlook.
Hal