Re: TERRORISM: Is genocide the logical solution?

From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Sep 17 2001 - 12:19:59 MDT


I think my response to this can be summed up in: "Robert, what do you
consider human life and transhumanism to be about?"

On Sun, Sep 16, 2001 at 10:27:14PM -0700, Robert J. Bradbury wrote:
>
> First I will respond to Anders:
> > The ethical is most serious: you assume that human lives can have
> > negative value, and do an essentially utilitarian calculation not
> > of happiness, but simply of utility towards a certain goal.
>
> Anders, first, can you make a reasonable case that lives that
> have as a fundamental raison d'etre the elimination of other
> lives do not have a "negative value"? Second, can you make the
> case that individuals who openly support or fund individuals who
> have as their raison d'etre the elimination of other *innocent*
> individuals from the planet are not of "negative value"?

Yes. Even the most vile, destructive human has a positive ethical value,
which is usually called "human dignity" (I refer to the June archives
for my posts on this subject). Such a person is worth stopping, but
deliberately ending his life should never be done unless there are
absolutely no other ways to handle him. You mix up this form of human
*ethical* value with the *practical* value a person may have for a
certain goal (which can indeed be both positive and negative).

But this is not about whether we would be justified in killing Bin Laden
if any opportunity presents itself - your post dealt with the deliberate
genocide of millions of people! Can you claim that every one of these
people has as their raison d'etre the elimination of other lives? Even
by your own assumption that human value equals the value to your
project, you would be bound to kill off positive-valued people. And if you
accept this, why limit yourself to Afghanistan, why not bomb Redmond too
in order to promote better software development? Can you see where this
is leading?

> Bottom line for me -- lives dedicated to the destruction of
> other lives (or supporting the destruction of other lives)
> are clearly unextropic. So my previous post, taken at face value,
> is clearly unextropic. [NOTE THIS QUITE CAREFULLY -- I
> HAVE PROPOSED A POTENTIALLY SELF-CONTRADICTORY SOLUTION
> AND AM WELL AWARE OF THAT.]

Are you dedicated to the enhancement of other lives or their
destruction? You cannot enhance other lives by destroying them, and
destroying lives is only constructive insofar as those lives were
themselves extremely destructive. This is why "non-initiation of force"
is so important - it is the line that shows when deadly force may be used
to stop someone. Just that you don't like them or that they *might*
become dangerous is not enough.

The unextropic aspect of your previous post was not that you wanted to
kill the evildoers (that *might* be a justified defense), but that you
had given up hope for 25 million people and were treating them all like
a hill in the way of your new singularity-railway.

> Anders continues:
> > The core idea of transhumanism is human development, so that we
> > can extend our potential immensely and become something new. This
> > is based on the assumption that human life in whatever form it may
> > be (including potential successor beings) is valuable and worth
> > something in itself. It must not be destroyed, because that
> > destroys what transhumanism strives to preserve and enhance.
>
> Yes, of course, and if my previous note is read carefully, it
> should seem apparent that my desire is to maximize "life".
> Whether the proposed strategy to maximize this is *really*
> optimal is certainly open to significant attack.

I would say most of the posts have pointed out that it is highly
suboptimal, nearly pessimal.
 
> However, the discussion of the fastest path to transhumanism
> or the broadest path to transhumanism is not something that
> should be cast aside due to some unsavory bumps along the road.

Maybe you should also ask yourself what kind of transhumanity you want
to achieve? A transhumanity that emerges out of sacrificing other
intelligent beings is in my book a Blight.

> Anders:
> > That some humans are not helpful in achieving transhumanity doesn't
> > mean their existence is worthless, and if they are an active
> > hindrance to your (or anybody else's) plans, destroying their lives
> > is always wrong as long as they do not initiate force against you.
>
> Ah, but the key perspective is "as long as they do not initiate
> force against you". We are past that point. They are initiating
> force against us in an unextropic perspective that seems to involve
> the support of brain-washed individuals in Afghanistan and Pakistan.

How have oppressed Afghan women initiated force against you? By using
that kind of flexible interpretation of initiation of force you can
justify anything - if an American murders a Swede, I would be justified
in killing you. But you go further, and suggest that because in your
estimation there is a high likelihood of force-initiating individuals
appearing in a region, you are justified in attacking it. Guess who is
initiating force now, and who by the same logic should expect to see
his own group attacked?

What really disturbs me is that you have not responded to my statement
of what transhumanism is about. You don't seem to be willing to deal
with this as an ethical debate, but rather as some kind of cost-benefit
calculation. But if you don't understand what your values are, what you
are trying to achieve and how this is valuable, then the calculation
can't provide an optimal answer. I get the worrying feeling that you
have just assumed "minimize number of deaths before singularity" to be
the value and optimization goal, and completely ignored issues of what
kinds of *lives* there will be. If you sincerely mean that the only
thing that matters is the number of people, then you have joined ranks
with a certain guy who said "a few deaths is a tragedy, a million a
statistic" - and your philosophy is likely to lead in the same direction
as his.

> Anders:
> > The logical mistake is to ignore the full consequences of your
> > idea, and just look at the "desirable" first-order consequences.
> > What you miss is that if this form of "practical genocide" is
> > used, then the likelihood of other forms of "practical genocide"
> > becomes far higher and harder to ethically suppress, and
> > resistance to the US or any other genocidal group is likely to
> > become *far* more violent.
>
> Anders, *I* did not, and it would appear the U.S. military
> officials (at least currently) are not, ignoring the potential
> consequences of bombing Islamic states. My statements were
> carefully made based on estimates that (a) a backlash would
> develop; (b) a response to such a backlash would be moderately
> effective; (c) technologies would develop that would make the
> entire response vs. counter-response irrelevant.

OK, playing along with this game: what about foreign policy? By behaving
like this, the US would demonstrate to all other nations that it is
dangerous and willing to use force to achieve its goals even when the
victim has not attacked it. The logical conclusion for everybody else is
to view the US as the new rogue nation and start preparing to deal with
it. Even if China, Russia, the EU and Australia might not share the same
pre-emptive idea, they would see where the threat lies and act
accordingly. Hmm, suddenly the concept of MAD begins to rear its ugly
head, doesn't it? And what happens when nano is being developed in this
kind of scenario? You can guess - nano-MAD. Say hello to global
ecophagy.

> Anders:
> > This post is going to haunt us all - it is in the archives, it has
> > been sent to hundreds of list participants. Be assured that in a
> > few years, when the current uproar has settled down, somebody is
> > going to drag it out and use it against extropianism in the media.
>
> I've heard this from Anders, and I've heard it from Eliezer
> (as well as the not small number of messages filling my personal
> mailbox)
>
> I must only say that I am shocked and amazed. If one cannot voice
> on the extropian list thoughts, ideas and opinions that one has
> for the maximization of the evolution of our society -- then we
> are doomed. We are implicitly stating that ideas exist that are
> not fit for public consumption or that we would prefer the veneer
> of public approval rather than the debate of rigorous, rational
> discussion.

No, Robert. I would attack your post just as vigorously even if you had
sent it to me encrypted for my eyes only, because the things it embodies
are so vile. That this is a public forum doesn't change anything.

You might have noted that I put my pessimistic prediction that it would
haunt us near the end, because compared to the other problems with it
this was just a minor point. The fact that you seem to be advocating an
ethics which, as I see it, is fundamentally incompatible with human
flourishing is far worse than whether Rifkin could gleefully use the
post. In fact, I am a bit dismayed to note how many critics of your post
have merely criticised the consequences of posting it rather than the
contents, as if they didn't care what was posted on this list or thought
within the transhumanist memespace as long as it did not touch the
outside world. That is a very shortsighted view.

One should be able to voice *anything* to the extropians list, including
the most vile hate propaganda - but it is our responsibility to rip such
evil ideas to shreds according to the ideals of transhumanism. If we do
not do that, and allow them to just fester, then transhumanism will soon
become a cesspool. It is better to have an open forum where radical
ideas are freely expressed but also criticized than a closed forum where
innovation is suppressed for security. This is exactly why we need an
open society, too.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

