From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Sep 17 2001 - 12:48:19 MDT
On Mon, Sep 17, 2001 at 11:24:03AM -0700, Mark Walker wrote:
>
> > The ethical [objection] is the most serious: you assume that human
> > lives can have
> > negative value, and do an essentially utilitarian calculation not
> > of happiness, but simply of utility towards a certain goal. Humans
> > not promoting this goal are a waste of resources and a potential
> > threat, so eliminate them.
>
> First off, I don't see where Robert assumes that humans are a waste of
> resources; rather, he seems to be asking us to trade off some lives now
> for many more lives in the future. Furthermore, Anders claims that Robert
> has made an
> ethical mistake, but this is a controversial claim in ethics. Robert assumes
> a consequentialist position, i.e., he assumes that the right act here is the
> one that has the best consequence, namely, the one that saves the most lives
> in the end. Anders, as far as I can tell, expresses the deontological view
> which says that some actions are morally obligatory regardless of their
> consequences:
Robert's mistake is not that he reached the wrong ethical conclusion, but
that his reasoning was essentially non-ethical.
I regard myself as a consequentialist (I could never stomach
deontology). But I look not just at the first-order consequences but
also at the second and higher orders: what matters is the entire state
of the world after an act, and how it will likely develop. From this
form of consequentialism a kind of deontology emerges, e.g. killing
non-initiators of force has a very negative and robust expectation value
in the long run, so we can express this in the statement that killing
non-initiators is wrong. Even if we re-do this derivation for Robert's
scenario, we end up with the same result.
> With this distinction we can see that Anders may have been a bit swift
> in pulling the "fascist card":
>
> > A transhumanism not based on this core does not deserve the
> > humanism part of the name. And a transhumanism that bases itself
> > on the utilitarian approach of using humans as means to an end
> > rather than ends in themselves becomes the ideological twin of
> > fascism, with the sole difference that the singularity takes the
> > place of the national community in the ideological structure.
> >
> To make this charge of an "ideological twin" stick, Anders would need to
> show at least (a) that transhumanism is necessarily deontological in
> structure, and (b) that the consequence Robert intended to be weighed is
> that of an ideal, namely the singularity, rather than the lives that the
> singularity will save. However, (a) is an open question in my mind and, as
> I've said, Robert's discussion seems predicated on the assumption that the
> singularity will save a great number of lives. That is, Anders
> misunderstands (or has not read carefully) the form of Robert's argument:
> the singularity is the means to the end of saving lives; sacrificing lives
> is not the means to the goal of the singularity.
What I was referring to here was not whether the detailed internal
ethical structure of fascism would be isomorphic with that of debased
transhumanism, but that once that line was crossed, the resulting
ideology would be for all practical purposes the same. And I did not use
the term fascism just to tar Robert (whom I deeply respect in other
areas) but because I really saw the similarity between a "humans as
means for the singularity" transhumanism and a "humans as means for the
national community" fascism. Even if Robert thinks the singularity is
good because it somehow limits dying, that is not very different from
the more practically minded fascist (and other collectivist) ideas that
view the "national community" as involving a better life for everybody.
What would your opinion be of a fascist transhumanist (not Robert)
advocating the singularity as a good that any sacrifice might be worth?
What would your *ethical* arguments against his position be?
The problem here is not that Robert is wrong or made an inflammatory
post, but that in the current list culture there are precious few
attempts to shore up the ethics of transhumanism in a way that makes it
workable and protects it against merging with unpalatable and dangerous
ideologies. If transhumanism has no generally accepted core ideas and no
ethics, what is there to distinguish it from (say) technological
fascism? That is why I increasingly think we have to abandon the term
transhumanism as a designator for ourselves and instead concentrate on
more well-defined systems such as extropianism.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:10:46 MST