From: Michael S. Lorrey (retroman@turbont.net)
Date: Thu Oct 12 2000 - 13:00:12 MDT
Bryan Moss wrote:
>
> I think Lanier makes some good points that are difficult to
> find in what is essentially a very confused essay. The main
> thing we should take away from this is the questionable
> nature of AI as a goal, not because it is necessarily a bad
> goal but because, for me, it illuminates a bigger problem.
> After all, what is society but a fully autonomous system?
> And what external purpose does that system serve? For me
> Lanier's essay was an affirmation of my own doubts about
> transhumanism. Without a purpose we cannot architect our
> future, we need to discover the precise things we wish to
> preserve about ourselves and our society and only then can
> we go forward. In my mind it is not enough to say "I want
> to live forever"; "I" is simply shorthand, I want to know
> what it is about me that I should preserve and why I should
> preserve it. I think these problems run deep enough that
> we'll need more than polish.
This is a very good analysis. I hope you send your comments to Lanier to see
what he thinks.
From my own point of view, this goes hand in hand with what I was saying in
another thread, questioning the preservation of life as the highest ethic.
Without both a purpose and a standard of quality, life, society, and the
universe are pretty meaningless things. Modern liberal thinking typically posits
that there is no meaning or purpose to life, and uses that as the basis for its
moral relativism; this core conflict, I think, is why old and current-day issues
are repeatedly debated and argued over here and in society in general. Others
say we shouldn't talk about current-day issues, that this list is about making
the future, to which I reply that in order for us to make that future we need to
come to agreement over core facts and issues.
If we accept that honest, intelligent people will continue to disagree about
these things, then we need to face the question: given that some people, like
Lanier, may not share OUR purposes or values, should they be allowed to suppress
our expression of those purposes and values?
Essentially, what Lanier is advocating for the future is technofascism (which is
also the goal of those in the Turning Point Project), where one group gets to
use the government's monopoly on force to prevent other groups from attaining
their technological goals merely because those in power fear those goals, not
because there is any actual threat. His implicit acceptance of the idea that AI
technology will replace 'real humanity' reveals the fearful root of this
fascism: they cannot conceive that AI will BE us.
From my own conversations with many ordinary people about extropic and >H
concepts, it seems to be universally accepted that an intelligent machine is not
'human', cannot be, and never will or should be regarded as 'human', even if the
intelligence once resided in a human body. This innate xenophobia, I think, is
the prime driver of opposition to transhumanist technological trends.
Greg Bear's _Darwin's Radio_ portrays the level of this fear pretty well. While
his plot relied on endogenous retroviruses that trigger the next stage of human
evolution and are mistaken for epidemic diseases, the development of AI
technology will trigger a similar level of fear among the population. I feel
pretty secure predicting some sort of 'Butlerian Jihad' (see Dune) against AI
technology. The only question is how significant and widespread it will be.
This, I think, will be determined memetically in the media by a propaganda war,
a war that has already started and is now raging, though most transhumanists
remain blithely unaware of it.
Mike Lorrey