From: Eric Watt Forste (arkuat@pobox.com)
Date: Tue Oct 29 1996 - 09:15:39 MST
Anders Sandberg wrote:
> As for extropian/transhuman sacred cows, there is of course the
> idea that things will get better in some sense in our future, and
> some see the Singularity or the utility of nanotech, uploading or
> AI as fundamental and unassailably true. Another dangerous sacred
> cow is the belief that we know what we are doing and are not a
> group of technophiles playing world saviours in our spare time.
The idea that things *will* get better in our future is of course
a variety of blind faith. Isn't that why we like to talk about
dynamic optimism instead? A mild take on dynamic optimism is that
it simply claims we are *certain* to fail to find solutions to our
problems if we give up seeking them. Dynamic
optimism is no guarantee that solutions will be found; it is simply
a motivating principle that keeps us seeking solutions (and saves
us from disasturbation).
The faith in the Singularity and the faith in some particular
orthodoxy of nanotech, uploading, or AI are all regularly barbecued
on this list. Generally, when one person here makes an assertion
of the form "Because of the inevitability of my-favorite-techno,
such and such is also inevitable," I soon see followup requests
asking for elucidation and defense of both the premise and the
deduction. For me personally, the Singularity signifies that the
future gets cloudier faster when we try to look into it from our
own time than it did for thinkers and futurists of the past. The
Singularity signifies nothing more to me than that. I've always
been more interested in the prospect of
the Diaspora than the prospect of the Singularity.
Even if AI is successful, unless we develop an ability to understand
and recreate human motivations, AI will only create a new race of
slaves that *could* be poisonous to our culture. They're writing
books on android epistemology now, but android ethics is still
stuck where Asimov left it: a simple apologia for the enslavement
of nonhumans. (An argument, I hope I need remind no one, which
could easily be extended to justify the enslavement of posthumans.
Don't laugh quite so fast.)
Nanotech seems plausible to me, but it's no sacred cow, because it
doesn't seem to be an unalloyed good. We are all aware of the
possibility of engineered viruses designed to attack not our
computers but *us*, and this possibility grows more serious with
each passing year. That's one motivator for us to work on these problems,
so that we'll be in a position to engineer a defense against such
things when one is called for. I liked Neal Stephenson's idea (in
THE DIAMOND AGE) that the notion of "defense in depth" will take
on new meaning in such an environment. It helped make me more aware
of the staggering complexity of the social changes that nanotech
will probably induce. But unlike Rich Artym, I don't conclude from
this that all my current theoretical apparatus is rendered impotent
and worthless in the face of such changes.
And as for uploading, while I'm familiar with the arguments that
uploading shouldn't (in principle) have any more disruptive effect
on me than drinking a cup of coffee, these fragile and evanescent
thought-experiments don't convince me. It seems to me that a
distinction lying at the heart of such gedankenexperiments, the
distinction between signal and noise, is very poorly understood.
Distinguishing signal from noise is a value-judgment passed on
"pieces" of information, and I have yet to find anyone who claims
to be able to explain how value-judgments are performed or how
this system might be optimized. (Meanwhile, I work at this problem
myself from time to time.)
Will nanotech do more good than harm? An open question. We might
want to exert ourselves to see that it does.
Will uploading be possible without inducing such profound psychological
transformation that we could not claim that the original human
uploader "lives on"? Another open question.
If "true AI" (whatever that means) proves possible, will human
beings and human power-institutions accept such entities as anything
more than slaves? What kind of conflicts could this give rise to?
(A study of history shows that conflict between self-motivating
systems such as humans--or AIs--is a greatly destructive force.)
Actually, I'm pretty optimistic about this one, but in this post
I'm trying to cook up some barbecue.
I hope you find it tasty.
The most important sacred cow Anders mentioned is the idea that we
know what we are doing. One of my favorite philosophers, William
Bartley, made the observation (and he emphasised it) that we never
know what we are saying and that we never know what we are doing.
All our words and all our actions will have far more unintended
consequences than intended ones. I don't know about the rest of
the list, but I'm quite confident that we have no idea what we're
doing. We ought to give it a good college try anyway.
Eric Watt Forste ++ mailto:arkuat@pobox.com ++ http://www.c2.org/~arkuat/