From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Mon Dec 07 1998 - 18:15:26 MST
Eliezer S. Yudkowsky wrote:
> This, in my opinion, is exactly the wrong answer. (See particularly the
> "Prime Directive of AI" in "Coding a Transhuman AI".) But think about what
> you just said. First you say that sufficient intelligence should be able to
> recognize good and bad. Then you say that we should build in a moral system
> with a particular set of values.
Yes. What I mean is this: whatever moral system we have (whether we
have defined it explicitly or it is only implicitly manifested in our
use of such moral terms as "good", "right" etc.), a superintelligence
would be able to figure it out and to understand how it was intended
to be applied to particular cases. So what we have to do is (1)
define a moral system that would place a great value on our own
survival (as well as on our gradual metamorphoses into posthumans);
and (2) give the superintelligence a strong desire to live by this
moral system.
> What if we get it wrong?
Then possibly we're fucked.
> Do you really know all the logical
> consequences of placing a large value on human survival? Would you care to
> define "human" for me? Oops! Thanks to your overly rigid definition, you
> will live for billions and trillions and googolplexes of years, prohibited
> from uploading, prohibited even from ameliorating your own boredom, endlessly
> screaming, until the soul burns out of your mind, after which you will
> continue to scream.
I think the risk of this happening is pretty slim, and it can be made
smaller by building smart safeguards into the moral system. For
example, rather than rigidly prescribing a certain treatment for
humans, we could add a clause allowing for democratic decisions by
humans or human descendants to overrule other laws. I bet you could
think of some good safety measures if you put your mind to it.
> If you can synchronize everyone's intelligence
> enhancement perfectly, then eventually we'll probably coalesce into a
> singleton indistinguishable from that resulting from an AI Transcend.
That could easily happen even if we don't synchronize everyone's
intelligence enhancements.
> Look, these forces are going to a particular place, and they are way, way,
> waaaaaayyy too big for any of us to divert. Think of the Singularity as this
> titanic, three-billion-ton truck heading for us. We can't stop it, but I
> suppose we could manage to get run over trying to slow it down.
To use your analogy, what I am proposing is that we try to latch on
to it somehow - the earlier the better, since it gets harder as it
picks up speed - and try to get into the driver's seat. Then we can
drive it safely to where we want it to go.
> > but let's not go into that
> > now.
>
> Let's. Please. Now.
How to control a superintelligence? An interesting topic. I hope to
write a paper on that during the Christmas holiday.
> > Plus: whether it's moral or not, we would want to make
> > sure that they are kind to us humans and allow us to upload.
>
> No, we would NOT want to make sure of that. It would be immoral. Every bit
> as immoral as torturing little children to death, but with a much higher
> certainty of evil.
I suppose we have to agree to disagree on that one. But even if it
were slightly immoral to place a premium on human survival, I still
think we should do it - simply because we want to survive. You are
asking too much if you want us to cold-bloodedly engineer our own
martyrdom. I would not vote for that policy.
Sure, a few humans who refuse to upload might be inefficient and a
waste of resources, but there are enough resources in the universe
that we can afford that. Let's be generous. Even from your moral
point of view, it would seem a wise moral insurance policy - for what
if human life turned out to have great moral value after all, and we
allowed a selfish superintelligence to destroy it? The outcome would
be hundreds of times worse than what Hitler did.
Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics