Re: Posthuman mind control (was RE: FAQ Additions)

From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Thu Mar 04 1999 - 16:11:03 MST


"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:

> Nick Bostrom wrote:
> >
> > Eliezer S. Yudkowsky wrote:
> >
> > > Are you arguing that, say, someone who was brought up as a New Age
> > > believer and switches to being an agnostic is not making a rational choice?
> >
> > Believing in the healing powers of crystals is not a value, it's a
> > mistaken factual opinion. The New Ager, provided he has the same data
> > as we do, will be rational to give up his New Agey beliefs.
>
> I invoke the banner of Crockerism to communicate, and humbly beg your
> tolerance: I think you may be conforming the facts to the theory.
>
> On the whole, New Agers are not people who form mistaken factual
> opinions about the healing powers of crystals. You are, shall I say,
> extropomorphizing? These people do not believe their tenets as the
> simplest explanation for incorrectly reported facts; they believe
> because Crystals are the Manifestation of the New Age of Warmth and Love
> and Kindness which shall Overcome the Cold Logic of Male-Dominated
> Science.

If the New Ager believes that crystals have lots of healing power,
and I believe they don't, then we have a factual disagreement. At
least one of us is factually mistaken; I claim it is him.

> > It's hard to give a precise definition of fundamental value, just as
> > it is hard to give a precise definition of what it means to believe
> > in a proposition.
>
> ?? There are a few kinds of cognitive objects in the mind associated
> with "belief", including the form of the proposition itself, [snip]

But what does it mean for a proposition to "be in the mind"? It's
a big philosophical question (see e.g. W.V.O. Quine). But let's
not get into that.

> > But let me try to explain by giving a simplified
> > example. Suppose RatioBot is a robot that moves around in a finite
> > two-dimensional universe (a computer screen). RatioBot contains two
> > components: (1) a long list, where each line contains a description
> > of a possible state of the universe together with a real number (that
> > state's "value") [snip] On the other
> > hand, the values expressed by the list (1) could be said to be
> > fundamental.
>
> The human mind doesn't work that way. [snip]

I certainly agree with that.
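
To make the RatioBot picture concrete, here is a toy sketch of what I
have in mind. The grid size, the numbers in the table, and the
move-selection step are purely illustrative; component (2) was snipped
above, so that part is only a guess at one way the list might be used.

# A toy sketch of RatioBot. Component (1) is the list assigning a real
# "value" to each possible state of a small finite universe (here just
# the bot's position on a 5x5 grid, with arbitrary illustrative numbers).
# The move-selection function below is only a guess at how a second
# component might use that list.

from itertools import product

GRID = 5
value_table = {(x, y): float(x + y) for x, y in product(range(GRID), repeat=2)}

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def best_move(state):
    """Pick the move whose resulting state has the highest listed value."""
    reachable = [(state[0] + dx, state[1] + dy) for dx, dy in MOVES]
    reachable = [s for s in reachable if s in value_table]
    return max(reachable, key=lambda s: value_table[s])

state = (0, 0)
for _ in range(10):
    state = best_move(state)
print(state)   # the bot drifts toward the highest-valued state, (4, 4)

The point is just that the numbers in value_table play the role of
fundamental values: the bot never questions them, it only acts on them.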

> > I think I know approximately what my fundamental values are: I want
> > everybody to have the chance to prosper, to be healthy and happy, to
> > develop and mature, and to live as long as they want in a physically
> > youthful and vigorous state, free to experience states of
> > consciousness deeper, clearer and more sublime and blissful than
> > anything heard of before; to transform themselves into new kinds of
> > entities and to explore new real and artificial realities, equipped
> > with intellects incommensurably more encompassing than any human
> > brain, and with much richer emotional sensibilities. I want very much
> > that everybody or as many as possible get a chance to do this.
> > However, if I absolutely had to make a choice I would rather give
> > this to my friends and those I love (and myself of course) than to
> > people I haven't met, and I would (other things equal) prefer to give
> > it to people now existing than only to potential future people.
>
> Do you think that these fundamental values are *true*?

I don't understand that sentence.

> That they are
> better than certain other sets of fundamental values?

Better for what? (I do think they are morally far superior to, say,
the values of Hitler, if that's what you are asking.)

> That the
> proposition "these values are achievable and non-self-contradictory" is true?

I think the values can be achieved if enough people share them and
are smart and careful enough. I don't think there is a
"contradiction" between any of the values I mentioned, although cases
could in principle arise where they could not all be completely
achieved, so that a tradeoff would have to be made. (A more accurate
description of my fundamental values would include some indication of
how I wanted such a tradeoff to be made in different possible
situations.)

> If a Power poofed into existence and told you that all Powers had the
> same set of values, and that it was exactly identical to your stated set
> EXCEPT that "blissful" (as opposed to "happy") wasn't on the list; would
> you change your fundamental goals, or would you stick your fingers in
> your ears and hum as loud as you could because changing your beliefs
> would interfere with the "blissful" goal?

Maybe, in such a situation, I would conclude that I had misdescribed
what I had in mind. It is not so easy to say exactly what one would
want, if one could have anything one wished for. It requires some
considerable introspection, and mistakes are always possible.

> > With human-level AIs, unless they have a very clear and unambiguous
> > value-structure, it could perhaps happen. That's why we need to be on
> > our guard against unexpected consequences.
>
> With seed, human, and transhuman AIs, it will happen no matter what we do
> to prevent it.

That's what you think. Others (e.g. Moravec) have a different
opinion, and I think it is an open question. My feeling is that by
being careful enough we can probably avoid making mistakes that
would lead the AIs to attack and exterminate us humans.

Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics


