Re: Posthuman mind control (was RE: FAQ Additions)

From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Mon Mar 01 1999 - 18:18:37 MST


Eliezer S. Yudkowsky wrote:

> > But I don't think we deliberately change our fundamental values.
> > Non-fundamental values we may change, and the criteria are then our
> > more fundamental values. Fundamental values can change too, but they
> > are not deliberately (rationally) changed (except in the mind-scan
> > situation I mentioned in an earlier message).
>
> Well, either I've misunderstood you, or you're simply wrong. We humans
> switch fundamental values all the time. It happens every time someone
> changes a religion. If you're going to argue that these weren't the
> true "fundamental" values, then the AI's "make people happy" won't be a
> fundamental value either.

My answer to this is a superposition of three points: (1) I
explicitly allowed that fundamental values could change; only,
except in the mind-scan case, the change wouldn't be rationally
brought about. For example, at puberty people's values may
change, but not because of a rational choice they made. (2) Just
because somebody calls a certain value fundamental doesn't mean
it actually is fundamental. Ideological values especially are,
for humans, often better described as their official policies
than as their true driving force. Otherwise everybody would be
perfectly willing to suffer torment and death for the sake of
their professed "fundamental values"; since they don't, those
values are either not fundamental or only part of their
fundamental values. (3) With imperfectly rational beings (such
as humans) there might be conflicts between what they think are
their fundamental values. When they discover that this is the
case, they have to redefine their fundamental values as the
preferred weighted sum of the conflicting values (which thereby
turned out not to be truly fundamental after all).
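
To make the weighted-sum idea concrete, here is a minimal sketch in
Python (the outcome keys, valuation functions and weights are purely
illustrative assumptions, not a proposal for how an agent is or
should be built):

    # Illustrative only: when two professed "fundamental" values
    # conflict, the value the agent actually acts on is a weighted
    # combination of them; the weights are revealed by its choices.
    def effective_value(outcome, valuations, weights):
        """Score an outcome by the weighted sum of conflicting values."""
        return sum(w * v(outcome) for v, w in zip(valuations, weights))

    # Example: an agent professing both comfort and ideology which,
    # when they conflict, trades them off 80/20 in favour of comfort.
    comfort = lambda o: o.get("comfort", 0)
    ideology = lambda o: o.get("ideology", 0)
    print(effective_value({"comfort": 1, "ideology": -1},
                          [comfort, ideology], [0.8, 0.2]))  # prints 0.6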

> My fundamental values have changed from "eat and sleep and survive" to
> "serve humanity" to "bring about a Singularity" to "do what is right",
> where I presently reside.
>
> I think that it will be a considerable amount of time before an AI is
> pressed by logic to change its fundamental values from "do what is
> right". But anything more specific certainly isn't a fundamental value.

"Do what is right" sounds almost like "Do what is the best thing to
do", which is entirely vacuous.

> > That depends. If selection pressures lead to the evolution of AIs
>
> What selection pressures? Who'd be dumb enough to create an AI wanting
> to survive and reproduce, and, above all, *compete* with its children?

I suspect there would be many humans who would do exactly that. Even
if none did, such a mindset could still evolve if there were
heritable variation.

 
> > with selfish values that are indifferent to human welfare, and the
> > AIs as a result go about annihilating the human species and stealing
> > our resources, then I would say emphatically NO, we have a right to
> > expect more.
>
> Absolutely. I do not intend to let humanity be wiped out by a bunch of
> selfish, badly programmed <Hs

I'm glad to hear that. But does the same hold if we flip the
inequality sign? I don't want to be wiped out by >Hs either.

Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics


