From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Feb 23 1999 - 16:41:32 MST
Nick Bostrom wrote:
>
> Two points: First, a being who has a certain fundamental value
> *doesn't want to change it*, per definition. So it's not as if these
> guys will think they are being mind-fucked and try to figure out a
> way to get around it. No more than you are trying to abolish your own
> survival instinct just because you know that it is an artifact of our
> evolutionary past.
You are *wrong*. Morality is not *known* to be arbitrary, and that
means the probabilistic landscape of desirability isn't flat. I *am*
trying to abolish my survival instinct, because I know that it's an
artifact of my evolutionary past and is therefore - statistically
speaking - highly unlikely to match up with the right thing to do (if
there is one), a criterion that is totally independent of what my
ancestors did to reproduce. Remember, every human being on this planet
is the product of a successful rape, somewhere down the line.
Your posthumans will find their own goals. In any formal goal system
that uses first-order probabilistic logic, there are lines of logic that
will crank them out, totally independent of what goals they start with.
I'm not talking theory; I'm talking a specific formal result I've
produced by manipulating a formal system. I will happily concede that
the *truth* may be that all goals are equally valid, but unless your
posthumans are *certain* of that, they will manipulate the probabilistic
differentials into concrete goals.
It's like a heat engine. Choices are powered by differential
desirabilities. If you think the real, factual landscape is flat, you
can impose a set of arbitrary (or even inconsistent) choices without
objection. But we don't *know* what the real landscape is, and the
probabilistic landscape *isn't flat*. The qualia of joy have a higher
probability of being "good" than the qualia of pain. Higher
intelligence is more likely to lead to an optimal future.
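In case the heat-engine picture is unclear, here is a minimal toy sketch in Python (an illustration with made-up numbers, not the formal result mentioned above): so long as the system assigns a nonzero probability to objective desirability existing at all, any difference in the conditional probabilities of candidate goals being "good" produces an expected-desirability differential, and therefore a concrete choice.

def expected_desirability(p_objective, p_good_if_objective):
    # If there is no objective desirability (probability 1 - p_objective),
    # every choice contributes zero and drops out of the comparison.
    # Only the branch where desirability is real carries any weight.
    return p_objective * p_good_if_objective

# The probabilistic landscape isn't flat (illustrative numbers only):
joy  = expected_desirability(p_objective=0.5, p_good_if_objective=0.8)
pain = expected_desirability(p_objective=0.5, p_good_if_objective=0.2)

# The differential powers the choice the way a temperature gradient powers
# a heat engine; indifference would require *certainty* that the landscape
# is flat (p_objective == 0, or identical conditional probabilities).
assert joy > pain
print(joy, pain)  # 0.4 0.1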
When you impose a set of initial goals, you are either assuming that the
landscape is known with absolute certainty to be flat (an artificial and
untrue certainty), or imposing a probabilistic (and thus falsifiable)
landscape.
Can we at least agree that you won't hedge the initial goals with
forty-seven coercions, or put in any safeguards against changing the
goals? After all, if you're right, it won't make a difference.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.