Re: Singularity: Individual, Borg, Death?

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Sun Dec 06 1998 - 12:54:16 MST


Eliezer S. Yudkowsky wrote:
>In the space of choices, the morality Hamiltonian, we have no gate. The space
>appears homogenous and flat, all probabilities and values equal to zero.
>There is no way to experimentally test the value of any choice. We have no
>perceptions. It is revolutionary enough to suggest that there is a single
>point called the truth, that the arbitrariness is an illusion. But that first
>step taken, the second task is to open up a gate, to unflatten moral space and
>seek out the singularity.
>
>I don't know how to do that. I have good reason to think it impossible to the
>human cognitive architecture. So - it's time to step beyond.

This, then, is what I'm trying to ask you: What makes you think that a
super intelligence would be able to unflatten the moral opinion space?

I see two real problems here. First, it seems to me that the super
intelligence itself would have its own trajectory, and there's no reason to
think that just because it's super intelligent it would step out of its own
trajectory and into the right one. Hofstadter, in his discussion of Jumping
Out of the System, made the point that you CAN'T fully jump out of the system
in a rational way. If you create rules for breaking rules, then you're just
following a different set of rules. And these new rules, as you point out,
must necessarily be evaluated from within the super intelligence's own
trajectory, which may, in fact, divert it from the "correct" moral opinion.

The second and larger problem may be that this fundamental "truth
singularity" does not exist when we're talking about morality; in other
words, ~LA1. In your justification of LA1, you claim that under ~LA1, life
would be
meaningless and no choice would be any better than any other. This is
obviously not the case: your trajectory itself gives you some direction in
this matter, and many people live lives which they consider to be
meaningful without even making an attempt to jump out of their system.
(Sad, IMO, but possible.) Doing what you evaluate to be the best thing
within your own trajectory has a rather obvious technical phrase in ethical
philosophy: it is "pursuing your own self-interests." (Don't take this
term too colloquially; it doesn't mean being selfish. Mother Teresa was
also pursuing her self-interests as the term is defined here.)

At that point, you can begin to discuss the relative merits of egoism vs.
utilitarianism or rule-based utilitarianism, etc. And I wouldn't doubt
that a super intelligence would be able to answer questions like these far
better than we could; that a super intelligent being could even be defined
as being qualitatively better at pursuing its own self-interests. But that
isn't exactly what you're looking for, is it?

Essentially, why wouldn't the super intelligence just follow its own
self-interests? Are we sure that it's even possible for the super intelligence
to do otherwise? I'm sure not.

-Dan

        -GIVE ME IMMORTALITY OR GIVE ME DEATH-
