Re: Singularity: Individual, Borg, Death?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Dec 04 1998 - 12:28:56 MST


Nick Bostrom wrote:
>
> I think Eliezer's interesting argument is unsound, because I think
> one of its premises (RA1) is false.
>
> Eliezer wrote:
>
> > Rational Assumptions:
> > RA1. I don't know what the objective morality is and neither do you.
> > This distinguishes it from past philosophies which have attempted to "prove"
> > their arguments using elaborate and spurious logic. One does not "prove" a
> > morality; one assigns probabilities.
> [snip]
> > LA1 and RA1, called "Externalism", are not part of the Singularity logic per
> > se; these are simply the assumptions required to rationally debate
> > morality.
>
> I don't see why a rational debate about morality would be impossible
> if you or I knew the "objective morality". People often
> rationally debate issues even when one party already knows where the
> truth lies.

But is that party _certain_? Probably not. People don't learn truth by being
certain. Perhaps I should have phrased RA1 differently, to distinguish
between absolute knowledge and probabilistic knowledge. I stick by my guns,
however, in that I have yet to observe rational debate between two parties
absolutely convinced of what they're saying, or even with just one party
absolutely convinced. Even mathematicians, about the only people on the
planet with real access to any kind of absolute truth, don't violate this
rule. If someone is arguing with the mathematician, either it's another
mathematician and the proof is so complex that both are uncertain, or it's
some idiot who wants to prove he can trisect an angle. Most religious debate,
or Plato and Aristotle's travesties of philosophy, illustrate very clearly
what happens when one party tries to "prove" things using spurious logic
rather than probabilistic reasoning.

For two people to argue rationally, it is necessary for each to accept the
possibility of being wrong, or they won't listen. Maybe this doesn't hold on
the other side of dawn, but it surely holds true for the human race.

And note that if one of us stumbled across a hydrogen-band SETI message
containing the Meaning of Life, my logic goes out the window. So you're
certainly correct in that if RA1 is wrong, it takes my argument with it.

> As for RA1, I think it could well be argued that it is false. We may
> not know in detail and with certainty what the moral facts are (so
> sure we'd want to assign probabilities), but it doesn't follow that
> we know nothing about them at all. In fact, probably all the people
> on this list know that it is wrong to torture innocent people for a
> small amount of fun. We could no doubt write down a long list of
> moral statements that we would all agree are true. Do you mean that
> we all suffer from a huge illusion, and that we are all totally
> mistaken in believing these moral propositions?

First of all, I don't know that it is wrong to torture innocent people for a
small amount of fun. It is, with at least 90% probability, not right. I
can't be at all certain that "2+2 = 4", but I can be almost totally certain
that "2+2 does not uniquely equal 83.47". I don't become really convinced of
something because evidence is presented for it, but only when I can't come up
with any plausible alternative. Now, it may be that intensity of experience
is the ultimate good, or that the ultimate good is some kind of thought that
pain stimulates, or that pain is morally null but the joy the torturer
experiences is not, or even that all things that happen are good and chosen
by God. Still, torturing innocents for fun seems unlikely enough to be right,
likely enough to be wrong, and sufficiently in violation of social cooperation
to boot, that I would cheerfully shoot such a torturer.
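
To make "one assigns probabilities" concrete, here is a minimal sketch in
Python; the hypotheses and the numbers are invented purely for illustration,
not a distribution I am defending:

  # Illustrative only: the hypotheses and numbers are made up for this sketch.
  moral_hypotheses = {
      "torturing innocents for fun is wrong": 0.90,
      "intensity of experience is the ultimate good": 0.04,
      "pain is morally null, but the torturer's joy is not": 0.03,
      "all things that happen are good and chosen by God": 0.02,
      "some alternative nobody has imagined yet": 0.01,
  }

  # Beliefs must sum to 1, and nothing gets probability 1.0 -- that is RA1.
  assert abs(sum(moral_hypotheses.values()) - 1.0) < 1e-9
  certain = [h for h, p in moral_hypotheses.items() if p >= 1.0]
  print(certain)  # [] -- no absolute certainty, only probabilities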

> Now, if we accept the position that maybe we already know quite a few
> simple moral truths, then your chain of reasoning is broken. No
> longer is it clear that the morally preferred action is to cause a
> singularity no matter what.

I'm not too sure of this. The hedonistic qualia chain, which states that the
qualia of pleasure and pain are physically real positive-value and
negative-value goals, is sufficiently in correspondence with the general
postulates used by society, and hence with the game theory of social
cooperation, that I would not grossly violate it simply to get to the
Singularity. For me, the ends cannot justify the means. Any being short of
superintelligence is still bound by game theory. There is absolutely no way
in which I consider myself responsible for an ethical superintelligence's
actions, however, even if I personally created the SI and successfully
predicted the actions.

> For example, if it is morally preferred
> that the people who are currently alive get the chance to survive
> into the postsingularity world, then we would have to take this
> desideratum into account when deciding when and how hard to push for
> the singularity.

Not at all! If that is really and truly and objectively the moral thing to
do, then we can rely on the Post-Singularity Entities to be bound by the same
reasoning. If the reasoning is wrong, the PSEs won't be bound by it. If the
PSEs aren't bound by morality, we have a REAL problem, but I don't see any way
of finding this out short of trying it. Or did you mean that we should push
faster and harder for the Singularity, given that 150,000 people die every day?

> In the hypothetical case where we could dramatically
> increase the chances that we and all other humans would survive, by
> paying the relatively small price of postponing the singularity one
> year, then I feel pretty sure that the morally right thing to do
> would be to wait one year.

For me, it would depend on how it affected the chance of Singularity. I don't
morally care when the Singularity happens, as long as it's in the next
thousand years or so. After all, it's been fifteen billion years already; the
tradeoff between time and probability is all in favor of probability. From my
understanding of causality, "urgency" is quite likely to be a human
perversion. So why am I in such a tearing hurry? Because the longer the
Singularity takes, the more likely that humanity will wipe itself out. I
won't go so far as to say that I'm in a hurry for your sake, not the
Singularity's - although I would personally prefer that my grandparents make
it - but delay hurts everyone.

> In reality there could well be some kind of tradeoff like that.

There's a far better chance that delay makes things much, much worse. Delay
appeals to the uncertain, but that doesn't mean it's not fatal.
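
A back-of-the-envelope sketch of that tradeoff, again in Python, with every
number invented purely to show the shape of the argument rather than to
estimate anything:

  def score(p_singularity, years_of_delay, cost_per_year=1e-6):
      """Crude score for a strategy: probability of a successful Singularity,
      minus a comparatively tiny penalty per year of waiting."""
      return p_singularity - years_of_delay * cost_per_year

  # If a year of delay genuinely buys probability, the delay wins: the time
  # term is negligible next to the probability term ("all in favor of
  # probability").
  print(score(0.51, 1) > score(0.50, 0))   # True

  # But if delay lets humanity wipe itself out first, the probability drops
  # and the same arithmetic says hurry.
  print(score(0.45, 1) > score(0.50, 0))   # False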

> It's
> good if superintelligent posthumans are created, but it's also good
> if *we* get to become such beings. And that can in some cases impose
> moral obligations on us to make sure that we ourselves can survive
> the singularity.

Why not leave the moral obligations to the SIs, rather than trying (futilely
and fatally) to impose your moral guesses on them?

> You might say that our human morality - our desire that *we* survive
> - is an arbitrary effect of our evolutionary history. Maybe so, but I
> don't see the relevance. If our morality is in that sense arbitrary,
> so what? You could say that the laws of physics are arbitrary, but
> that does not make them any less real. Don't forget that "moral" and
> "good" are words in a human language. It's not surprising, then, if
> their meaning is also in some way connected to human concepts and
> anthropocentric concerns.

I don't just think it's arbitrary, I fear that it is _wrong_.

Perform reality<->morality inversion, move back 500 years:

"You might say that our human reality - our opinion that the Earth is flat -
is an arbitrary effect of our evolutionary history. Maybe so, but I don't see
the relevance. If our reality is in that sense arbitrary, so what? You could
say that human sacrifice is arbitrary, but that does not make it any less
good. Don't forget that "real" and "true" are words in a human language.
It's not surprising, then, if their meaning is also in some way connected to
human concepts and anthropocentric concerns."

Same objections.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

