From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Dec 04 1998 - 11:29:16 MST
I assign a higher probability to my own self-awareness than to my Singularity
reasoning. I don't see how this implies a higher moral value, however.
(Moral-Should(X), Logical-Predict(X), and Observe-Is(X)) mean three different
things. You can phrase all of them as probabilistic statements, but they are
statements with different content, even though they are processed using the
same rules.
Paul Hughes wrote:
>
> "Eliezer S. Yudkowsky" wrote:
>
> > (By admitting that the Singularity has the capability to destroy your
> > consciousness, you are admitting that the Singularity should be treated as real.)
>
> I admit that anything is possible with a singularity, and that it's real in the same sense that
> a horizon portends that something exists beyond what I can see. I can accept the possibility
> that a singularity contains an intelligence vastly exceeding my own. This however is still not
> enough for me to give up my life for its evolution, any more than I would expect a retarded child
> to give up their life so I could evolve.
One of the most interesting events in history, from an ethical perspective, is
the extermination of the Neanderthals by the Cro-Magnons. If you fell back in
time and had the ability to create or prevent this event, which would you do?
I don't know of any question that is more morally ambiguous. I can't decide.
I won't decide unless I have to.
Now that may come as a surprise, given what I've been saying. But I never
proposed to personally exterminate humanity; I only said that I am willing to
accept that extermination may be the ethically correct thing to do. In
essence, I stated that I was willing to default to the opinions of
superintelligence and that I would not change this decision even if I knew it
would result in our death. But then, I would hardly go all disillusioned if
we could slowly and unhurriedly grow into Powers. Either way, *I'm* not
making the decision that the ends justify the means. If at all possible, such
decisions should be the sole province of superintelligence, given our dismal
record with such reasoning.
> > The problem with making recourse to infinite recursion as a source of ultimate
> > uncertainty is that it makes rational debate impossible - both externally and
> > internally. One can imagine Plato saying: "Well, that was a very impressive
> > demonstration that the world was composed of atoms, but insofar as we can't be
> > certain about anything, I choose to believe that the world is composed of four
> > elements, since I can observe air, water, earth, and fire more directly."
>
> Very well said, but that was not my intention. I was trying to suggest that no matter what
> reality is, my self-awareness has more validity than a logical concoction or a conference of
> scientists proving otherwise. No matter how astute your logic and where it takes you, you can
> not escape the unmistakable fact that it was "you" and "your" thoughts that got you there in the
> first place. To undermine the validity of your "self" (which I think you have done) puts doubt
> on the rest of your ruminations. Eliezer, my dear friend, I think you may have become the snake
> who's eating his own tail - an inverse Von Neumann Singularity! :-)
You are absolutely correct in that I assign a higher probability to my own
self-awareness than to my Singularity reasoning. I don't see how this implies
a higher moral value, however. (Should(X), Predict(X), and Is(X)) mean three
different things. You can phrase all of them as probabilistic statements, but
they are statements with different content, even though they use the same
logic. For that matter, I am more certain that WWII occurred than I am that
superintelligence is possible. But we all know which we prefer.
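A minimal sketch of that distinction, in Python purely for concreteness (the
priors are arbitrary illustrative numbers, and the Bayes-style update merely
stands in for "the same logic"): all three judgments can be stated as
probabilities and pushed through one shared update rule, yet each answers a
different question.

    # Illustrative only: three judgments about the same proposition X, all
    # expressed as probabilities and all updated by the same rule, yet
    # carrying different content (observational, predictive, moral).

    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        """Shared update rule, odds form: posterior odds = prior odds * LR."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Three distinct questions about some proposition X (numbers arbitrary):
    judgments = {
        "Is(X)":      0.95,  # how sure I am that X is actually the case
        "Predict(X)": 0.55,  # how likely X seems to come about
        "Should(X)":  0.40,  # how confident I am that X would be good
    }

    # The same machinery applies to every row, but a high value in one row
    # implies nothing about the others -- which is the point being made.
    for name, prior in judgments.items():
        posterior = bayes_update(prior, likelihood_ratio=2.0)
        print(f"{name:12s} prior={prior:.2f}  posterior={posterior:.2f}")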
> Assuming it does happen, when and how fast will depend
> largely on the human variable - even if this variable consists of only one person. It only takes
> one person to make the necessary breakthrough, and only one person to kill the person right
> before they make that breakthrough.
(I said "in the event of Singularity", intending it as roughly the same sort
of qualification.)
> Where do we differ? I have 99.9% certainty of my own awareness, with everything else at various
> levels of decreasing certainty determined, as you say, by their degree of arbitrariness.
> Whereas you seem to put the singularity at the highest level of certainty, with your own
> awareness somewhere down on the list of increasing arbitrariness.(?)
Again, I think you're confusing certainty and desire. I'd say that there's a
30%-80% chance of a Singularity, but even if it were only 30%, I'd still want
it. (Incidentally, I don't assign more than a 95% probability to my own
qualia. I still believe in "I think, therefore I am", but the reasoning has
gotten too complicated for me to be at all certain of it.)
Let me ask it another way: I assign a 98% probability that the Singularity is a
good thing, but only a 40% probability that human awareness is a good thing.
(Predicated on the existence of meaning and reality; in absolute terms the
reasoning is less probable than my qualia, about 92%.) Do you have 99.9%
certainty that your awareness is a good thing?
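To spell out the arithmetic behind that parenthetical, a minimal sketch in
Python, assuming the unstated premise probability is about 94% (that figure is
back-solved so the numbers line up; it does not appear above):

    # Illustrative arithmetic only.  The 0.94 figure for P(meaning and
    # reality) is an assumption back-solved from the other numbers.

    p_good_given_meaning = 0.98   # 98%, predicated on meaning and reality existing
    p_meaning            = 0.94   # assumed premise probability (not stated above)
    p_own_qualia         = 0.95   # probability assigned to my own qualia

    # If "the Singularity is a good thing" can only hold when meaning and
    # reality exist, the unconditional probability is the product of the two:
    p_good_absolute = p_good_given_meaning * p_meaning

    print(f"P(good | meaning) = {p_good_given_meaning:.2f}")
    print(f"P(good), absolute = {p_good_absolute:.2f}   # ~0.92, below P(own qualia)")
    print(f"P(own qualia)     = {p_own_qualia:.2f}")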
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.