From: D.den Otter (neosapient@geocities.com)
Date: Sun Jan 09 2000 - 17:57:13 MST
----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> "D.den Otter" wrote:
> >
> > How can you be so sure that this won't be the case? In the
> > absence of hard data either way, we must assume a 50% chance
> > of AIs causing our extinction.
>
> That's good enough for me! What changed your mind?
Oh, I haven't become more optimistic about the whole thing,
if that's what you mean. Personally I think that the chance
that the SI(s) will kill us is much greater than 50%, but for
reasons of practicality it is better to stick to less
controversial (more conservative) estimates in a discussion.
A 50% chance of extinction is bad enough, mind you.
> > Oh golly, we've just been
> > reduced to a second-rate life form. We no longer control
> > our planet. We're at the mercy of hyperintelligent machines.
> > Yeah, that's something to be excited about...
>
> Sooner or later, we're gonna either toast the planet, or come up against
> something smarter than we are. You know that. We've agreed on that.
> Your sole point of disagreement is that you believe that you'll be
> better off if *you're* the first Power. But Otter, that's silly.
Well, yes, of course it is. I'd love to be the first Power (who
wouldn't?), but it's hardly realistic. Hence the proposal for
an upload project, comparable to your AI project. And yes,
I know it's probably more difficult than your idea, but the
reward is proportionally greater too; if you win, you win for
real. You are *free*, a true god. Not some cowering creature
looking for SI handouts. There's a 50% chance that we'll
*survive* if and when AIs turn into SIs, but that doesn't mean
that this life will be *good*, nor that the SIs' "mercy" will be
infinite. They could still kill or torture you at any time, or leave
you forever stranded in some backwater simulation. The
only truly positive scenario, unconditional uplifting, is
just one of many possibilities within that 50%.
> If
> transforming me into a Power might obliterate my tendency to care about
> the welfare of others, it has an equal chance of obliterating my
> tendency to care about myself.
Theoretically yes, though I still think that survival ranks well
above altruism; the latter has only evolved because it helps to
keep a creature alive. Altruism is the auxiliary of an auxiliary:
it serves survival, which in turn serves the supergoal ("pleasure",
for example). If an SI
has goals, and needs to influence the world around it or at least
experience mental states to achieve those goals, then it needs to
be alive, i.e. "selfish". Altruism, on the other hand, is just
something that's useful when there are peers around, and it
becomes utterly obsolete once an SI gets so far ahead of the
competition that it no longer poses an immediate threat.
> If someone else, on becoming a Power,
> might destroy you; then you yourself, on becoming a Power, might
> overwrite yourself with some type of optimized being or mechanism. You
> probably wouldn't care enough to preserve any kind of informational or
> even computational continuity.
Maybe, but I want to make that decision myself. I could stop
caring about preserving the illusion of self, of course, but why
should I care about anything else then? Why care about other
people if your own life is meaningless? If one life is "zero", then
*all* lives are "zero". Why do you want to save the world, your
grandparents, etc., if you don't care about personal identity?
> Both of these theories - unaltruism and
> unselfishness - are equally plausible, and learning that either one was
> the case would greatly increase the probability of the other.
>
> So, given that there's also a 50% chance that the Powers are nice guys,
> or that no objective morality exists and Powers are freely programmable;
> and given also that if the Powers *aren't* nice guys, then being the
> Power-seed probably doesn't help; and given that your chance of winning
> a competition to personally become the Power-seed is far more tenuous
> than the chance of cooperatively writing an AI; and given that if we
> *don't* create Powers, we're gonna get wiped out by a nanowar; and given
> the fact that uploading is advanced drextech that comes after the
> creation of nanoweapons, while AI can be run on IBM's Blue Gene;
It could be years, even decades, before nanoweapons actually
get used in a full-scale war. Viable space colonies could perhaps
be developed before things get out of hand, and neurohacking might
beat both AI and nano if given proper attention, for example. AI
isn't the only option; it's just a relatively "easy" way out. It
has a strong element of defeatism in it, IMO.
> and
> given your admitted 50% chance that the Other Side of Dawn is a really
> nice place to live, and that everyone can become Powers -
A 50% chance that we'll *survive* (at least initially) is not
the same as a 50% chance of paradise (that would be 25% at
best).
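(To spell out the arithmetic behind that "at best": assuming the
50% survival figure, and granting paradise no better than even
odds given that we survive at all,
P(paradise) = P(survive) x P(paradise | survive) <= 0.5 x 0.5 = 0.25.)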
> In what sense is AI *not* something to be excited about?
Oh, I just meant *positively* excited, of course. But I have
to say that when it comes to dying, "death by Singularity" is
a top choice.