Re: IA vs. AI was: longevity vs singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jul 28 1999 - 21:42:50 MDT


den Otter wrote:
>
> ----------
> > From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> > Has it occurred to you that if the first uploads *are* charitably
> > inclined, then it's *much* safer to be in the second wave? The first
> > uploads are likely to be more in the nature of suicide volunteers,
>
> The first uploads would no doubt be animals (of increasing complexity),
> followed by tests with humans (preferably people who don't grasp
> the full potential of being uploaded, for obvious reasons).

You have to be kidding. Short of grabbing random derelicts off the
street, there's simply no way you could do that. Or were you planning
to grab random derelicts off the street? And did I mention that by the
time you can do something like that, in secret, on one supercomputer,
much less run 6000 people on one computer and then upgrade them in
synchronization, what *I* will be doing with distributed.net will -
hell, even Eugene Leitl's genetic algorithm would Transcend at that
point. Douglas Lenat could just rerun EURISKO. *Spreadsheet* programs
would Transcend.

> Of course,
> the strictest possible safety measures should be observed at all
> times. Only when these tests are concluded with satisfactory results,
> should the (actual) synchronized upload procedure be executed.

If only the Straumers had stamped "Thank you for observing all safety
precautions" on that ancient archive! This seal of Solomon would have
kept the Blight from getting out.
You're dreaming.

Besides, there's more to uploading than scanning. Like, the process of
upgrading to Powerdom. How are you going to conduct those tests, hah?

> > especially when you consider that a rough, destructive, but adequate
> > scan is likely to come before a perfect, nondestructive scan.
>
> Well, IMHO the scan version of uploading is utterly useless from the
> individual's point of view (copy paradox and all that), and I certainly
> wouldn't waste any time on this method. In fact, I'm just as opposed
> to it as I am to conscious AI.

So. No backups.

> > You're staking an awful lot on the selfishness of superintelligences.
>
> I'm simply being realistic; when you realize how incredibly slow,
> predictable and messy humans will be compared to even an early
> SI, it is hard to imagine that it will bother helping us. Do we "respect"
> ants? Hardly. Add to that the fact that the SI either won't have our
> emotional (evolutionary) baggage to start with, or at least can modify
> it at will, and it becomes harder still to believe that it would be
> willing to keep humans around, let alone actively uplift them.

This holds just as true for uploaded humans, except that being all
"messy" we're even less likely to wind up in a state remotely resembling
what we started out with.

> Why would it want to do that? There is no *rational* reason to
> allow or create competition, and the SI would supposedly be the
> very pinnacle of rationality. It's absurd to think that a true SI would
> still run on the programming of tribal monkey-men, which are
> weak and imperfect and therefore forced to cooperate. That's why
> evolution has come up with things like altruism, bonding, honor
> and all the rest. Nice for monkey-men, but utterly useless
> for a supreme, near-omnipotent and fully self-contained SI. If
> it has a shred of reason in its bloated head, it will shed those
> vestigial handicaps asap, if it ever had them in the first place.
> And of course, we'd be next as we'd just be annoying microbes
> which can spawn competition. Competition means loss of control
> over resources, and a potential threat. Not good. *Control* is good.
> Total control is even better. The SI wouldn't rest before it had
> brought "everything" under its control, or die trying. Logical, don't
> you think? Forget PC and try to think like an SI (a superbly powerful
> and exquisitely rational machine).

*You* try to think like an SI. Why do you keep on insisting on the
preservation of the wholly human emotion of selfishness? I don't
understand how you can be so rational about everything except that!
Yes, we'll lose bonding, honor, love, all the romantic stuff. But not
altruism. Altruism is the default state of intelligence. Selfishness
takes *work*, it's a far more complex emotion. I'm not talking about
feel-good altruism or working for the benefit of other subjectivities.
I'm talking about the idea of doing what's right, making the correct
choice, acting on truth. Acting so as to increase personal power
doesn't make rational sense except in terms of a greater goal.

I *know* that we'll be bugs to SIs. I *know* they won't have any of the
emotions that are the source of cooperation in humans. I *still* see so
many conflicting bits of reasoning and evidence that you might as well
flip a coin as ask me whether they'll be benevolent. That's my
probability: 50%.

If Powers are hungry, why haven't they expanded continually, at
lightspeed, starting with the very first intelligent race in this
Universe? Why haven't they eaten Earth already? Or if Powers upload
mortals because of a mortally understandable chain of logic, so as to
encourage Singularities, why don't they encourage Singularities by
sending helper robots? We aren't the first ones. There are visible
galaxies so much older that they've almost burned themselves out. I
don't see *any* reasonable interpretation of SI motives that is
consistent with observed evidence. Even assuming all Powers commit
suicide just gives you the same damn question with respect to mortal aliens.

And, speaking of IA motivations, look at *me*. Are my motivations
human? Not by your definition. That's all from some trivial little
twist in the quantitative levels of abilities... nothing qualitative.

> Or are you hoping for an insane Superpower? Not something you'd
> want to be around, I reckon.

No, *you're* the one who wants an insane SI. Selfishness is insane.
Anything is insane unless there's a rational reason for it, and there is
no rational reason I have ever heard of for an asymmetrical world-model.
Using asymmetric reflective reasoning, what you would call
"subjectivity", violates Occam's Razor, the Principle of Mediocrity,
non-anthropocentrism, and... I don't think I *need* any "and" after that.

> Synchronized uploading would create several SIs at once, and though
> there's a chance that they'd decide to fight each other for supremacy,
> it's more likely that they'd settle for some kind of compromise.

Or that they'd merge.

> > Maybe you don't have the faintest speck of charity in your soul, but if
> > uploading and upgrading inevitably wipes out enough of your personality
> > that anyone would stop being cooperative - well, does it really make
> > that much of a difference who this new intelligence "started out" as?
> > It's not you. I know that you might identify with a selfish SI, but my
> > point is that if SIs are *inevitably* selfish, if *anyone* would
> > converge to selfishness, that probably involves enough of a personality
> > change in other departments that even you wouldn't call it you.
>
> To transcend is to change, dramatically, about that I have no doubt.
> So what, I'm not who I was when I was, say, 2 or 5 or 12. In some
> aspects I'm the polar opposite of what I was then, but I still consider
> myself to be me. Drugs can change the mind temporarily almost
> beyond recognition, and while you dream your dream persona can
> be quite different from the "real" you, both outside and inside, yet
> you still feel that it's "you". So, I'm not too worried about ascension-
> related personality changes, as long as I remain conscious and
> reasonably in control while it happens. Sooner or later, the monkey-
> man will have to pass on.

I find it hard to believe you can be that reasonable about sacrificing
all the parts of yourself, and so unreasonable about insisting that the
end result start out as you. If two computer programs converge to
exactly the same state, does it really make a difference to you whether
the one labeled "den Otter" or "Bill Gates" is chosen for the seed?

> > > -Stopping you from writing an AI wouldn't be all that hard, if I really
> > > wanted to. ;-)
> >
> > Sure. One bullet, no more Specialist. Except that that just means it
> > takes a few more years. You can't stop it forever.
>
> Maybe not forever, but perhaps long enough to tip the balance in favor
> of uploading.

See below.

> > All you can do is
> > speed up the development of nanotech...relatively speaking. We both
> > know you can't steer a car by selectively shooting out the tires.
>
> No, but you *can* slow it down that way.

Of course you can. But does it really matter all that much, to either
of us, whether a given outcome happens in ten years or twenty? What
matters is the relative probabilities of the outcomes, and trying to
slow things down may increase the probability of *your* outcome relative
to *my* outcome, but it also increases the probability of planetary
destruction relative to *either* outcome... increases it by a lot more.
You can't selectively shoot the Yudkowsky who wants to bring about a
pure Singularity without also killing the Yudkowsky who's working on
neurosurgical intelligence enhancement and the Yudkowsky who's studying
AI motivations.

> > > You can run and/or hide from nanotech, even
> > > fight it successfully, but you can't do that with a superhuman
> > > AI, i.e. nanotech leaves some room for error, while AI doesn't (or
> > > much less in any case). As I've said before, intelligence is the
> > > ultimate weapon, infinitely more dangerous than stupid nanites.
> >
> > Quite. And an inescapable one. See, what *you* want is unrealistic
> > because you want yourself to be the first one to upload,
>
> That's *among* the first to upload, which is something else entirely.
> Well, yes of course I want that; after all, the alternative is to meekly
> wait and hope that whoever/whatever turns SI first will have mercy on
> your soul. If I had that kind of attitude I'd be a devout Christian, not
> a transhumanist. Wanting to be among the first to upload is morally
> right, if nothing else, just like signing up for suspension is morally
> right, regardless of whether it will work or not. It's man's duty (so to
> speak) to reject oppression of any kind, which means spitting death
> in the face, among other things. AI could very well be death/
> oppression in sheep's clothing (which reminds me of the movie
> "Screamers", btw, with the "cute" killer kid), so we should treat it
> accordingly.
>
> > which excludes
> > you from cooperation with more than a small group
>
> Theoretically a group of almost any size could do this, more or
> less SETI-style (but obviously with a good security system in
> place to prevent someone from ascending on the sly). I'm not
> excluding anyone, people exclude *themselves* by either not
> caring or giving in to defeatism, wishful thinking, etc.

I see. You're going to simultaneously upload six million people? And
then upgrade them in such a way as to maintain synchronization of
intelligence at all times? Probability: Ze-ro.

> > and limits your
> > ability to rely on things like open-source projects and charitable
> > foundations.
>
> Why would it limit that ability? Even if you'd want to keep your project
> secret you could cooperate with people and organizations which
> might somehow advance your cause, without them ever knowing.
> Happens all the time. Anyway, it *isn't* secret. On the contrary,
> it's all over the web.

I think you overestimate the tendency of other people to be morons.
"Pitch in to help us develop an open-source nanotechnology package, and
we'll conquer the world, reduce you to serfdom, evolve into gods, and
crush you like bugs!" Even "Help us bring about the end of the world in
such a way that it means something" has more memetic potential than
that. There's a *lot* more evolution devoted to avoiding suckerhood
than lemminghood.

> > What *they* want is unrealistic because they want to
> > freeze progress.
>
> Who is "they"?

The ones who want to keep Life As We Know It around indefinitely.

> > Both of you are imposing all kinds of extra constraints. You're always
> > going to be at a competitive disadvantage relative to a pure
> > Singularitarian
>
> What's a "pure" Singularitarian anyway, someone who wants a
> Singularity asap at almost any cost? Someone who wants a
> Singularity for its own sake?

Yep.

> > or the classic "reckless researcher", who doesn't demand
> > that the AI be loaded down with coercions, or that nanotechnology not be
> > unleashed until it can be used for space travel, or that nobody uploads
> > until everyone can do it simultaneously, or that nobody has access to
> > the project except eight people, and so on ad nauseam. The open-source
> > free-willed AI project is going to be twenty million miles ahead while
> > you're still dotting your "i"s and crossing your "t"s.
>
> Just because something is easier, doesn't mean that it's the right
> thing to do. Instead of trying to find an intelligent solution, you're
> actively contributing to the problem; it's like punching holes in an
> already sinking ship (and actually taking great pride in it too), while
> instead you should be looking for, or building, a life raft.

Why, thank you! I rather like that metaphor. The one about punching
holes in the ship, I mean, not the part about the life raft. There is
no life raft. Humanity *will* sink. That is simply not something
subject to alteration. Everything must either grow, or die. If we
embrace the change, we stand the best chance of growing. If not, we
die. So let's punch those holes and hope we can breathe water.

> > A-priori chance that you, personally, can be in the first 6 people to
> > upload: 1e-9.
> > Extremely optimistic chance: 1%
>
> Why 6? It could be 600 or 6000 for all I care, as long as uploading
> happens simultaneously. If, say, half of all serious transhumanists
> decided to go for it [uploading], we'd each stand more than a
> 1% chance, simply because 99.99...% of the world's population
> lacks vision.

Believe me, den Otter, if I were to start dividing the world's
population into two camps by level of intelligence, we wouldn't be on
the same side. You may be high-percentile but you're still human, not human-plus-affector.

And, once again, you are being unrealistic about the way technologies
develop. High-fidelity (much less identity-fidelity) uploading simply
isn't possible without a transhuman observer to help. I could be wrong
about this, but I really don't think I am - not from looking at the
technology. Again, those 6000 are suicide volunteers. Any uploadee is
a suicide volunteer until there's an SI (whether IA or AI) to help.
There just isn't any realistic way of becoming the One Power because a
high-fidelity transition from human to Power requires a Power to help.

> > Extremely pessimistic chance that AIs are benevolent: 10%
> >
> > Therefore it's 10 times better to concentrate on AI.
>
> Well, see above. Besides, a 90% chance of the AI killing us
> isn't exactly an appealing situation. Would you get into a
> machine that kills you 90% of the time, and gives total,
> unprecedented bliss 10% of the time? The rational thing is
> to look for something with better odds...

Yes, but you haven't offered me better odds. You've asked me to accept
a 1% probability of success instead. I think you're just favoring that
1% probability because it *looks* like something you can increase, while
the probability of a friendly SI is absolute. Well, but IMHO the
probability of a friendly IA is absolute. And even if you can increase
the probability, you aren't going to be able to increase it to 10%. I
think you're taking a very sentimental and unrealistic approach to
navigating the future.

> Humanity (those not in the project) would benefit too from
> the mass upload approach, because either [uploaded] people
> retain the key parts of their original personality, which means
> that the "good guys" (fanatical altruists) would protect the
> mehums, while the "bad guys" probably wouldn't risk a lethal
> conflict over the issue and some compromise would be made,
> OR personalities would change beyond recognition in which
> case humanity wouldn't be any worse off than in the case of
> an AI transcension. Conclusion: survival chances likely better
> than 10% for *everyone*, and up to 50% for those directly
> involved, depending on speed of AI development, nanotech
> etc. A long shot, but the only one that makes sense.

Well, this is where I fundamentally disagree with you. It comes down to
that "OR" statement you made. I think that the second branch is much
more likely - and humanity wouldn't be any worse off than in the case of
an AI Transcend, but said Transcend becomes much less probable relative
to nuclear war or grey goo problems. What you're overlooking is that
what counts isn't just the relative desirability of the *results* of the
two scenarios, but the relative *probability* of the two scenarios
compared to known undesirable scenarios. An IA Transcend involves so
many constraints, preconditions, and necessities that it would need a
*huge* desirability advantage to be preferable to an AI Transcend.
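
To make that arithmetic concrete, here is a minimal sketch in Python. The
10% and 50% figures are the ones already quoted in this thread; the scenario
probabilities (0.50 and 0.02) are illustrative assumptions of my own, not
anyone's actual estimates:

    def expected_value(p_scenario, p_good_outcome):
        # Chance the scenario happens at all, times chance it turns out well.
        return p_scenario * p_good_outcome

    # AI Transcend: fewer preconditions, so more likely to happen before
    # nuclear war or grey goo; pessimistic 10% chance of benevolence.
    # (0.50 is an assumed scenario probability, for illustration only.)
    ai_path = expected_value(0.50, 0.10)

    # Mass-upload IA Transcend: many constraints and preconditions, so far
    # less likely to happen first; 50% payoff if it works (den Otter's own
    # upper estimate). The 0.02 scenario probability is likewise assumed.
    ia_path = expected_value(0.02, 0.50)

    print("AI path: %.3f   IA path: %.3f" % (ai_path, ia_path))
    # With these numbers the AI path wins despite the worse payoff, which is
    # the point: what counts is desirability *times* relative probability.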

As far as I can tell, your evaluation of the desirability advantage is
based solely on your absolute conviction that rationality is equivalent
to selfishness. I've got three questions for you on that one. First:
Why is selfishness, an emotion implemented in the limbic system, any
less arbitrary than honor? Second: I know how to program altruism into
an AI; how would you program selfishness? Third: What the hell makes
you think you know what rationality really is, mortal?

And I think an IA Transcend has a definite probability of being less
desirable than an AI Transcend. Even from a human perspective. In
fact, I'll go all the way and say that from a completely selfish
viewpoint, not only would I rather trust an AI than an upload, I'd
rather trust an AI than *me*. And I mean that from a strictly selfish
standpoint! Human minds are too goddamn messy and they are NOT designed
to tolerate architectural changes. The risk of destructive insanity is
far, far higher. No matter what it is you want to preserve in an SI, it
has a better chance of being there - zero, in my opinion, but a more
plausible zero - if you put it in a clean, elegant AI instead of a messy
human conviction. I think you're expressing a faith in the human mind
that borders on the absurd just because you happen to be human.

> What needs to be done: start a project with as many people as
> possible to figure out ways to a) enhance human intelligence
> with available technology, using anything and everything that's
> reasonably safe and effective

*Laugh*. And he says this, of course, to the author of "Algernon's Law:
A practical guide to intelligence enhancement using modern technology."
Which is the *other* problem with steering a car by shooting out the
tires... Taking potshots at me would do a lot more to cripple IA than
AI. And, correspondingly, going on the available evidence, IAers will
tend to devote their lives to AI.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

