Re: IA vs. AI was: longevity vs singularity

From: den Otter (neosapient@geocities.com)
Date: Tue Jul 27 1999 - 11:30:42 MDT


----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>

> Has it occurred to you that if the first uploads *are* charitably
> inclined, then it's *much* safer to be in the second wave? The first
> uploads are likely to be more in the nature of suicide volunteers,

The first uploads would no doubt be animals (of increasing complexity),
followed by tests with humans (preferably people who don't grasp
the full potential of being uploaded, for obvious reasons). Of course,
the strictest possible safety measures should be observed at all
times. Only when these tests have been concluded with satisfactory
results should the (actual) synchronized upload procedure be executed.

> especially when you consider that a rough, destructive, but adequate
> scan is likely to come before a perfect, nondestructive scan.

Well, IMHO the scan version of uploading is utterly useless from the
individual's point of view (copy paradox and all that), and I certainly
wouldn't waste any time on this method. In fact, I'm just as opposed
to it as I am to conscious AI.

> You're staking an awful lot on the selfishness of superintelligences.

I'm simply being realistic; when you realize how incredibly slow,
predictable and messy humans will be compared to even an early
SI, it is hard to imagine that it will bother helping us. Do we "respect"
ants? Hardly. Add to that the fact that the SI either won't have our
emotional (evolutionary) baggage to start with, or at least can modify
it at will, and it becomes harder still to believe that it would be
willing to keep humans around, let alone actively uplift them.

Why would it want to do that? There is no *rational* reason to
allow or create competition, and the SI would supposedly be the
very pinnacle of rationality. It's absurd to think that a true SI would
still run on the programming of tribal monkey-men, who are
weak and imperfect and therefore forced to cooperate. That's why
evolution has come up with things like altruism, bonding, honor
and all the rest. Nice for monkey-men, but utterly useless
for a supreme, near-omnipotent and fully self-contained SI. If
it has a shred of reason in its bloated head, it will shed those
vestigial handicaps asap, if it ever had them in the first place.
And of course, we'd be next as we'd just be annoying microbes
which can spawn competition. Competition means loss of control
over resources, and a potential threat. Not good. *Control* is good.
Total control is even better. The SI wouldn't rest until it had
brought "everything" under its control, or died trying. Logical, don't
you think? Forget PC and try to think like an SI (a superbly powerful
and exquisitely rational machine).

Or are you hoping for an insane Superpower? Not something you'd
want to be around, I reckon.

Synchronized uploading would create several SIs at once, and though
there's a chance that they'd decide to fight each other for supremacy,
it's more likely that they'd settle for some kind of compromise.

> Maybe you don't have the faintest speck of charity in your soul, but if
> uploading and upgrading inevitably wipes out enough of your personality
> that anyone would stop being cooperative - well, does it really make
> that much of a difference who this new intelligence "started out" as?
> It's not you. I know that you might identify with a selfish SI, but my
> point is that if SIs are *inevitably* selfish, if *anyone* would
> converge to selfishness, that probably involves enough of a personality
> change in other departments that even you wouldn't call it you.

To transcend is to change, dramatically; about that I have no doubt.
So what? I'm not who I was when I was, say, 2 or 5 or 12. In some
aspects I'm the polar opposite of what I was then, but I still consider
myself to be me. Drugs can change the mind temporarily almost
beyond recognition, and while you dream, your dream persona can
be quite different from the "real" you, both outside and inside, yet
you still feel that it's "you". So, I'm not too worried about ascension-
related personality changes, as long as I remain conscious and
reasonably in control while it happens. Sooner or later, the monkey-
man will have to pass on.

> > -Stopping you from writing an AI wouldn't be all that hard, if I really
> > wanted to. ;-)
>
> Sure. One bullet, no more Specialist. Except that that just means it
> takes a few more years. You can't stop it forever.

Maybe not forever, but perhaps long enough to tip the balance in favor
of uploading.

> All you can do is
> speed up the development of nanotech...relatively speaking. We both
> know you can't steer a car by selectively shooting out the tires.

No, but you *can* slow it down that way.
 
> > You can run and/or hide from nanotech, even
> > fight it successfully, but you can't do that with a superhuman
> > AI, i.e. nanotech leaves some room for error, while AI doesn't (or
> > much less in any case). As I've said before, intelligence is the
> > ultimate weapon, infinitely more dangerous than stupid nanites.
>
> Quite. And an inescapable one. See, what *you* want is unrealistic
> because you want yourself to be the first one to upload,

That's *among* the first to upload, which is something else entirely.
Well, yes of course I want that; after all, the alternative is to meekly
wait and hope that whoever/whatever turns SI first will have mercy on
your soul. If I had that kind of attitude I'd be a devout Christian, not
a transhumanist. Wanting to be among the first to upload is morally
right, if nothing else, just like signing up for suspension is morally
right, regardless of whether it works or not. It's man's duty (so to
speak) to reject oppression of any kind, which means spitting death
in the face, among other things. AI could very well be death/
oppression in sheep's clothing (which reminds me of the movie
"Screamers", btw, with the "cute" killer kid), so we should treat it
accordingly.

> which excludes
> you from cooperation with more than a small group

Theoretically a group of almost any size could do this, more or
less SETI-style (but obviously with a good security system in
place to prevent someone from ascending on the sly). I'm not
excluding anyone; people exclude *themselves* by either not
caring or giving in to defeatism, wishful thinking, etc.

> and limits your
> ability to rely on things like open-source projects and charitable
> foundations.

Why would it limit that ability? Even if you wanted to keep your project
secret, you could cooperate with people and organizations that
might somehow advance your cause, without them ever knowing.
Happens all the time. Anyway, it *isn't* secret. On the contrary,
it's all over the web.

> What *they* want is unrealistic because they want to
> freeze progress.

Who is "they"?

> Both of you are imposing all kinds of extra constraints. You're always
> going to be at a competitive disadvantage relative to a pure
> Singularitarian

What's a "pure" Singularitarian anyway, someone who wants a
Singularity asap at almost any cost? Someone who wants a
Singularity for its own sake?

> or the classic "reckless researcher", who doesn't demand
> that the AI be loaded down with coercions, or that nanotechnology not be
> unleashed until it can be used for space travel, or that nobody uploads
> until everyone can do it simultaneously, or that nobody has access to
> the project except eight people, and so on ad nauseam. The open-source
> free-willed AI project is going to be twenty million miles ahead while
> you're still dotting your "i"s and crossing your "t"s.

Just because something is easier doesn't mean that it's the right
thing to do. Instead of trying to find an intelligent solution, you're
actively contributing to the problem; it's like punching holes in an
already sinking ship (and actually taking great pride in it too), when
instead you should be looking for, or building, a life raft.
 
> A-priori chance that you, personally, can be in the first 6 people to
> upload: 1e-9.
> Extremely optimistic chance: 1%

Why 6? It could be 600 or 6000 for all I care, as long as uploading
happens simultaneously. If, say, half of all serious transhumanists
decided to go for it [uploading], we'd each stand more than a
1% chance, simply because 99.99...% of the world's population
lacks vision.

> Extremely pessimistic chance that AIs are benevolent: 10%
>
> Therefore it's 10 times better to concentrate on AI.

Well, see above. Besides, a 90% chance of the AI killing us
isn't exactly an appealing situation. Would you get into a
machine that kills you 90% of the time, and gives total,
unprecedented bliss 10% of the time? The rational thing is
to look for something with better odds...
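
To make concrete what these figures imply, here's a minimal back-of-the-envelope
sketch in Python. The first two numbers are just the guesses quoted above; the
group sizes are purely my own assumptions for illustration, not data:

# Rough comparison of the two routes, using only the guesses quoted
# above. Nothing here is data; the group sizes are assumptions of mine,
# purely for illustration.

p_ai_benevolent = 0.10   # the "extremely pessimistic" chance that AIs are benevolent
p_first_six     = 0.01   # the "extremely optimistic" chance of being among the first 6

print(p_ai_benevolent / p_first_six)        # 10.0 -> the "10 times better" claim

# With a synchronized mass upload, "first 6" is the wrong denominator:
slots      = 600    # assumed number of simultaneous upload slots
candidates = 2000   # assumed number of people who'd seriously try for one
p_included = min(1.0, slots / candidates)   # 0.3 under these assumptions

print(p_ai_benevolent / p_included)         # ~0.33 -> the ratio flips

Whether the ratio actually flips depends entirely on the assumed group size
and on how survivable the upload itself turns out to be; the point is only
that the "10 times better" conclusion is not robust to the size of the
first wave.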

Humanity (those not in the project) would benefit too from
the mass upload approach, because either [uploaded] people
retain the key parts of their original personality, which means
that the "good guys" (fanatical altruists) would protect the
mehums, while the "bad guys" probably wouldn't risk a lethal
conflict over the issue and some compromise would be made,
OR personalities would change beyond recognition, in which
case humanity wouldn't be any worse off than in the case of
an AI transcension. Conclusion: survival chances likely better
than 10% for *everyone*, and up to 50% for those directly
involved, depending on speed of AI development, nanotech
etc. A long shot, but the only one that makes sense.

What needs to be done: start a project with as many people as
possible to figure out ways to a) enhance human intelligence
with available technology, using anything and everything that's
reasonably safe and effective, and b) develop as detailed a
scenario as possible for actual (gradual) uploading that can be
implemented as soon as nanotech becomes functional. c)
Spread awareness among AI researchers, related
individuals and institutions about the possible negative
implications for humanity and themselves. AI can be very
useful for all sorts of things, including human uploading, but
for chrissake don't make it *too* smart. d) Obviously, some
people are better suited to take care of funding, while others
do research, and yet others fill the gaps in between. There's
something useful to do for everyone.

Well, it ain't gonna happen, of course, but it would certainly be
a good idea. Better than anything suggested so far, anyway.


