Re: IA vs. AI was: longevity vs singularity

From: Max More (max@maxmore.com)
Date: Wed Jul 28 1999 - 10:55:41 MDT


At 07:30 PM 7/27/99 +0200, den Otter wrote:
>

>> You're staking an awful lot on the selfishness of superintelligences.
>
>I'm simply being realistic; when you realize how incredibly slow,
>predictable and messy humans will be compared to even an early
>SI, it is hard to imagine that it will bother helping us. Do we "respect"
>ants? Hardly. Add to that the fact that the SI either won't have our
>emotional (evolutionary) baggage to start with, or at least can modify
>it at will, and it becomes harder still to believe that it would be
>willing to keep humans around, let alone actively uplifting them.
>
>Why would it want to do that? There is no *rational* reason to
>allow or create competition, and the SI would supposedly be the
>very pinnacle of rationality.

I find this puzzling. By the same reasoning, we should want to keep
children uneducated and ignorant, since they will become competition for
us. Both arguments assume that more people with intelligence and ability come
at a cost to those who already have it: a zero-sum assumption. Clearly the
economy does not work this way. Right now, most of Africa has less wealth,
education, health, and technological ability than the Americas and Europe.
I would see Africa's ascendance not as a competitive threat but as a
massive contribution to the total output of ideas, goods, and services.

Why should SI's see turning humans into uploads as competition in any sense
that harms them? It would just mean more persons with whom to have
productive exchanges.

>It's absurd to think that a true SI would
>still run on the programming of tribal monkey-men, which are
>weak and imperfect and therefore forced to cooperate. That's why
>evolution has come up with things like altruism, bonding, honor
>and all the rest. Nice for monkey-men, but utterly useless
>for a supreme, near-omnipotent and fully self-contained SI. If
>it has a shred of reason in its bloated head, it will shed those
>vestigial handicaps asap, if it ever had them in the first place.
>And of course, we'd be next as we'd just be annoying microbes
>which can spawn competition. Competition means loss of control
>over resources, and a potential threat. Not good. *Control* is good.

>Total control is even better. The SI wouldn't rest before it had
>brought "everything" under its control, or die trying. Logical, don't
>you think?

This must be where we differ. No, I don't think total control is desirable
or beneficial, even if it were me who had that total control. If true
omnipotence were possible, maybe what you are saying would follow, but
omnipotence is a fantasy to be reserved for religions. Even superpowerful
and ultraintelligent beings should benefit from cooperation and exchange.

>Well, yes of course I want that; after all, the alternative is to meekly
>wait and hope that whoever/whatever turns SI first will have mercy on
>your soul. If I had that kind of attitude I'd be a devout Christian, not
>a transhumanist. Wanting to be among the first to upload is morally
>right, if nothing else, just like signing up for suspension is morally
>right, regardless whether it will work or not. It's man's duty (so to
>speak) to reject oppression of any kind, which means spitting death
>in the face, among other things. AI could very well be death/
>oppression in sheep's clothing (which reminds me of the movie
>"Screamers", btw, with the "cute" killer kid), so we should treat it
>accordingly.

Despite my disagreement with your zero-sum assumptions (if I'm getting your
views right--I only just started reading this thread and you may simply be
running with someone else's assumptions for the sake of the argument), I
agree with this. While uploads and SI's may not have any inevitable desire
to wipe us out, some might well want to, and I agree that it makes sense to
deal with that from a position of strength.

I'm not sure how much we can influence the relative pace of research into
unfettered independent SIs vs. augmentation of human intelligence, but I
too favor the latter. Unlike Hans Moravec and (if I've read him right)
Eliezer, I have no interest in being superseded by something better. I
want to *become* something better.

Max

----------------------------------------------------------------------
Max More, Ph.D.
<max@maxmore.com> or <more@extropy.org>

http://www.maxmore.com

Implications of Advanced Technologies
President, Extropy Institute: http://www.extropy.org
EXTRO 4 Conference: Biotech Futures. See http://www.extropy.org/ex4/e4main.htm
----------------------------------------------------------------------
