From: den Otter (neosapient@geocities.com)
Date: Mon Aug 10 1998 - 06:12:43 MDT
Max More wrote:
>
> At 01:37 PM 8/9/98 +0200, Den Otter wrote:
> >
> >Yes, but this is fundamentally different; godhood isn't something
> >that one would sell (or give away) like one would do with minor
> >technological advances such as phones, TVs cars etc. Just like nukes
> >were (and are) only for a select few, so will hyperintelligence,
> >nanotech, uploading etc. initially be only available to a select
> >group, which will most likely use them to become gods. There is
> >no rational reason to distribute this kind of power once you have
> >it.
> >
> >Powerful businessmen still need others to make and buy their products,
> >and dictators and presidents still need their people to stay in power
> >& to keep the country running, but a SI needs NO-ONE, it's
> >supremely autonomous. I can't imagine why it would share its
> >awesome power with creatures that are horribly primitive from its point
> >of view. Would *we* uplift ants/mice/dogs/monkeys to rule the world
> >as our equals? I think not.
>
> "No rational reason" is a strong claim. I doubt your claim. First, your
> view surely depends on a Singularitarian view that superintelligence will
> come all at once, with those achieving it pulling vastly far away from
> everyone else. I don't expect things to work out that way.
New technologies are usually first the domain of government/company
research facilities, then of the relatively few rich and powerful who
can afford the prototypes, then of the well-off middle classes, etc.
Years or even decades go by before a new technology is available to the
general population. IMO, the pioneers of AI, nanotech, uploading,
chip-brain interface etc. (which are all interconnected btw, so that
the emergence of one of these technologies would considerably speed up
the advent of the others) will figure out how to transcend at
approximately the same time that the new products would normally have
become more widely available, i.e. when the major bugs have been taken
out.
In the case of SI, any head start, no matter how slight, can mean all
the difference in the world. A SI can think, and thus strike, much
faster than less intelligent, disorganized masses. Before others
could transcend themselves or organize resistance, it would be too late.
So yes, someone will be the first to have SI, and unless the rest are
only seconds or minutes behind (or have vastly superior technology),
they *will* be neutralized *if* the SI is malevolent, or simply cold
and rational.
> I've explained
> some of my thinking in the upcoming Singularity feature that Robin Hanson
> is putting together for Extropy Online.
I'm looking forward to reading it... I do believe I have a
solid case though: the chances that SI will be only for a small group
or even just one entity are *at least* 50%, likely more.
> Second, I also doubt that the superintelligence scenario is so radically
> different from today's powerful business people. [I don't say "businessmen"
> since this promotes an unfortunate assumption about gender and business.]
Well, that's the catch; SIs won't be "people" in any normal
sense, so there's absolutely no guarantee that they will behave (more or
less) like people. The situation is radically different because SI is
radically different, basically an alien life form.
> You could just as well say that today's extremely wealthy and powerful
> business people should have no need to benefit poor people. Yet, here we have oil
> companies building hospitals and providing income in central Africa.
This is a typical feature of our interdependent society. Since no-one
has the power to be fully autonomous (do whatever you want and get
away with it), PR is important.
> I just
> don't buy the idea that each single SI will do everyone alone.
Surely you mean every*thing*? ;) IMO, a real SI will be fully
autonomous; it can soak up most of humanity's knowledge via
the web, reprogram and improve itself (evolve), manipulate its
surroundings with nanoprobes, macrobots etc. If it wants something, it
can simply take it. In other words, it's a world in itself.
> Specialization and division of labor will still apply.
Yes, but it will be parts of the being itself that will perform
the specialized tasks, not outside entities.
> that some SIs will
> want to help the poor humans upgrade because that will mean adding to the
> pool of superintelligences with different points of view and different
> interests.
That could be the case (roughly a 33% chance), but it wouldn't be the
most rational approach. Simply put: more (outside) diversity also means
a greater risk of being attacked, with possibly fatal consequences.
If a SI is the only intelligent being in the universe, then it's
presumably safe. In any case *safer* than with known others around.
Now, how can outside diversity benefit the SI? Through a) entertainment,
or b) providing extra computing power to solve problems that might
threaten the SI, like the end of the universe (if there is such a thing
at all). Do these advantages outweigh the risks? Rationally speaking,
they don't. A SI has ample computing power and intelligence to entertain
itself and to solve just about any problem: it's auto-evolving and can
always grow extra brain mass to increase its output, and given enough
time it could even turn the whole known universe into a computer. And
time it has, being technically immortal. Why would it
want other entities around that are powerful, unpredictable, beyond
its control? There is a lot more at stake for a SI (eternity), so
presumably it will be *very* careful. I think that the need for company
is a typically human thing, us being social apes and all, and it no
longer applies to SI.
> Let me put it this way: I'm pretty sure your view is incorrect, because I
> expect to be one of the first superintelligences, and I intend to uplift
> others.
A bold claim indeed ;) However, becoming a SI will probably change
your motivational system, making any views you hold at the beginning
of transcension rapidly obsolete. Also, being one of the first may
not be good enough. Once a SI is operational, mere hours, minutes and
even seconds could be the difference between success and total failure.
After all, a "malevolent" SI could (for example) easily sabotage most of
the earth's computer systems, including those of the competition, and
use the confusion to gain a decisive head start.
> Or, are you planning on trying to stop me from bringing new members
> into the elite club of SIs?
I'm not planning to stop anyone, but what I want is irrelevant anyway
since I don't really expect to be among the lucky few that make it
(I will certainly try, though). Others *will* likely try to stop you,
not just from helping your fellow man, but also from transcending
yourself. After all, people won't be selected for their upright morals,
but for wealth, power, efficiency (ruthlessness) and plain luck. And
who's to tell how ascending will influence one's character? It
certainly wouldn't be unreasonable to assume that there's an
equal chance that SIs will be indifferent, evil or benevolent (or
some strange mix of these traits, way beyond our current
understanding).