From: den Otter (neosapient@geocities.com)
Date: Sat Aug 15 1998 - 02:30:28 MDT
Peter C. McCluskey wrote:
>
> neosapient@geocities.com (den Otter) writes:
> >In the case of SI, any head start, no matter how slight, can mean all
> >the difference in the world. A SI can think, and thus strike, much
> >faster than less intelligent, disorganized masses. Before others
> >could transcend themselves or organize resistance, it would be too late.
>
> I predict that the first SI will think slower than the best humans.
Very unlikely, even for a "simple" AI. Our brains work with cumbersome
electrochemical processes, and are easily outmatched in speed by any
PC. If I remember correctly, it was mentioned in an earlier thread that
Deep Blue (or a similar computer) scored most of its points in the speed
matches, while it was only mediocre in the longer matches where its human
opponent had more time to think. And that's just a contemporary
computer; by the time SI arrives we'll be at least a couple
of decades down the road (with a doubling of computer power every
1.5 years or so). So, IMO the SI will almost certainly be superfast
(driven by CPUs with *many* times the current capacity), and
by definition Super Intelligent. That's a winning combination.
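Just to make that arithmetic concrete, here's a rough back-of-the-envelope
sketch in Python (the year counts and the 1.5-year doubling period are
assumptions, not a forecast):

    # How much raw computing power does steady doubling buy over time?
    def projected_speedup(years, doubling_period=1.5):
        """Multiplicative increase in raw compute after `years` years."""
        return 2 ** (years / doubling_period)

    for years in (10, 20, 30):
        print(f"after {years} years: ~{projected_speedup(years):,.0f}x today's power")

Twenty years of 1.5-year doublings already works out to roughly a
10,000-fold increase in raw speed, before any architectural improvements.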
> Why would people wait to upload or create an artificial intelligence
> until CPU power supports ultrafast minds if they can figure out how
> to do it earlier?
Unless I'm very mistaken, there will *already* be hyperfast CPUs by the
time AI and uploading become possible. CPU technology is simply
less complicated than uploading human brains or creating intelligent,
conscious machines.
> We have plenty of experience with small differences in speed of thought,
> and have seen no sign that thinking faster than others is enough to
> make total conquest (as opposed to manipulation) possible.
Although speed is very important, it's the SI's superior *intelligence*
that really gives it an edge. Combine that with nanotech, with which
a SI can rapidly add to its computing mass, refine itself, make all
kinds of external tools, etc., _all without outside help_. Total autonomy
gives it unprecedented freedom. A SI could kill everything else and
suffer little inconvenience from it, while a normal human who causes
WW3 will almost certainly die himself eventually, even if he hides
in a high-tech bunker.
> A) attempting world conquest is very risky, as it unites billions
> of minds around the goal of attempting to conquer you.
Most people will simply never know what hit them. Even if everyone
somehow found out about the SI's plans (unlikely that it would be
so slow and sloppy), there wouldn't be much that 99.999...% of the
people on this planet could do. While everyone would be panicking
and frustrating each other's efforts, the SI(s) would methodically
do its (their) thing, helped by an unparalleled knowledge of the human
psyche.
> B) there's no quick way for the SI to determine that there aren't
> other civilizations in the universe which might intend to exterminate
> malevolent SIs.
The threat from creatures on earth is real, a proven fact, while
hostile aliens are just a hypothesis (one that lacks any proof). It's
comparable to cryonics and religion: would you refuse to have your
body frozen upon death because of a minuscule chance that there is a
god, and that he/she/it doesn't like it? In other words: it wouldn't
stop a rational SI from taking out all known competition.
> C) given the power you are (mistakenly) assuming the SI has over the
> rest of the world, I suspect the SI could also ensure its safety by
> hindering the technological advances of others.
Given enough time, the SI would surely reach a state where it can
afford such luxuries. The thing is that it likely wouldn't have
enough time; it would be strong enough to stop the competition, but
likely not smart/powerful enough to do it gently. It's much easier to
destroy than to subdue, and with other ascending forces (with
undoubtedly similar intentions) breathing down its neck, the choice
would be easy.
> > However, becoming a SI will probably change
> >your motivational system, making any views you hold at the beginning
> >of transcension rapidly obsolete. Also, being one of the first may
> >not be good enough. Once a SI is operational, mere hours, minutes and
> >even seconds could be the difference between success and total failure.
>
> An unusual enough claim that I will assume it is way off unless
> someone constructs a careful argument in favor of it.
Are you referring to the motivation change or ultra-fast ascension
(or both)? Anyway, I can't imagine why our ancient, reproduction-
driven and hormone-controlled motivations wouldn't quickly become
as obsolete as the biological body from which they've sprung. It
would be arrogant indeed to think that we have already determined
"absolute truth", that our morals are definitive. Besides, even
without the massive transformation of man into SI, your motivations
would likely change over time.
As for the speed: to a being with thousands (or perhaps many more)
times our brain speed, mere seconds are like years. While we
would stand around like statues (from the SI's perspective), it
would consider, simulate, reject, improve, etc., thought upon thought,
plan upon plan. Computers are *already* much faster than humans,
and they're getting faster every year.
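To put numbers on that subjective-time arithmetic, here's a quick sketch
(the speedup factors below are purely illustrative assumptions):

    # Subjective thinking time gained per wall-clock second at a given speedup.
    def subjective_seconds(wall_clock_seconds, speedup):
        return wall_clock_seconds * speedup

    for speedup in (1_000, 1_000_000, 1_000_000_000):
        hours = subjective_seconds(1.0, speedup) / 3600
        print(f"{speedup:>13,}x faster: 1 second ~ {hours:,.2f} subjective hours")

Even at the low end of those assumed factors, each wall-clock second buys
the SI what would take a human many minutes of deliberation, and the gap
only widens as the hardware scales up.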
> >After all, a "malevolent" SI could (for example) easily sabotage most of
> >the earth's computer systems, including those of the competition, and
> >use the confusion to gain a decisive head start.
> Sabotaging computer systems sounds like a good way of reducing the
> malevolent SI's power. Its power over others is likely to come from
> its ability to use those systems better.
A SI would probably get off-planet ASAP, using planets, asteroids and
anything else that's out there to grow to a monstrous size, a massive
ball of pure brain power and armaments (including multiple layers
of active nanoshields or whatever else it thinks of). A deathstar.
Given the current pace of space exploration & colonization, it should
meet little resistance. The earth would be a sitting duck for a massive
EMP/nuclear strike, for example.
> Driving people to reduce
> their dependence on computers would probably ensure they are more
> independent of the SI's area of expertise.
One major area of a SI's expertise is wholesale slaughter, and it
really doesn't matter much whether you die behind your computer
or while reading a book, for example.
> Also, there will probably be enough secure OSs by then that sabotaging
> them wouldn't be as easy as you imply (i.e. the SI would probably need
> to knock out the power system).
Of course there's no saying what tactic a SI would use; after all, it
would be a lot more intelligent than all of us combined. But if even
a simple human can think of crude tactics that would do the trick, you
can very well imagine that a SI wouldn't have a hard time doing it.