Re: 5,000,000,000 transhumans?

From: Peter C. McCluskey (pcm@rahul.net)
Date: Fri Aug 14 1998 - 09:26:31 MDT


 neosapient@geocities.com (den Otter) writes:
>In the case of SI, any head start, no matter how slight, can mean all
>the difference in the world. A SI can think, and thus strike, much
>faster than less intelligent, disorganized masses. Before others
>could transcend themselves or organize resistance, it would be too late.

 I predict that the first SI will think slower than the best humans.
Why would people wait to upload or create an artificial intelligence
until CPU power supports ultrafast minds if they can figure out how
to do it earlier?

 We have plenty of experience with small differences in speed of thought,
and have seen no sign that thinking faster than others is enough to
make total conquest (as opposed to manipulation) possible.

>That could be the case (roughly 33 %), but it wouldn't be the most
>rational approach. Simply put: more (outside) diversity also means
>a greater risk of being attacked, with possibly fatal consequences.
>If a SI is the only intelligent being in the universe, then it's
>presumably safe. In any case *safer* than with known others around.

 A) attempting world conquest is very risky, as it unites billions
of minds around the goal of defeating you. This should
be especially true in a period of rapid technological change when
many new inventions are popping up but haven't had their military
implications tested. I can't see how the risks of diversity could
be as big as this.
 B) there's no quick way for the SI to determine that there aren't
other civilizations in the universe which might intend to exterminate
malevolent SIs.
 C) given the power you are (mistakenly) assuming the SI has over the
rest of the world, I suspect the SI could also ensure its safety by
hindering the technological advances of others.

>> Let me put it this way: I'm pretty sure your view is incorrect, because I
>> expect to be one of the first superintelligences, and I intend to uplift
>> others.
>
>A bold claim indeed ;) However, becoming a SI will probably change
>your motivational system, making any views you hold at the beginning
>of transcension rapidly obsolete. Also, being one of the first may
>not be good enough. Once a SI is operational, mere hours, minutes and
>even seconds could be the difference between success and total failure.

 An unusual enough claim that I will assume it is way off unless
someone constructs a careful argument in favor of it.

>After all, a "malevolent" SI could (for example) easily sabotage most of
>the earth's computer systems, including those of the competition, and
>use the confusion to gain a decisive head start.

 Sabotaging computer systems sounds like a good way of reducing the
malevolent SI's power. Its power over others is likely to come from
its ability to use those systems better. Driving people to reduce
their dependence on computers would probably ensure they are more
independent of the SI's area of expertise.
 Also, there will probably be enough secure OSs by then that sabotaging
them wouldn't be as easy as you imply (i.e. the SI would probably need
to knock out the power system).

-- 
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list

