From: Peter C. McCluskey (pcm@rahul.net)
Date: Thu Aug 20 1998 - 13:37:46 MDT
neosapient@geocities.com (den Otter) writes:
>Peter C. McCluskey wrote:
>> We have plenty of experience with small differences in speed of thought,
>> and have seen no sign that thinking faster than others is enough to
>> make total conquest (as opposed to manipulation) possible.
>
>Although speed is very important, it's the SI's superior *intelligence*
>that really gives it an edge.
If you expect us to believe this difference will come about as one sudden
leap when the first AI is created rather than through a gradual process of
improvement, and don't think that assumption needs any justification, then
you are nuts.
> Combine that with nanotech, with which
>a SI can rapidly add to its computing mass, refine itself, make all
>kinds of external tools etc, _all without outside help_. Total autonomy
>gives it unprecedented freedom.
None of this is likely to happen with the first AI. We will have time to
create a variety of AIs before they acquire control over external tools
this powerful.
>> A) attempting world conquest is very risky, as it unites billions
>> of minds around the goal of attempting to conquer you.
>
>Most people will simply never know what hit them. Even if everyone
>somehow found out about the SIs plans (unlikely that it would be
>so slow and sloppy), there wouldn't be much 99.999...% of the
>people on this planet could do.
You're claiming, for instance, that they couldn't turn off the power
to all the world's computers if that looked like the easiest way to stop
the SI?
>> B) there's no quick way for the SI to determine that there aren't
>> other civilizations in the universe which might intend to exterminate
>> malevolent SIs.
>
>The threat from creatures on earth is real, a proven fact, while
I've seen no hint of any such proof, and I'm fairly sure it would
be inconsistent with the claims you make about the SI's power.
>> C) given the power you are (mistakenly) assuming the SI has over the
>> rest of the world, I suspect the SI could also ensure its safety by
>> hindering the technological advances of others.
>
>Given enough time, the SI would surely reach a state where it can
>afford such luxuries. The thing is that it likely wouldn't have
>enough time; it would be strong enough to stop the competition, but
>likely not smart/powerful enough to do it gently. It's much easier to
>destroy than to subdue, and with the hot breath of other ascending
It might be easier to destroy than to make people subservient, but I
can't see how it's easier for an SI to destroy much of civilization than
it is to sabotage a few critical pieces of equipment or software in key
research labs.
And if it's really thinking thousands of times faster than those other
ascending powers, how do you expect them to catch up with its
technology even if it leaves them alone and improves its own power
as fast as possible?
>(or both)? Anyway, I can't imagine why our ancient, reproduction-
>driven and hormone-controlled motivations wouldn't become quickly
>as obsolete as the biological body from which they've sprung. It
>would be arrogant indeed to think that we have already determined
>"absolute truth", that our morals are definitive. Besides, even
>without the massive transformation of man into SI, your motivations
>would likely change over time.
Evolutionary forces may well cause some gradual change in motivations.
If SI is created via uploading, there is no obvious force to change the
motivations of any individual SI, and there is a desire to preserve one's
identity which should deter change.
For other types of SI, it's hard to generalize. So much depends on how
and why they are created.
>As for the speed: to a being with thousands (perhaps more) of
>times our brain speed, mere seconds are like years. While we
>would stand around like statues (from the SIs perspective), it
>would consider, simulate, reject, improve etc. thought upon thought,
If everything involved (CPU speed, inter-processor communications,
and sensory I/O) were sped up by factors of a thousand or more compared
to the human brain, and if none of the algorithms involved were
substantially less efficient than their human equivalents, then you
might be right.
If the first SI is achieved through uploading, the need to emulate
lots of quirks in neural behavior is likely to soak up enough CPU
power to make that thousandfold speedup unlikely.
With any other approach, it is unlikely that all the software needed
will be implemented as efficiently on the first successful attempt as
millions of years of evolution have made the human brain, so I expect
some SI thought processes to work quite slowly.
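To make the bottleneck arithmetic concrete, here's a rough back-of-the-envelope
sketch in Python. The function and every number in it are hypothetical
illustrations, not estimates of any real system; it simply treats the effective
speedup as bounded by the slowest component and divided by whatever overhead
emulation imposes.

  # Rough illustration only: treat the effective speedup over a human
  # brain as the minimum of the component speedups, divided by any
  # overhead spent emulating low-level neural quirks.  All numbers are
  # made up for the sake of the example.
  def effective_speedup(cpu, interconnect, sensory_io, emulation_overhead=1.0):
      return min(cpu, interconnect, sensory_io) / emulation_overhead

  print(effective_speedup(1000, 1000, 1000))       # everything 1000x, no overhead -> 1000.0
  print(effective_speedup(1000, 1000, 1000, 100))  # upload paying a 100x emulation cost -> 10.0
  print(effective_speedup(1000, 1000, 10))         # sensory I/O only 10x faster -> 10.0

The point is just that a single slow component or a large emulation overhead
is enough to erase most of the nominal hardware advantage.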
>plan upon plan. Computers are *already* much faster than humans,
Computers are much slower than humans even for simple problems like
Go. For hard problems like military strategy, computers are too slow to
measure.
>> Sabotaging computer systems sounds like a good way of reducing the
>> malevolent SI's power. Its power over others is likely to come from
>> its ability to use those systems better.
>
>A SI would probably get offplanet asap, using planets, asteroids and
>anything else that's out there to grow to a monstrous size, a massive
>ball of pure brain power and armaments (including multiple layers
>of active nanoshields or whatever else it thinks of). A deathstar.
>Given the current pace of space exploration & colonization, it should
>meet little resistance. The earth would be a sitting duck for a massive
>EMP/nuclear strike, for example.
Are you claiming it would be able to do these things as soon as it
was created, or are you arbitrarily mixing a different time period into
the discussion?
>> Driving people to reduce
>> their dependence on computers would probably ensure they are more
>> independent of the SI's area of expertise.
>
>One major area of a SI's expertise is wholesale slaughter, and it
Probably not.
>Of course there's no saying what tactic a SI would use, after all, it
>would be a lot more intelligent than all of us combined. But if even
>a simple human can think of crude tactics to do the trick, you can
>very well imagine that a SI wouldn't have a hard time doing it.
The fact that I can imagine it doesn't imply it is true. I suspect
the first SI's control over non-digital processes will be rather limited.
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list