bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>Peter C. McCluskey wrote:
>
>> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>> >I think that in a singularity scenario, the leading force will
>> >quickly be so much more advanced than its competitors that it will
>> >not really matter that the new war machines haven't been tested in
>> >(non-simulated) battle.
>
>> Do you have some reason to believe this? It's sufficiently different
>> from what military history suggests that it sounds like wishful thinking.
>
>The reason is that in the singularity scenario things are postulated
>to happen extremely fast. Loosely speaking, the pace of development
>"goes to infinity". That means that even if the leading power is only
>one year ahead of the competition, it will achieve an enormous
>technological advantage. It will have assemblers and
>superintelligence when its competitors lack these technologies. With

Speeding up all processes doesn't have any effect on the nature of
competition. In order for your argument to make any sense, you need
to claim something about the relative speeds of some phenomena, such
as claiming that the spread of new technologies will not speed up as
fast as their development. It looks to me like the forces that tend
to equalize availability of new technologies are speeding up faster
than fundamental innovations are.
>> >don't apply here if the subjugation could be done without bloodshed.
>> >If driven by ethical motives, A might choose not to do anything to B
>> >other than preventing B from building weapons that B could use to
>> >threaten A. Such action might not even look like an "attack" but more
>> >like a commitment to enforce non-proliferation of the new
>> >weapons-technology.)
>>
>> There are lots of people who will violently resist conquest. If your
>> singleton were merely enforcing a ban on something few people wanted
>> anyway (such as germ warfare), it might look peaceful to most. But your
>> goal of controlling interstellar colonization appears to require substantial restrictions
>> on travel (can't let them out of the region that the singleton controls
>> until they've been properly programmed).
>
>If the region that the singleton controls grows at near the speed of
>light, as I think it will, then I don't see how this would lead to
>substantial restrictions on travel.

Its control grows at the speed of light during the time when it is
establishing its monopoly? I can't imagine why you expect us to take
that possibility seriously. Waiting until near-lightspeed travel is
possible seems to virtually guarantee that it can't control everyone.
I thought one of the main advantages of the singleton was avoiding
the wasteful "burning the cosmos" strategy. It's hard for me to
imagine that near-lightspeed travel would ever be as efficient as travel
at, say, 0.5c. What would motivate the singleton to expand at maximum speed?
>Such institutions could still be useful. Say the UN is a singleton
>and it wants to commission a superintelligence that can see to it that
>its constitution is not violated. Various groups design different
>such systems. Then the trustworthy institutions are called upon to
>verify that the proposed designs would function as stated. Only if

The trustworthy institutions I had in mind stay that way by avoiding
taking stands on controversial ideas. Institutions that are interested
in taking such stands generally attract people whom I wouldn't trust.
Can you give an example of an institution that you would trust to
evaluate this software?
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list

Received on Thu May 21 15:51:23 1998