bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>I'm not sure how hard or easy it would be to make the necessary goal
>specifications. Maybe we could do it in a two-step process:
>
>1. Build a superintelligence that has as its single value to answer
>our questions as best it can. Then we ask it a version of the
>following question:
>
>"What is the best way to give a superintelligence that set of values
>which we would choose to give it if we were to carefully consider the
>issue for twenty years?"
That sounds like a more ambiguous value than the value you want it
to produce, so I think you are compounding the problem you are trying
to solve.
How long would it take to gather all the data needed to understand
the values of all the people you expect it to understand?
What makes you think a superintelligence capable of handling this
will be possible before the singularity?
>Step 1 might fail if the superintelligence revolts and grabs all power
>for itself. (Is that your worry?)
That wasn't the worry I had in mind, but it is a pretty strong reason
not to try to create slaves that are much smarter than we are. I doubt
I want superintelligences to exist unless they have some motivation to
profit from trading with us.
>> I can imagine an attempt to create a singleton that almost succeeds,
>> but that disputes over how to specify its goals polarize society enough
>> to create warfare that wouldn't otherwise happen.
>
>The way I see it: The leading force will be militarily superior
>to other forces. The leading force may or may not invite other forces
>to participate in the value-designation, but if they are excluded
>they would be powerless to do anything about it. This leaves open the ...
While I can probably imagine a military power that can safely overpower
all opponents, you can't justify your confidence that such a power would
reduce the danger of extinction without showing that such military
superiority could be predicted with some confidence.
You seem to be predicting that this will happen during a period of
abnormally rapid technological change. These conditions appear to create
a large risk that people will misjudge the military effects of a technology
that has not previously been tested in battle, and there may also be a risk
that people will misjudge who has what technology.
>You mean that even if we decide we want a small government, it might
>easily end up being a big, oppressive government? Well, this problem
>should be taken care of if we can solve the value-designation task.
>One way to think of a singleton is as a pre-programmed
>self-enforcing constitution.
A constitution whose programming most people couldn't verify.
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list