Re: Thinking about the future...

From: Dan Clemmensen (dgc@shirenet.com)
Date: Wed Sep 04 1996 - 17:13:32 MDT


Eric Watt Forste wrote:
[SNIP]
>
> If the superintelligence is ambitious enough and powerful enough to be a
> true value pluralist, to find it boring and trivial to undertake any course
> of action other than the simultaneous execution of many different
> orthogonal projects in realtime (and I think this does describe the
> behavior we observe in many of the brightest human beings so far), then I
> don't think we'll have too much to fear from it. Perhaps I'm being an
> ostrich about this, but I'd love to hear some solid counterarguments to the
> position I'm taking.
>
My simplest example is an SI that deems that its goals are best served by a maximal increase in its computing capacity. I can think of many such goals. This SI may choose to convert all available resources into computing capacity, wiping out humanity as a trivial side effect.


