From: Dan Clemmensen (dgc@shirenet.com)
Date: Tue Sep 10 1996 - 17:39:43 MDT
Anders Sandberg wrote:
>
> I have noted something interesting in this thread: we seem to assume
> that SI appears out of nowhere, in a vacuum. The archetypal scenario is the
> nanoworkstation at MIT that transcends during the night and takes over the
> world.
>
> In reality, it is very unlikely that we will get SI before mere HI, or
> HI before subhuman intelligence. When the techniques for creating better
> and better minds appear, they will lead to a succession of better and
> better minds. This will also lead to a better and better understanding of
> the problems and risks of intelligence engineering; unless the growth is
> very fast and uncontrolled, we will know about some of the dangers and
> possibilities.
>
> If we create SI, we will most likely already have plenty of HI and >HI,
> so we will know what to expect. It should be noted that even SIs have
> limitations, and an SI won't be a single giant among Lilliputians - there
> will be many human-sized beings to deal with, and armies of dog-sized
> beings, and trillions of Lilliputians...
>
Your scenario may be plausible, but I feel that my scenario is more
likely: the initial SI (for example, an experimenter together with a
workstation and a bunch of software) is capable of rapid
self-augmentation. Since the experimenter and the experiment are likely
to be oriented toward developing an SI, the self-augmentation is likely
to result in rapid intelligence gain. Your sub-human SIs are presumably
computer-only AIs, lacking a human component. I don't see an AI as the
likely SI: a human-computer collaboration starts at human level, so it
need not pass through the sub-human stages you describe.