Re: The SI That Ate Chicago (was: Thinking about the future...)

From: Dan Clemmensen (dgc@shirenet.com)
Date: Sat Aug 31 1996 - 10:00:59 MDT


Eric Watt Forste wrote:
>
> At 8:04 PM 8/30/96, Dan Clemmensen wrote:
> >The SI may not want to destroy humanity. Humanity may simply be
> >unworthy of consideration, and get destroyed as a trivial side effect
> >of some activity of the SI.
>
> I liked your point that the SI to worry about is the SI whose primary goal
> was further increase in intelligence. But consider that the human species,
> as a whole, is a repository of a significant amount of computational power.
> Just as most of us know to look something up on the Web before we invest
> a significant amount of effort into trying to figure it out for ourselves,
> from scratch, just using our brains and nothing else, the SI will probably
> be a lot more intelligent (in terms of being able to figure out solutions
> to problems) if it cooperates with the existing body of human researchers
> than if it eats them.
>
[SNIP of a lot of interesting stuff]

I agree that in its early stages, the SI will likely interact with
humans in the way you describe. The reason I use the term SI instead
of AI is that my hypothesis doesn't depend on the exact nature of the
SI. As I said, it's likely to start as some type of human-computer
collaboration and augment itself from that base. My feeling is that
the SI will rapidly reach a point at which it can re-derive knowledge
ab initio faster than it could get the same knowledge from a human.
Alternatively, the SI will almost certainly solve all of the
then-current human research problems, specifically including nanotech
and uploading. It can then offer to host an upload of any human who
wishes to upload. Once enough humans accept the offer, the useful
knowledge known only to "meat" humans will cease to be very
attractive.

Even if the SI has goals other than increasing its intelligence, most
such goals are more readily achieved by a more intelligent SI.
Increasing its intelligence is therefore likely to be an intermediate
goal in service of any of the other goals.

I've argued that the same is true for today's humanity. Probably the
fastest way for us to get to nanotech and uploading is to develop the
SI. IMO, the quickest way to develop this SI is to concentrate on
software tools and the visual representation of information, leading
to easier-to-use computers and to human-computer collaboration. This
will lead to the singularity within ten years.


