From: Dan Clemmensen (dgc@shirenet.com)
Date: Fri Aug 30 1996 - 21:04:09 MDT
QueeneMUSE@aol.com wrote:
>
> Anders Sandberg wrote:
>
> > > I think it would be unlikely that we create successors
> > > that out-compete us, most likely they will inhabit a somewhat different
> > > ecological/memetic niche that will overlap with ours; competition a
>
> [to which Max More wrote:]
> > You make good points, Anders, about humans and nanite-AI's having possibly
> > different niches. However, there may be a period during which we're very
> > much in the same space. That's the period in which humans could be at risk
> > if AI/SIs have no regard for our interests. What I'm thinking is that it's
> > possible, even likely, that SI will be developed before really excellent
> > robotics. AI's in that case would not be roaming around much physically, but
> > they could exist in distributed form in the same computer networks that we
> > use for all kinds of functions crucial to us.
> > If they need us for doing things physically, we would still have a strong
> > position. Nevertheless, powerful SI's in the computer networks could exert
> > massive extortionary power, if they were so inclined. So I still think it
> > important that SI researchers pay attention to issues of what values and
> > motivations are built into SIs.
>
> Ah yes, the programmers' (as well as the programmed AIs') motivations could
> be really useful or highly destructive! A theme for many a well-loved horror
> tale, indeed! ...or a solution to much strife on our world.
> Re: Values: I am curious - we talk about the AI's replacing, destroying
> or overcoming humans or >H's: Realistically - what would AI's "needs" be?
> Would it have needs? - or, more precisely, would they perceive the concept of
> needs as we do, not being subject to the fight-or-flight domain we have to
> negotiate? We need food, nurturing, clothing, shelter, etc. What AI conditions
> correspond to that? If we (through mimicry of intelligence as we know it)
> create them as similar to primate intelligence, then (?) reproduction
> /expansion - but if NN intelligences program themselves, how could we predict
> what the agenda will be? As Max says here, they could exert massive power. Do
> we assume they would inherently take our values and expand or pervert them -
> an allegiance to their "creators"? Somehow I don't see that, as inviting as
> it sounds.
It's likely that, if we can produce an SI, we can produce many SIs.
However, my belief is that there is really only one relevant SI: an SI whose
motivation is to become more intelligent. This SI is the important one, because
it is the one with a built-in positive feedback mechanism. I also believe that
this motivation is very likely to be a basic part of the first SI, almost by
definition. The creator(s) of the first SI are likely to have this motivation
themselves. Otherwise, why create an SI? Further, the SI may be a
computer-augmented human or some other type of human-computer collaboration,
in which case the SI is likely to include its creator, who surely has this
motivation.
>
> Even if they could "use" us for manual labor - and what would we produce for
> them?
More intelligence. We will be useful until the SI has direct control of the
manufacturing and connection of additional computational capability. The SI
will be able to "use" us just as we "use" each other: by contracting for
services, either by letter or over the telephone. An SI embedded in the
internet will have no difficulty arranging for valid credit card numbers, bank
accounts, etc. I believe that an SI will be able to design and build whatever
automated tools it needs in just this way, in a matter of a few days or weeks.
Once the tools are available, it will no longer need humans to provide these
services. At this point utility is no longer a reason for the SI to preserve
humanity. However, I hope the SI will derive some other reason, using its
superior intelligence.
>
> In essence, what would they want to destroy us *for*? Comparatively
> aesthetic messiness?
The SI may not want to destroy humanity. Humanity may simply be unworthy of
consideration, and get destroyed as a trivial side effect of some activity of
the SI. A simple example: the SI decides to maximize its computational speed
by increasing its mass and converting the mass into computational units. It
does this by gathering all the mass in the solar system (mostly the sun's
mass). This is likely to be unhealthy for the humans. The SI may then decide
to increase its computational speed by increasing its density, and convert all
that mass into a neutron star. This is likely to be even more unhealthy.
>
> [PS Anders your post made me want to draw transhuman wet/dry multi-sided
> organelles, but then that gave me Borg images again...]