From: Max More (maxmore@primenet.com)
Date: Mon Aug 26 1996 - 11:38:50 MDT
At 02:24 PM 8/26/96 +0200, Anders Sandberg wrote:
>
> I think it would be unlikely that we create successors
>that out-compete us, most likely they will inhabit a somewhat different
>ecological/memetic niche that will overlap with ours; competition a [...]
You make good points, Anders, about humans and nanite-AIs possibly occupying
different niches. However, there may be a period during which we're very
much in the same space. That's the period in which humans could be at risk
if AIs/SIs have no regard for our interests. What I'm thinking is that it's
possible, even likely, that SI will be developed before really excellent
robotics. AIs, in that case, would not be roaming around much physically, but
they could exist in distributed form in the same computer networks that we
use for all kinds of functions crucial to us.
If they still need us to act in the physical world, we would retain a strong
position. Nevertheless, powerful SIs in the computer networks could exert
massive extortionary power if they were so inclined. So I still think it
important that SI researchers pay attention to the question of what values and
motivations are built into SIs.
Upward and Outward!
Max
Max More, Ph.D.
maxmore@primenet.com
http://www.primenet.com/~maxmore
President: Extropy Institute (ExI)
Editor: Extropy
310-398-0375
http://www.primenet.com/~maxmore/extropy.htm