Re: Nightline comments on AI

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Aug 20 2002 - 07:39:30 MDT


On Tue, 20 Aug 2002, Michael Wiik wrote:

> Anders Sandberg wrote:
> > If we think we need superhuman intelligence to survive those
> > technologies, why do we think we can survive developing superhuman
> > intelligence without any guide?
>
> My apologies, I had thought (from previous discussions back when I
> mostly lurked) that developing some sort of superintelligence was
> considered necessary to avoid nanowar or some such. I may have been
> mistaken, and I don't necessarily disagree with you, though I can
> understand the rationale of desiring such an intelligence prior to (for
> example) the availability of individual weapons of mass destruction.

Mike, an SI is one approach for people who tend to be pessimistic
about humans learning to manage their technologies. Another approach
would be a completely transparent society where anyone is able to
find out about anyone who purchases material that may be used to
create individual WMD. A third approach is one where you develop
the technologies in a safe way.

Take nano for example. One path would be to allow assemblers to
exist only in secure facilities (similar to plutonium manufacturing
or manipulation facilities now). Such facilities would only allow
the assembly of designs "generally accepted as safe" (e.g., designs
that could not be disassembled to create assemblers). These facilities
would crank out large quantities of nano-enabled materials. Everything
from semi-intelligent self-assembling sapphire beams for skyscrapers
to meal synthesizers that can feed off of the atmosphere and an energy
source and produce Lean Cuisine meals.

As an individual I get >99% of the benefits of nanotech without exposing
society to the risks of everyone on my block having general purpose
assemblers with self-replicating capabilities in their basements.
It doesn't take an SI to set up and police such a situation.

I have yet to see a reasonable argument that in such an environment
nanotechnology would be "dangerous," except in cases where
individuals want to use it to force other people to think the way
they do, or to eliminate people who do not. Provided we are
prepared to defend ourselves against such individuals, it is
difficult to see how nanotech becomes something to be feared.

How long do you think someone like Saddam would remain in power
if the U.S. were dropping 3000 sq. ft. nanotech-enabled luxury
homes (capable of replicating themselves based on instructions
beamed down from satellites, i.e., a "broadcast" architecture)
into the middle of the Iraqi desert, and Saddam resorted to
using his artillery to blow them up so that people would
stop migrating out of Baghdad and he could retain his power
structure? Why would you want to serve in Saddam's honor
guard when you could lie by your pool all day?

The entire way people think about politics shifts when we
eliminate our current conditions of scarcity.

Robert



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:16:16 MST