From: Geoff Smith (geoffs@unixg.ubc.ca)
Date: Tue Jul 14 1998 - 14:08:03 MDT
On Sun, 12 Jul 1998 EvMick@aol.com wrote:
> In a message dated 7/12/98 2:08:45 PM Central Daylight Time, Verdop@gmx.net
> writes:
>
> > Is it allowed to discuss the Extropian principles on this list?
> > cu,
> > verdop
> > verdop@gmx.net
> >
> I dunno...but I kinda think the answer is "YES".
>
> The reason being that another list has been formed for the "serious"
> stuff...and this is now the "training wheels" list...where anything and
> everything is acceptable...
>
> I think...I could very well be wrong.
>
> But I for one would welcome discussion/debate about first principles..
Me too. I'll start...
I just scanned through the principles, searching for debate topics. I can
only find one issue worth discussing; it is the only problem I have ever
had with the principles, and it has been discussed before:
It is the conflict between Boundless Expansion && (Intelligent Technology
&& Spontaneous Order). The question is: Boundless Expansion for whom?
All it takes is an intelligent AI with a life goal of wiping out the
surface of our planet with a chain of nuclear explosions, and no one will
be Boundlessly Expanding anywhere. Is this AI not the perfect example of
IT and SO-- the product of technological progress, free-market access to
uranium, and a robo-hacker's unregulated tinkerings? How do you defeat a
robot with global kamikaze urges and nuclear know-how?
One argument I have heard is that defensive technology will always outrun
offensive technology (following the parallel of encryption/decryption).
Sounds nice... maybe "Star Wars" could fire its lasers at the
robo-nukes. Of course, "Star Wars" was not designed to handle a
robot that can readily obtain the components of a nuclear weapon, assemble
them in its Los Angeles apartment, and detonate the nuke right there.
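For what it's worth, the encryption parallel rests on a simple arithmetic
asymmetry: each bit added to a key costs the defender almost nothing, but
doubles a brute-force attacker's search. A minimal Python sketch, using a
toy cost model of my own rather than anything from the principles:

# Toy model of the defense/offense asymmetry in cryptography.
# The linear defender cost is an illustrative assumption; the
# exponential attacker cost is just the size of the key space.

def defender_cost(key_bits):
    """Work to encrypt grows roughly linearly with key length."""
    return key_bits

def attacker_cost(key_bits):
    """A brute-force attacker may have to try all 2**key_bits keys."""
    return 2 ** key_bits

for bits in (40, 56, 128):
    print(f"{bits}-bit key: defender ~{defender_cost(bits)} units, "
          f"attacker up to {attacker_cost(bits):.2e} key trials")

The trouble is that the parallel only holds where defense scales better
than offense; a nuke in an apartment has no analogous key to lengthen.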
I think there are two questions that come out of this paranoid line of
reasoning:
1. Is this scenario any more likely than getting struck by lightning?
2. If (yes), what can we do to prevent it?
My answers at the moment:
1. Yes; thunderstorms are rare in Vancouver, so almost anything beats the
odds of a lightning strike here.
2. Get out of this gravity well as fast as we can.
My answers follow the Extropian principles, but they're not satisfying.
I just don't buy the common assumption that AIs will be looking out for
our best interests at all times. How can we regulate them without
regulating human beings, while still maintaining our
prized "spontaneous order"?
Geoff.