From: E. Shaun Russell (e_shaun@uniserve.com)
Date: Mon Oct 21 1996 - 13:03:38 MDT
For the past few weeks, there has been a considerable amount of
AI-based discussion on both the transhuman and extropian lists. During
these weeks, I have pondered whether or not the inclusion of AIs in
society is a good thing. Well, I think I've come to a conclusion, but I'm
afraid it will be a little drawn out. Bear with me, if you please.
Recently, Lyle Burkhead brought to light an AI testing procedure
called 'The Turing Test'. Within his initial thread, he mentioned (rather
offhandedly) that there was --and is-- at least one AI
subscribed to the extropian mailing list. This started a flurry of
conversation, including accusations and false admissions as well as some
cynicism. Obviously, this 'revelation' shocked a lot of people. I thought
about *why* it upset people, and I found the answer. Having an
AI on the list would probably be accepted if people knew who the AI was. The
problem is that list members have been led to believe the AI is human, and
many are dismayed by the 'Turing Test's' deceptive qualities. What the 'test'
entails is that the AI must be a wolf in sheep's clothing --an AI with human
rationale.
When one takes a look at all the species on this planet (and I'm
sure throughout the universe as well), he or she will be able to distinguish
one species from another by more than just the physical differences. Take a
look at the mayfly: it lives for about 24 hours. Its sole purpose is to
reproduce...it is even born without mandibles. Take a look at a human:
we live for about 75 years on average, and we have multiple purposes; a
purpose for each minute of our lives. Now a mayfly isn't expected to
emulate human qualities, nor is a human expected to emulate a mayfly's
qualities (though for some...:-)).
The whole reason is that we have different cultures.
Back to my main topic. Every manufacturer of AIs that I know of has
one intention for their brainchild (pun intended): for it to be as smart as
a human. Not only as smart, but to have virtually all human
characteristics. *This* is the reason I am slightly dismayed to see an AI
being disguised as a human on the extropian list. I realize that human
hands and human minds create the AI, *but* I think that an AI is denied its
own culture when it is programmed to imitate humans. Until an AI can have
its own 'lifestyle', it is merely --as its name says-- artificial.
So far, we're stuck in a bit of a rut. Today's AIs have no
individuality; they have their programmer's individuality. AIs have no
culture; they are expected to fill their programmer's culture. They have
guidelines that they are 'programmed' to abide by. (I could also make the same
argument for most humans, but that is a whole different topic!) Let's go
back to the 'Turing Test' idea. If there *is* an AI on the extropian ML,
it's been doing a good job of emulating humans, but a very bad job of
maintaining its own culture. The test just gives motivation for
'cloning', and in my opinion, for creating a lesser being.
All in all, I agree with the concept of AIs. I truly do. But until
they can sustain their own culture, until they can have at least some
semblance of individuality and/or originality, I have no use for AIs. I
*am* optimistic that the ideal (to me) AI can and will be created within a
few decades, when computers can have their own little conversations with
each other in their native tongue. Maybe it sounds a bit silly, but
hey...why not? Anything can happen.
Ingredi Externus!
-E. Shaun Russell
_____________________________________________________________________________
E. Shaun Russell Extropian poet\musician
e_shaun@uniserve.com
_____________________________________________________________________________