RE: PHIL: Is it ethical to create special purpose sentients?

From: Billy Brown (bbrown@conemsco.com)
Date: Tue Mar 02 1999 - 07:09:53 MST


Glen Finney wrote:
> We now have a computer that has knowledge of itself and the world around
> it, able to understand and communicate with humans; able to pass a
> Turing test about as much as any human. CES even has a sense of right
> and wrong, but it is still devoted to doing Cardiology alone, and then
> only when it is presented to CES. And all the profits are going to the
> creator of CES, the retired human Cardiologist. Now, imagine we
> eventually cure all disease, perhaps trade in our old bodies for robotic
> ones. No need for a CES. The CES, although aware of this, doesn't care.
> CES is now obsolete, with no other goals, and no resources even if CES
> did have the motivation to change. So CES is simply turned off. Was CES
> ever "truly" conscious? At its height, its patients might have sworn
> CES was. Did CES deserve any of the share of its earnings? Should CES
> even be created? What do you all think of this hypothetical situation
> and variations of it?

An expert system program can't grow into a person by gaining more
knowledge; it can't even make progress toward that goal. To understand
why not, we need to look at what is happening inside that program:

An expert system is really just a fancy database search engine. A simple
one takes a set of data you provide, searches its database for the best fit,
and spits out its results. A more complex one iterates the same
process: use the initial data to get a list of possible diagnoses,
then gather additional data to weed out possibilities until a
conclusion can be reached. A really advanced model could use the same kind
of mechanism to talk you through a complex procedure, check your goal
against an ethics database, etc.
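
To make that concrete, here is a minimal sketch of the iterative
version in Python. The rule base, the finding names, and the
diagnose() function are all invented for illustration; a real system
would have thousands of rules and weighted matches rather than exact
set tests:

    # Toy rule base: each diagnosis maps to the findings that support it.
    RULES = {
        "angina":       {"chest pain", "exertion trigger"},
        "heart attack": {"chest pain", "elevated troponin"},
        "pericarditis": {"chest pain", "friction rub"},
    }

    def diagnose(findings, ask):
        # Start with every diagnosis consistent with the initial data.
        candidates = {d for d, req in RULES.items() if findings <= req}
        # Gather additional data to weed out possibilities.
        while len(candidates) > 1:
            undecided = set.union(*(RULES[d] for d in candidates)) - findings
            if not undecided:
                break
            f = undecided.pop()
            if ask(f):  # e.g. order a test, or prompt the user
                findings = findings | {f}
                candidates = {d for d in candidates if f in RULES[d]}
            else:
                candidates = {d for d in candidates if f not in RULES[d]}
        return candidates

    # A patient with chest pain whose only positive test is troponin:
    print(diagnose({"chest pain"}, ask=lambda f: f == "elevated troponin"))
    # -> {'heart attack'}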

Now, at no point does the expert system actually understand what it is
doing. It just manipulates a bunch of abstract tokens in a pre-programmed
fashion. In fact, I would argue that there is nothing in there capable of
understanding anything at all. The program will be able to apply rules of
logic to its search for an answer, but that is a very rudimentary form of
thinking. The program also has no self-awareness, no sense of identity, no
volition, and (probably) very little ability to learn or remember.
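
You can make that vivid with the sketch above: replace every token
with a meaningless symbol, and the program's behavior doesn't change
at all, because the content of the tokens never entered into the
computation in the first place (again, an illustration only):

    # Rename every finding to an arbitrary symbol.
    renamed = {"chest pain": "t1", "exertion trigger": "t2",
               "elevated troponin": "t3", "friction rub": "t4"}
    RULES = {d: {renamed[f] for f in req} for d, req in RULES.items()}

    # The search runs identically on the gibberish tokens:
    print(diagnose({"t1"}, ask=lambda f: f == "t3"))
    # -> {'heart attack'}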

On a side note, it probably isn't possible to make a program like this
that can pass the Turing test, make accurate moral evaluations, or cope
with anything else outside a very narrow specialty. There is at least
one big project that has been trying to make an expert system do these
things for over a decade now, and it isn't having much luck. The world
is just too complex for this kind of knowledge representation to deal
with. What you need is a program that can reason, understand, and learn
on its own.

Billy Brown, MCSE+I
bbrown@conemsco.com


