RE: Consciousness and its vehicles

From: altamira (altamira@ecpi.com)
Date: Sat May 13 2000 - 09:33:59 MDT


Michael LaTorra wrote: "But it may prove impossible to keep a leash on
AI's."

Why would a person WANT to keep a leash on AI's? Would it be rational for
the less intelligent to control the more intelligent? Could a superhuman
intelligence (whatever form it might take) function if it were fettered?

Suppose the first AI is created with a meta-constraint to do no harm to any
human and to build this same constraint into every succeeding AI that the
first AI might devise. Without even considering generations of AI's beyond
F1 and F2, I can see major problems that would arise if humans attempted to
control them. For example, the AI's, being more intelligent than the
humans, might be better positioned to predict when harm would result from
a course of action a human wanted to pursue. Thus, if a human demands that
the AI perform a service that the AI judges to be ultimately harmful to the
human, the AI would be bound by the meta-constraint to refuse. But since
the human is acting in what he considers to be his own best interest, the
AI's refusal to follow his orders would be interpreted as mutiny (or at
least malfunction).
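
To make the dilemma concrete, here is a minimal sketch in Python of that
refusal logic, under the assumption that the AI can score the predicted
harm of a request; every name in it is hypothetical, invented purely for
illustration:

from dataclasses import dataclass

@dataclass
class Request:
    description: str
    predicted_harm: float  # the AI's estimate of harm to the human, 0.0 to 1.0

HARM_THRESHOLD = 0.0  # "do no harm" leaves no tolerance for any predicted harm

def handle(request: Request) -> str:
    # The meta-constraint: refuse any order predicted to harm a human,
    # even when the human issuing it believes it serves his own interest.
    if request.predicted_harm > HARM_THRESHOLD:
        return "REFUSED: " + request.description
    return "EXECUTED: " + request.description

# The human, lacking the AI's predictive ability, sees only the refusal
# and reads it as mutiny or malfunction:
print(handle(Request("test-fly my experimental aircraft", predicted_harm=0.8)))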

I must confess I've read only one of Isaac Asimov's robot books, and that
was so long ago that I've forgotten the title. He may well cover these
sorts of issues in those books.

Bonnie


