From: Billy Brown (ewbrownv@mindspring.com)
Date: Fri Jul 09 1999 - 08:49:32 MDT
Mark Phillips wrote:
> And as for the whole *ethical/legal* "status" of ultra-intelligent
> systems, this is surely a conceptual *frontier* where we'll have to evolve it as we
> go along (kinda like common law). My intuition though, goes for a kind of
> modified combination of "Asimov's Laws of Robotics" plus something closely
> resembling a robust set of liberal/civil libertarian protocols for humans
> vis-a-vis such systems.
It seems to me that you are suggesting we implant such "Laws" into
their code as moral coercions. We've argued over that before on this list,
and the summary position of my camp is as follows:
1) It won't work. You can only use this sort of mind control on something
that is less intelligent than you are.
2) It is immoral to try. What you are talking about is mind control, which
is a particularly pernicious form of slavery. These are *people* we are
talking about, not machines.
3) It is likely to be unnecessary. If they are really that smart, they will
know more about morality than we do. The best course of action is simply to
start them off with a firm knowledge of human morality, without trying to
prevent them from evolving their own (superior) versions as they learn.
Billy Brown, MCSE+I
ewbrownv@mindspring.com