From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Sat Dec 28 1996 - 08:39:06 MST
On Fri, 27 Dec 1996, Lee Daniel Crocker wrote:
> [...]
>
> If one buys Rand's contention that normative philosophy (ethics,
> politics) can be rationally derived from objective reality, then we
> can assume that very intelligent robots will reason their way into
> benevolence toward humans. I, for one, am not convinced of Rand's
> claim in this regard, so I would wish to have explicit moral codes
> built into any intelligent technology that could not be overridden
This would assume AI can be achieved with procedural systems, which I
don't think is possible. Asimov's Laws of Robotics must remain a fiction,
alas. The real world is much too fuzzy to be safely contained by simple rules.
> except by their human creators. If such intelligences could reason
> their way toward better moral codes, they would still have to
> convince us humans, with human reason, to build them.
>
ciao,
'gene
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:35:56 MST