Re: Profiting on tragedy? (was Humour)

From: Lee Daniel Crocker (lcrocker@calweb.com)
Date: Sat Dec 28 1996 - 21:38:18 MST


> > If one buys Rand's contention that normative philosophy (ethics,
> > politics) can be rationally derived from objective reality, then we
> > can assume that very intelligent robots will reason their way into
> > benevolence toward humans. I, for one, am not convinced of Rand's
> > claim in this regard, so I would wish to have explicit moral codes
> > built into any intelligent technology that could not be overridden
> > except by their human creators. If such intelligences could reason
> > their way toward better moral codes, they would still have to
> > convince us humans, with human reason, to build them.
>
> As I explained in an earlier post, the ethicality of the Powers depends
> on their ability to override their emotions. What you are proposing is
> taking a single goal, the protection of humans, and doing our best to
> make it "unerasable". Any such attempt would interfere with whatever
> ethical systems the Power would otherwise impose upon itself. It would
> decrease the Power's emotional maturity and stability. You might wind
> up with a "Kimball Kinnison" complex; a creature with the mind of a god
> and the emotional maturity of a flatworm.

And I'm supposed to accept your wild speculations over mine? If that's
what happens when I hard-wire a robot not to kill me, then so be it:
I'll leave those wires where they are. Only if I can rationally convince
myself--with solid reason, not analogy or extrapolation--that clipping
those wires is in my interest will I consider it.


