COMP: Re: Profiting on tragedy? (was Humour)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Sun Dec 29 1996 - 03:19:48 MST


On Sat, 28 Dec 1996, Lee Daniel Crocker wrote:
> > [...]
>
> And I'm supposed to accept your wild speculations over mine? If that's
> what will happen when I hard-wire a robot not to kill me, then so be it.
> I leave those wires where they are. If, and only if, I can rationally
> convince myself--with solid reason, not analogy and extrapolation--that
> clipping those wires will be in my interest, will I consider it.

Surely, a Power will be slightly more complicated than a human being.
Surely, because of the necessary complexity/robustness, a Power will need
nonalgorithmic control. If so, then the whole concept of "hardwired
behaviour" is meaningless. Imagine you could control a human: whenever a
threat is detected, you disrupt his motorics so that he is frozen
immobile. But how does one detect a threat? There are a zillion
possibilities for how one human being might kill another by direct
violence, and even more for how it could be done by cunning. To catch
them all, the filter must become arbitrarily complicated, nay, even
sentient. Clearly, a filter that complex would itself become unreliable.
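To make the point concrete, here is a minimal sketch (in Python, with
made-up threat patterns and action strings, none of them from the
original argument) of such a hardwired filter: a fixed, enumerable list
of forbidden actions. Anything not on the list passes, which is exactly
the failure mode at issue.

    # Toy "hardwired interlock": freeze the motorics only when an action
    # matches a fixed, pre-enumerated set of threat patterns. The patterns
    # and actions below are illustrative placeholders, nothing more.
    KNOWN_THREATS = {"raise weapon", "strike human", "cut life support"}

    def interlock_triggers(action: str) -> bool:
        """Return True (freeze immobile) iff the action is a known threat."""
        return action in KNOWN_THREATS

    print(interlock_triggers("strike human"))          # True: overt violence is caught
    print(interlock_triggers("dose food with toxin"))  # False: a cunning threat slips through

Every cunning variant forces another entry onto the list; to catch them
all, the list grows without bound, and the matcher stops being a simple
lookup.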

ciao,
'gene


