Re: Attacks (was Re: Why would AI want to be friendly?)

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Oct 01 2000 - 18:39:40 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > If that was what it was then I would also find it revolting. But I
> > think a more interesting set of possibilities might be present. Today
> > we kill or lock away for life those we consider irredeemably criminal.
> > Putting the person instead in a VR with karmic consequence mode turned
> > on would a) not involve the irreversible destruction of the individual;
> > b) give them a chance to learn and grow without harming other people.
>
> That is an unacceptable violation of individual freedoms. If someone *wants*
> to walk into an environment with karmic consequences, they have the right to
> do so. Nobody has the right to impose that on them. Once the Sysop Scenario
> is achieved - once "society" has the technological power to upload a criminal
> into a karmic environment - society no longer has any conceivable right to do
> so. The criminal is no longer a threat to anyone. There's no need to
> discourage future crimes.

Is it a more acceptable violation of individual freedom to kill someone
outright because they are dangerous to others? Is it more acceptable to
force people to change regardless of what sort of world they prefer to
live in? Is it more acceptable to impose an ultra-high-tech future on
those who are simply unable to deal with it? I can see that it is not
palatable to put people into a VR involuntarily. But that violation may
be smaller than the larger violations that would occur otherwise. It is
that possibility I want to raise.

How is the criminal not a threat to anyone? What if I don't want to be
fragged even if the friendly SI will resurrect me instantaneously?
There is plenty of need to discourage criminal abuses regardless of
whether there is an SI. Or do you want the human race to go totally
infantile, where nothing is real, nothing is at stake, and nothing can
really be changed at all?

>
> Pain is *never* an intrinsic good, no matter who it happens to! Certain
> people, by their actions, make themselves more "targetable" than others - if
> either a murderer or an innocent human must die, then it might as well be the
> murderer. Adolf Hitler, for example, is so targetable that we could shoot him
> on the million-to-one off-chance that it might save someone's life. But once
> there's no longer any need for *anyone* to suffer, then nobody is targetable.

I don't propose to "target" anyone. I am simply floating the idea that
the ultimate in freedom, with infinite room to grow, is to be able to
live within whatever world-constraints one most wishes and see what
that is like.

What do you think humans will do exactly once your sort of SI is a
reality?

- samantha
