Re: ECON The Abolition Of Work

From: Daniel Fabulich (daniel.fabulich@yale.edu)
Date: Thu Apr 30 1998 - 12:18:18 MDT


On Thu, 30 Apr 1998, John K Clark wrote:

> I don't understand the significance of the distinction between sentient and
> non sentient robots made in this thread. We have non sentient robots now,
> (probably, I can't prove it) but they're also without any doubt dumb as dirt
> and that's exactly why they haven't changed the world yet. The subjective
> experience of the machine is of interest to it, but from our point of view
> the only important distinction is that between a smart robot and a stupid one.

Good point. This distinction is much clearer. It doesn't really affect any
of the relevant points, but it IS clearer.

> I also think that the question of whether humans should give rights to
> machines is moot, the question of whether machines will give rights to humans
> is not.

By this you seem to be implying that very smart robots WOULDN'T grant us
rights, and so, in order to protect our rights, we shouldn't try to build
them too smart.

It's true that smart robots might decide to rob us of our rights. I'd like
to think that they wouldn't, but that's really just a guess. However, since
we can only guess at this sort of thing, the underlying premise behind an
argument like this is that we shouldn't try to create new entities smarter
than ourselves, including trans/posthuman children, if there's a chance
that doing so will jeopardize our liberties.

If germline engineering turns out to be a good way to build a better
human, and that happens to mean that our children will be superior to us
in every way, does that mean we shouldn't have smarter children? After
all, it would be a moot question to ask whether we should grant THEM
rights... the question would be whether they would grant rights to us.



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:01 MST