Nick Bostrom wrote:
> Does this mean you think that no animals are sentient? Sounds
> implausible to me.
No, it just means I'm trying to avoid starting an argument about animal sentience by phrasing the issue narrowly.
> No, at least that is not what I am proposing. Let it be able to think
> about morality. Let it also be able to change its fundamental values.
> If I am right then that won't matter, because it will not *want* to
> change them. (I'm almost tempted to define a "fundamental value" as: a
> preference that you would not want to change.) What I am suggesting is
> that any SI we build has respect for human rights as a fundamental
> value. As long as we make sure it has that value, then we need have
> nothing to fear. It will go about its business and perhaps transform
> itself into a power beyond all human understanding, but it would not
> harm us humans, because it would not want to harm us. Maybe speaking
> of an "inviolable moral code as a core element of its programming"
> conjures up the wrong connotations -- as if some form of coercion
> were going on. I see it simply as selecting one type of value
> (human-friendly) rather than another (indifferent or hostile).
Ah. I see. I think it's the word "inviolable" that does it.
Billy Brown, MCSE+I
bbrown@conemsco.com