Re: Hawking on AI dominance

From: Mike Lorrey (mlorrey@datamann.com)
Date: Sat Sep 08 2001 - 08:44:10 MDT


"J. R. Molloy" wrote:
>
> Incidentally, Kurzweil's last two sentences in his response to Stephen
> Hawking, to wit:
> "I don't agree with Hawking that "strong
> AI" is a fate to be avoided. I do believe that we have the ability to shape
> this destiny to reflect our human values, if only we could achieve a
> consensus on what those are."
> have prompted me to add "human values" to the list of useless hypotheses,
> because to the extent we shape "strong AI" to reflect a consensus of "human
> values," we thereby make it "weak AI."

I disagree, though the term 'human' is probably what's causing the
conceptual conflict. Try 'sentient values'. Just because an entity has
an IQ of 1000+ doesn't mean it has no values we would recognise.
Whether such an entity has values like 'trust', 'empathy', etc. doesn't
determine whether it is strong or weak AI, only whether the human race
will see it as malignant or benign.
