From: mark@unicorn.com
Date: Thu Sep 02 1999 - 11:03:55 MDT
Eliezer S. Yudkowsky [sentience@pobox.com] wrote:
>I'm not sure how you're defining "AI" here, but such a process certainly
>wouldn't be "intelligent". It would not be creative, self-modelling, or
>capable of representing general content. It wouldn't have goals or a
>world-model except in the same way a thermostat does. Deep Blue doesn't
>know that it knows.
Assuming that you take yourself to be "intelligent", how do you know that
you know that you know, and that you're not just a pre-programmed system which
has been programmed to claim that it knows that it knows? I'm not being
facetious here; I'm perfectly serious: most arguments against AI start by
claiming that humans have wonderful faculties which computers don't, without
offering any proof that humans actually have them.
Can you prove that you can do all those things you claim the computer
can't? If not, why should I accept the argument?
Mark