From: Davin Enigl (enigl@earthlink.net)
Date: Thu Jun 28 2001 - 11:51:14 MDT
Brent writes:
----- Original Message -----
From: Brent Allsop
Sent: 6/27/01 8:59:41 PM
Subject: Question for Ray Kurzweil
Let me ask you this: If an AI you were designing was asked:
"What is the taste of salt like?" how should it answer? Would it be
lying if it said it knew what the taste of salt was like?
Yes, I see what you mean. It even goes deeper, philosophically (e.g.,
qualia). Can any one of us be sure we are describing our taste to another
human? Just try to describe what a loquat tastes like, or even salt. It seems
like an unfair question to put to a human or to an AI. Reverse the question: the AI
asks me, "What is the water activity of a 12% aqueous sodium chloride salt
solution like?" I can run over to my water activity meter, read off the value
it gives, and tell you the relative humidity it equates to and what foods can be
preserved by adding the salt, but I couldn't really answer the qualia question
you are asking. If humans can't answer it, why should AIs be expected to? I
would expect an AI to answer 1) as well as I can, 2) faster, and 3) without
leaving out aspects that I forgot or did not sense/know. So in 2) and 3), the
AI answers better than I can. I think that is all I expect from an AI.
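For what it's worth, the meter's kind of answer can even be roughed out on
paper. Here is a minimal Python sketch, assuming ideal Raoult's-law behavior
and full dissociation of the NaCl (the function name and constants are my own,
and real brines are non-ideal, so an actual meter would read a bit differently):

# Rough estimate of water activity (a_w) for a 12% w/w NaCl solution,
# using Raoult's law: a_w = mole fraction of water, with NaCl counted
# as two ions (van 't Hoff factor i = 2). A back-of-the-envelope
# sketch, not a validated model.

M_NACL = 58.44    # g/mol, sodium chloride
M_WATER = 18.015  # g/mol, water
I_VANT_HOFF = 2   # NaCl -> Na+ + Cl-

def water_activity(percent_nacl):
    grams_salt = percent_nacl           # per 100 g of solution
    grams_water = 100.0 - percent_nacl
    mol_ions = I_VANT_HOFF * grams_salt / M_NACL
    mol_water = grams_water / M_WATER
    return mol_water / (mol_water + mol_ions)  # mole fraction of water

aw = water_activity(12.0)
print("a_w ~= %.3f (equates to ~%.0f%% relative humidity)" % (aw, aw * 100))
# prints: a_w ~= 0.922 (equates to ~92% relative humidity)

The ~0.92 it prints is in the right neighborhood for a 12% brine, and that is
the objective half of the answer; the qualia half is the part neither of us
can print out.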
(Brent said: ". . . This will be to finally
include the search for, discovery, and classification of whatever
natural physical processes there are that have or produce the
"phenomenal properties" of the "qualia" which our brain uses to
represent conscious information.")
I saw a speculation, in Karl Popper's Knowledge and the Body-Mind Problem
(pp. 123-124), that RNA (or some other chemical reaction) might do that. I
will look at your draft article, too.
-- Davin