From: Brent Allsop (allsop@swttools.fc.hp.com)
Date: Tue Feb 03 1998 - 13:40:02 MST
DOUG.BAILEY@ey.com <extropians@extropy.com> polled:
> I would like to take a survey of opinions on this list regarding the
> Strong AI hypothesis.
I think we can already do it; we just need a lot more of the
same. For example, we can produce a machine that can tell us what
color something is far better than we can. Such machines are aware
of what color something is far better than we are. But this is all
missing the real point of what our consciousness is built out of. Our
knowledge of color is represented by phenomenal color qualia or
feelings or sensations. A "strong AI" device represents color with
abstract sets of bits. The fundamental quality of those
representations is, by design, irrelevant. But to us, the particular
fundamental quale or sensation we use to represent a particular color
(and what it is fundamentally like), which is our knowledge of what
color something is, is all-important. If you swap the hardware
representations of two particular colors in a "strong AI" machine,
there is no significant difference. But if we represent 700 nm light
with a green sensation rather than a red one, there is all the
difference in the world!
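The swap argument can be sketched in code. This is a toy illustration only; the encoding, names, and comparison function are my own assumptions, not anything from the post. The point it shows: if a machine's knowledge of color is nothing but an arbitrary abstract code, then consistently exchanging two codes leaves every comparison the machine can make unchanged.

```python
# Toy sketch (hypothetical): color "knowledge" as bare abstract codes.
# Wavelength labels and bit values are invented for illustration.
ORIGINAL = {"700nm": 0b01, "530nm": 0b10}   # red light, green light
SWAPPED  = {"700nm": 0b10, "530nm": 0b01}   # the two codes exchanged

def same_color(encoding, a, b):
    """All the machine can do with abstract codes is compare them."""
    return encoding[a] == encoding[b]

# Every comparison is invariant under the consistent swap:
stimuli = list(ORIGINAL)
for a in stimuli:
    for b in stimuli:
        assert same_color(ORIGINAL, a, b) == same_color(SWAPPED, a, b)
```

Nothing in the machine's behavior distinguishes the two encodings, which is the sense in which the swap makes "no significant difference" for an abstract representation, while (on the post's view) swapping a red sensation for a green one would change everything for us.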
I say yes, strong AI is possible, given an absurd amount of
added and unnecessary complexity. But a "strong AI" device isn't
"strong" enough, because it lacks our phenomenal ability to represent
knowledge in anything more than almost meaningless, non-unified
"abstract" ways.
Brent Allsop
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:48:33 MST