From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 02 2001 - 10:46:28 MDT
Chris & Jessie McKinstry wrote:
>
> I just joined this list after Amara Angelica from KurzweilAI pointed out
> that there was some talk about GAC in this group. I've looked at some of the
> posts and would like to make some comments:
Hello, Chris, welcome on board. If my comment was one of the ones you
read, well, I hope you weren't too offended, but I stand by all of it.
"Basically friendly toward people, unremittingly harsh toward ideas" is my
motto.
> 1 - GAC is a black box. I have made no disclosures on what it uses for
> pattern matching (not that it should matter if you place any value in the
> Turing Test which bars knowledge of the internals of a system.) But, I am
> doing experiments now with SOMs and SRNs.
(SOMs and SRNs: Self-Organizing Maps and Simple Recurrent Networks.)
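For anyone on the list unfamiliar with these: a SOM is about as simple as
learning algorithms get. Here is a minimal sketch of the update rule in
Python; every name in it is mine and purely illustrative, and it implies
nothing about GAC's actual internals.

    import numpy as np

    def train_som(data, grid_h=8, grid_w=8, epochs=100,
                  lr0=0.5, sigma0=3.0, seed=0):
        """Minimal Self-Organizing Map: a grid of weight vectors is
        pulled toward each input, with the winner's neighbors pulled
        proportionally less."""
        rng = np.random.default_rng(seed)
        weights = rng.random((grid_h, grid_w, data.shape[1]))
        ys, xs = np.mgrid[0:grid_h, 0:grid_w]       # node coordinates
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)              # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
            for x in data:
                # Best-matching unit: the node closest to the input.
                dist = np.linalg.norm(weights - x, axis=2)
                by, bx = np.unravel_index(np.argmin(dist), dist.shape)
                # Gaussian neighborhood centered on the winner.
                g = np.exp(-((ys - by)**2 + (xs - bx)**2) / (2 * sigma**2))
                weights += lr * g[..., None] * (x - weights)
        return weights

Calling train_som(np.random.rand(200, 3)), for instance, organizes random
RGB triples onto the 8x8 grid. The whole algorithm fits in twenty lines,
which is rather my point about internal complexity.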
The fact that GAC is a black box is not encouraging, at least to me. In
the continuing fight between the group that believes consciousness to be
fundamentally a simple algorithm, and the group that thinks you have to
build and build and build before the AI has enough internal complexity to
represent the single thought "Hello, world", I'm with the latter group.
In fact, I would identify my stance with Tooby-and-Cosmides and modern
functional neuroanatomy rather than any of the traditional AI sources.
Anyway, if GAC is a black box, it says to me that you think a simple
algorithm lies at the core; if you're playing with SOMs and SRNs, it says
to me that the internal functional complexity of GAC is almost nil.
> 2 - The primary purpose of GAC is to build a fitness test for humanness in a
> binary response domain. This will in the future allow GAC to babysit a truly
> evolving artificial consciousness, rewarding and punishing it as needed at
> machine speeds.
That certainly isn't what it says on your Website. On your Website, it
says things along the lines of: GAC! The new revolution in AI! The
first step towards true artificial consciousness! We're teaching it what
it means to be human!
I judge GAC according to those claims. (I have no objection to someone
claiming a revolution in AI, of course; it's a legitimate claim that can
be legitimately argued. It's just that in this case, I think the claim
turns out to be totally, totally wrong.)
> 2.5 - The key to evolving anything is the fitness test. If I want to evolve
> a picture of the Mona Lisa, then I need a fitness test for the Mona Lisa. A
> good fitness test for the Mona Lisa would be a copy of an image of the Mona
> Lisa. To rate the quality of an evolving picture, I would just need to
> compare pixels. The next best thing to using the full image would be a
> random sample of pixels; the larger the sample, the better the evolved
> copy will be. Right now, GAC is a 50,000+ term fitness test for humanness.
> At each one of those points GAC knows what it should expect if it were testing an
> average human, because for each one of those points GAC has made at least 20
> measurements of real people.
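The analogy is easy to make concrete. A minimal sketch of both versions of
such a fitness test, full-image and sampled, in Python; the function names
are mine and purely illustrative:

    import numpy as np

    def fitness_full(candidate, target):
        """Score an evolving image against the full target:
        negative mean absolute pixel error (higher is fitter)."""
        return -np.abs(candidate.astype(float) - target.astype(float)).mean()

    def fitness_sampled(candidate, target, n_samples=50000, seed=0):
        """The same test scored on a fixed random sample of pixels,
        analogous to GAC's 50,000+ measured points."""
        rng = np.random.default_rng(seed)
        c, t = candidate.reshape(-1), target.reshape(-1)
        idx = rng.choice(t.size, size=min(n_samples, t.size), replace=False)
        return -np.abs(c[idx].astype(float) - t[idx].astype(float)).mean()

Note that either version can only ever pull the population toward the one
fixed target, which is exactly my objection below.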
I disagree with this whole approach to AI, of course, since GAC is mostly
a useful fitness test for Cyc, and I think that Cyc is also on the wrong
tack. In fact, I might even go farther than that, and state that, like a
static image of the Mona Lisa, GAC is chiefly useful for evolving another
copy of GAC; you can't use it to evolve Leonardo da Vinci or even good
pictures in general. Regardless, my chief objection is that you are not
selling a "fitness test", you are selling a "generic artificial
consciousness".
> 3 - Any contradictions in GAC are real contradictions in us. It can't
> believe anything that hasn't been confirmed by at least 20 people.
Okay. You do realize that two groups of 20 people can quite often have
inter-person disagreements that would never be represented within a single
mind? There are rather more than 20 people who would confirm the
statements "Christianity is the one true religion", "Judaism is the one
true religion", "Hinduism is the one true religion", and "Zoroastrianism
is the one true religion", but very few people who would believe all of
these things simultaneously. But, what the heck, I'm being pedantic; I
agree with (3) as stated.
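To put the pedantry in mechanical terms: every item can clear a
twenty-person bar while the aggregate "believes" something no one person
does. A toy illustration in Python, with invented numbers:

    claims = ["Christianity is the one true religion",
              "Judaism is the one true religion",
              "Hinduism is the one true religion",
              "Zoroastrianism is the one true religion"]
    # Four disjoint groups of 25 respondents, one group per claim.
    confirmers = {c: set(range(i * 25, (i + 1) * 25))
                  for i, c in enumerate(claims)}

    # Each claim individually clears the 20-person bar...
    assert all(len(s) >= 20 for s in confirmers.values())

    # ...yet not one respondent confirms them all.
    print(len(set.intersection(*confirmers.values())))   # 0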
> 4 - GAC is science. Over 8 million actual measurements of human consensus
> have been made. There are at least two other projects that claim to be
> collecting human consensus information - CYC and Open Mind - neither has
> actually done the science to verify that what is in their databases is
> actually consensus human fact. It's all hearsay until each item is
> presented to at least 20 people (central limit theorem.)
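Before replying point by point, a side note on the statistics being
invoked: with only 20 yes/no answers per item, the central limit theorem
buys you a rather wide confidence interval on the consensus. A minimal
sketch of the arithmetic in Python, purely illustrative:

    import math

    def consensus_estimate(yes_votes, n=20):
        """Estimated consensus proportion for one item from n binary
        responses, with the normal-approximation standard error that
        the appeal to the central limit theorem implies."""
        p = yes_votes / n
        se = math.sqrt(p * (1 - p) / n)   # breaks down near p = 0 or 1
        return p, se

    p, se = consensus_estimate(17)   # 17 of 20 respondents said "true"
    print(f"p = {p:.2f} +/- {1.96 * se:.2f}")   # ~0.85 +/- 0.16 at 95%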
4.1: I'm not fond of Cyc either. But Cyc isn't claiming to collect human
consensus information; rather, they are claiming to collect the
commonsense knowledge that one human might be expected to have. I think
Cyc has nothing but a bunch of suggestively named LISP predicates. If
they *were* collecting knowledge, however, what would be relevant would
not be whether the knowledge was true, or whether it was consensual, but
whether it duplicated the relevant functional complexity of the
commonsense knowledge possessed by a single human mind.
4.2: Performing lots and lots of actual measurements does not make it
science. To make it science, you need to come up with a central
hypothesis about AI or human cognition, use the hypothesis to make a
prediction, and use those lots and lots of measurements to test that
prediction. Analogously, I would also note that until GAC can use its
pixels to predict new pixels, it is not "AI" in even the smallest sense;
it remains a frozen picture, possibly useful as a fitness test for some
other AI (I disagree), but not intelligent in itself; as unthinking as the
binary JPEG data of the Mona Lisa.
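To make "predict new pixels" concrete: the test is plain held-out
generalization, i.e., whether a system trained on the measured items can
predict the consensus answer for items it has never seen. A minimal sketch
in Python; the nearest-neighbor model and the toy data are mine, a
stand-in, not anything GAC is claimed to do:

    def similarity(a, b):
        """Jaccard overlap between the word sets of two statements."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    def predict(statement, training):
        """Predict True/False for an unseen statement by copying the
        label of the most word-similar training statement."""
        best = max(training, key=lambda item: similarity(statement, item[0]))
        return best[1]

    training = [("water is wet", True),
                ("fire is cold", False),
                ("the sky is blue", True),
                ("rocks are alive", False)]

    print(predict("ice is cold", training))
    # False -- the nearest match by word overlap is "fire is cold";
    # surface similarity is a poor proxy for meaning, which is rather
    # the point.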
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence