Re: fluffy funny or hungry beaver?

From: natashavita@earthlink.net
Date: Mon Jun 10 2002 - 17:33:35 MDT


Pssst .... as someone who hasn't followed this thread, I had to LOL at the subject line. Shows you where my mind was.

Natasha

Original Message:
-----------------
From: Hal Finney hal@finney.org
Date: Mon, 10 Jun 2002 15:31:38 -0700
To: extropians@extropy.org
Subject: Re: fluffy funny or hungry beaver?

--- Hal Finney <hal@finney.org> wrote:

> ... But in fact there is little evidence that AI is on a successful path.
> The recent spasm of publicity about the massively failing Cyc project
> just reminds us how far we are from a proven strategy which can lead to
> successful human-level AI, let alone super-human.

Jeff Davis replied:

> But wait... Maybe it's the same publicity. Maybe it's
> not in the publicity, but rather it's Hal's own
> assessment that the cyc project is a "massive
> failure". Perhaps cyc is a massive failure. What do
> I know? I'm not contending here, just seeking
> clarification. A modular and copyable database of
> common sense, while a bit less bells-and-whistles
> hi-tech code-savant glitzy, nevertheless seems like
> something which might be useful along the road to AI.

Yes, it is my opinion that Cyc has been a massive failure, and I was
responding to the publicity, which included an AP story published in the
L.A. Times,
http://www.latimes.com/business/la-000040734jun10.story
also carried at CNN,
http://www.cnn.com/2002/TECH/ptech/06/09/common.sense.computer.ap/index.html
and discussed on Slashdot,
http://slashdot.org/article.pl?sid=01/06/22/1214229.

The problem with Cyc is that it has consistently failed to meet its
goals and objectives from the time the project was begun in 1984. It has
been almost 20 years since then, twice the original projected timeline,
and Cyc has still not shown itself to be useful, let alone to actually
exhibit common sense.

I reviewed a couple of articles by Lenat in CACM, one from 1990 and one
from 1995. In the 1990 article, which he characterizes as a mid-term
report, he wrote:

"How are we to judge where we are going? How do we make sure that we do
not go through these 10 years of labor, only to learn in 1994 that the
assumptions upon which we based our efforts were fundamentally misguided
all along? We do this by getting others to actually use our system."

He then lists a number of joint efforts, including activities at NCR,
Bellcore, and Apple. "Academic collaborations include coupling with
large engineering knowledge bases..., large data bases..., standardizing
knowledge interchange formats..., axiomatizing human emotions...,
machine learning by analogy..., and qualitative physics reasoning in
the service of understanding children's stories." (I am eliding the
researchers' names.) None of those have been a success in the sense of
still being in use today.

In the 1995 article he adds some more potential uses:

- information retrieval customized to the user, such as web searches
- linking multiple external information sources, such as remote
  databases, news feeds, etc.
- dynamically updated data displays, where changing a name entry, for
  example, allows Cyc to infer other corresponding changes to make
- increasing the functional richness and power of word processors, for
  example a smart spell checker which understands the context of the
  sentence (a toy sketch of this idea follows the list)
- content checking, looking for inconsistencies or missing data in reports
- even fleshing out incomplete outlines into full documents(!)
- aiding simulations to be more realistic through common sense knowledge
- improving AI in role playing games
- aiding with natural language input
- improving the accuracy of speech recognition
- smart routing of email based on user models and partial understanding
  of the message
- even smart direct marketing
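
To make the word-processor example concrete, here is a toy sketch, in
Python, of what "spell checking with common sense" might look like. It is
purely hypothetical (invented names, not Cyc's CycL representation or any
real Cyc interface), but it shows the idea of using hand-coded world
knowledge, rather than spelling alone, to choose between candidate words:

    # Toy sketch, hypothetical names only; not Cyc's CycL language or any
    # real Cyc API. A hand-coded table of what kind of thing a verb's
    # object should be stands in for a common-sense knowledge base.
    verb_expects = {"ate": {"Food"}, "drove": {"Vehicle"}}
    category_of = {"steak": "Food", "stake": "WoodenPost", "car": "Vehicle"}

    def pick_word(verb, candidates):
        """Prefer the candidate whose category fits what the verb expects."""
        expected = verb_expects.get(verb, set())
        for word in candidates:
            if category_of.get(word) in expected:
                return word
        return candidates[0]  # no relevant knowledge: keep the first guess

    print(pick_word("ate", ["stake", "steak"]))  # prints "steak"

The catch, of course, is that those tables have to be filled in by hand
for essentially everything people know, which is exactly the part Cyc has
been working on since 1984.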

That was 7 years ago. None of this has come to pass, except possibly
as some kind of toy or pilot program. The recent article describes the
attempts to use Cyc at Lycos and, reading between the lines, it really
didn't work. "It showed promise" is what people say when they are being
polite about a failed project.

As late as around 1990, Lenat was still promising that Cyc would be able
to read and learn from natural language texts by 1995, so that it would
no longer be necessary to hand-feed it information. That goal is as
far off as ever.

In 1994 the Cyc team allowed a researcher to visit their lab, evaluate
the system, and write a public report. It was an embarrassing disaster.
Vaughan Pratt had prepared beforehand a list of common-sense questions
for Cyc, http://boole.stanford.edu/cycprobs.html, based on an
enthusiastic presentation by Lenat. The actuality was completely
different, http://boole.stanford.edu/cyc.html. Cyc could answer virtually
none of the questions on Pratt's list. Its knowledge was extremely spotty
at best; it did not have any real common sense, just a haphazard
collection of facts it happened to know. Since then the team has never
held another public evaluation like this one, as far as I know.
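
To see why a haphazard set of facts is not the same thing as common
sense, here is another toy sketch (again hypothetical Python, nothing
like Cyc's actual representation): every conclusion such a system can
draw depends on a human having typed in each link in the chain, and one
missing link silently ends the inference.

    # Toy sketch, hypothetical names: hand-entered class links plus one
    # trivial inference step. If a link was never entered, the chain of
    # reasoning simply stops, and the system cannot notice the gap.
    subclass_of = {"Dog": "Mammal", "Mammal": "Animal"}
    # Note: an "Animal" -> "LivingThing" link was never entered.
    asserted = {"Fido": "Dog"}

    def categories(name):
        """Follow subclass links upward from the asserted class of name."""
        found = []
        cls = asserted.get(name)
        while cls is not None:
            found.append(cls)
            cls = subclass_of.get(cls)  # a missing link ends the chain
        return found

    print(categories("Fido"))  # ['Dog', 'Mammal', 'Animal'], no 'LivingThing'

Pratt's questions were exactly the kind of probes that expose such gaps.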

I think a fair conclusion is that Cyc has failed to achieve virtually
every one of the objectives and goals it set for itself. Lenat has
continued to move the goalposts and is as optimistic as ever that success
is just around the corner. But by objective standards, the project has to
be considered a failure so far. And given the millions of dollars and
hundreds of man-years spent on what is probably the biggest AI project
ever (possibly excepting Deep Blue), it has to be counted as a massive
failure.

Hal



