From: Robin Hanson (rhanson@gmu.edu)
Date: Thu Jun 21 2001 - 17:39:17 MDT
hal@finney.org wrote:
> > > In July 1998, the Pentagon put Cyc and a dozen other AI
> > >systems through their analytical paces, giving each team a package of 300
> > >pages of abstruse data to program in their systems and following up with a
> > >series of complicated strategic queries. Cyc scored better than all the
> > >other systems put together, ...
>
> Actually I think this competition result is consistent with what Pratt
> saw. CYC does a very good job at putting together facts if it knows them.
> As long as the "ontology" (the set of concepts and words that is input to
> the system) represents the area in question, it can find inconsistencies
> and cross correlate different data bases. It apparently excels at that
> and it sounds like this is what the Pentagon test required. Input 300
> pages of data and then query the system on that data. This is exactly
> what CYC (and, equally importantly, its trainers) have been practicing
> for 20 years now.
>
> But that's not really a test of common sense. ...

This isn't clear to me. Common sense might be just what you need to
create a new ontology for a new area.

> Would a three-fold improvement of coverage have made a qualitative
> difference in what Pratt saw? I am skeptical. CYC's coverage seemed
> less than 1/3 of what Pratt had been led to expect by Lenat's optimistic
> reports. He had a list of questions prepared and they didn't even get
> into it, really, the questions being obviously far beyond what could
> be expected.

I agree. Even now, Cyc must still be far below Pratt's hopes. And it may
be far below whatever impression the "hype" creates. But it may still be
a remarkable achievement.

> Even if it is farther along, the relevant question is whether it is far
> enough along to be useful at applying common-sense knowledge.

Depends on what the application test is.

> > If Cyc
> > does eventually succeed, all the other AI researchers should be called
> > to account for what they were doing instead of helping to improve Cyc.
>
> Well, you can hardly blame researchers for trying many paths to the
> truth, or criticize them when one of the paths works out much better
> than the others. If we always knew in advance which project would work
> we wouldn't need to do science in the way we do.

I think this too easily lets everyone off the hook. Idea futures would
be a better way of holding people accountable, I suppose.
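
To make the idea futures suggestion concrete, here is a minimal sketch
(in Python, with a made-up claim, trader names, and stakes) of how a
simple binary claim market might hold a forecaster accountable: you buy
coupons on one side of a concrete, dated, judgeable claim, and only the
winning side pays when the claim is settled.

    # Minimal sketch of settling a binary "idea futures" claim.
    # The claim wording, trader names, and stakes are illustrative only.

    class Claim:
        def __init__(self, statement):
            self.statement = statement
            self.holdings = {}           # trader -> (yes_coupons, no_coupons)

        def buy(self, trader, yes=0, no=0):
            y, n = self.holdings.get(trader, (0, 0))
            self.holdings[trader] = (y + yes, n + no)

        def settle(self, outcome):
            # Each coupon on the winning side pays 1 unit; losers pay 0.
            payouts = {}
            for trader, (y, n) in self.holdings.items():
                payouts[trader] = y if outcome else n
            return payouts

    claim = Claim("Cyc passes an agreed common-sense test by 2005")
    claim.buy("optimist", yes=100)      # publicly backs the claim
    claim.buy("skeptic", no=100)        # publicly bets against it
    print(claim.settle(outcome=False))  # -> {'optimist': 0, 'skeptic': 100}

The point is only that the payoff attaches to a specific, dated claim;
a real market would of course also need prices, a judge, and so on.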