From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Sat Dec 11 2004 - 09:35:59 MST
> > I don't agree that hardware is the problem at all.
> > You can get 10 Tflops now simply by adding extra
> > processors. You don't need a super-computer. Just
> > link together a large number of less powerful
> > computers and you have a 'cheapo' super-computer.
>
> Yeah, for about $2 million if a Beowulf cluster is
> adequate. See my "Hardware Progress" series of posts
> in the SL4 archives. Someone could build one, if they
> had the money, but my point above was that no one has
> actually built a 10 Tflops machine for AI work. There
> are several machines in that performance class at
> present, but they are mostly doing weather and atomic
> bomb work.
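
As a rough sanity check on that $2 million figure, here is a
back-of-envelope sketch in Python. The per-node numbers, roughly
5 Gflops sustained and $1000 per commodity node circa 2004, are
my own assumptions rather than figures from the posts above:

    # Back-of-envelope Beowulf cluster estimate (2004-era assumptions).
    TARGET_FLOPS = 10e12     # 10 Tflops target
    NODE_FLOPS = 5e9         # assumed sustained flops per commodity node
    NODE_COST_USD = 1000     # assumed cost per node, incl. interconnect share

    nodes_needed = TARGET_FLOPS / NODE_FLOPS
    total_cost = nodes_needed * NODE_COST_USD

    print(f"nodes needed: {nodes_needed:.0f}")    # -> 2000
    print(f"estimated cost: ${total_cost:,.0f}")  # -> $2,000,000

At those assumed prices the arithmetic lands almost exactly on
the $2 million quoted above.
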
At the recent AAAI Fall Symposium Series there was a funding
seminar with program managers from NSF, DARPA, ARDA, etc.
During a panel I asked whether there were any AI projects using
ASCI-scale machines. The only answer came from Dave Gunning
of DARPA, who referred to the SRI PAL project funded under
the DARPA Cognitive Systems program. But I cannot find any
evidence that the PAL project is using very large machines.
Dean Collins of ARDA, which does R&D for the "intelligence
community", said their goal was to avoid surprises. If they
really mean that, then they ought to have some AI and machine
learning projects using very large machines. It is plausible
that they are too dumb to do that, but it is equally
plausible that they are doing such work in secret.
> Caveat - I'm not up on all the AI projects going
> on in the world right now. But I think collectively
> this list has knowledge of a significant fraction
> of such work. So I'll ask the list - what's the
> most powerful hardware installation you know of that
> is being applied to AI work?