From: Ben Goertzel (ben@goertzel.org)
Date: Mon May 28 2001 - 06:56:56 MDT
Eliezer:
> I think the only correct statement we can make at this point is that
> we don't know whether it would be possible to do human- or superhuman-
> level AI using an architecture like the Internet. Perhaps current
> approaches to AI don't parallelize very well, but of course they don't
> come anywhere near to human level AI either.
>
> James is right that it seems that only a small fraction of problems can
> exploit the kind of parallelism found on the net, but Eliezer is right
> that we don't know whether it would be possible to design a cognitive
> architecture which could run efficiently there.
>
While I can't claim omniscient knowledge of all possible AI
architectures, I've explored this conceptual space well enough that I can
state with a very high degree of certainty that, indeed:
-- there are some aspects of a digital mind's functioning that exploit the
Net's massively parallel architecture very well
-- there are others that just plain don't
Specifically, real-time conversation processing, real-time
perception/action, and real-time data analysis of any kind do NOT exploit
this architecture well. For these, you really want a supercomputer, or a
highly powerful cluster of fast machines. There are also aspects of "logical
thinking" that seem to have about the same requirements (they require rapid
SEQUENTIAL processing of a certain kind, rather than distributed
processing). I think Dennett had it about right in his "Consciousness
Explained" book... he didn't explain consciousness very well, but he did
very nicely explain the serial nature of some linguistic/logical processes
(and talked a lot about the seriality of these processes as compared to the
brain's underlying parallel hardware).
On the other hand, there's a hell of a lot of processing that benefits
*very, very much* from Internet-style massive distribution. We need to
distinguish two cases here:
1) Internet nodes that have a high-bandwidth, constant connection that can
be utilized during problem-solving
These can be used for a lot of things. One example is procedure learning
("schema learning" as we call it in Webmind), which can be done by using the
remote nodes to 'breed' populations of procedures, with the fitness
evaluation of each population element relying on calls back to a central
server containing a database of information and executing an inference
function. (A rough sketch of this scheme appears below, following case 2.)
2) Truly remote nodes, accessible via very low-bandwidth or
frequently-inactive connections (e.g. modems)
These can be used for fewer things.
One example is parameter optimization: log files from a complex system can
be data-mined by various distributed schemes, yielding new knowledge about
optimal parameter settings in various contexts. (Again, a rough sketch of
this appears below.)
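To make case 1 concrete, here is a minimal Python sketch of the kind of
'breeding' loop a high-bandwidth worker node might run. The function names
(score_schema, etc.) and the toy fitness measure are illustrative
assumptions, not Webmind's actual schema-learning interface; the point is
just that every fitness evaluation is a round trip to the central server,
which is why a fast, constant connection matters.

import random

OPS = ['moveA', 'moveB', 'compare', 'store', 'recall', 'noop']

def random_schema(length=8):
    # Build a random candidate procedure as a list of primitive ops.
    return [random.choice(OPS) for _ in range(length)]

def mutate(schema, rate=0.2):
    # Copy the schema, randomly replacing some of its ops.
    return [random.choice(OPS) if random.random() < rate else op
            for op in schema]

def score_schema(schema):
    # Stand-in for the call back to the central server, which would run
    # the schema against its database and inference engine and return a
    # fitness value.  Faked here with a toy objective.
    return sum(1 for op in schema if op == 'compare')

def breed(generations=50, pop_size=20):
    population = [random_schema() for _ in range(pop_size)]
    for _ in range(generations):
        # Every fitness call is a round trip to the central server --
        # cheap over a fast constant link, hopeless over a modem.
        ranked = sorted(population, key=score_schema, reverse=True)
        parents = ranked[:pop_size // 4]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=score_schema)

print('best schema found:', breed())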
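And a similarly hypothetical sketch of case 2: each low-bandwidth node
grinds through its own chunk of log files offline and only ships back a
small summary whenever it happens to connect. The record fields and the
simple averaging rule are illustrative assumptions, not the actual
Webmind/Webworld data-mining scheme.

from collections import defaultdict

def mine_chunk(log_records):
    # Runs on a remote node: tally how well each parameter setting did,
    # grouped by context, over this node's slice of the logs.
    totals = defaultdict(lambda: [0.0, 0])  # (context, setting) -> [sum, count]
    for rec in log_records:
        key = (rec['context'], rec['setting'])
        totals[key][0] += rec['performance']
        totals[key][1] += 1
    return dict(totals)

def merge_summaries(summaries):
    # Runs centrally, whenever a node happens to connect: combine the
    # per-node tallies and pick the best-scoring setting for each context.
    combined = defaultdict(lambda: [0.0, 0])
    for summary in summaries:
        for key, (total, count) in summary.items():
            combined[key][0] += total
            combined[key][1] += count
    best = {}
    for (context, setting), (total, count) in combined.items():
        avg = total / count
        if context not in best or avg > best[context][1]:
            best[context] = (setting, avg)
    return best

# Toy usage: two nodes, each having mined a few log records offline.
node1 = [{'context': 'chat', 'setting': 'A', 'performance': 0.7},
         {'context': 'chat', 'setting': 'B', 'performance': 0.4}]
node2 = [{'context': 'chat', 'setting': 'A', 'performance': 0.9},
         {'context': 'search', 'setting': 'B', 'performance': 0.6}]
print(merge_summaries([mine_chunk(node1), mine_chunk(node2)]))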
We've worked through various cases like this in a lot of detail... and
designed a framework called Webworld, a sister system to Webmind, intended
to exploit the massively distributed power of the Internet to support
Webmind in various ways. This was crudely prototyped but never really
developed.
Ben