Classical AI vs. connectionism

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 14 1998 - 10:07:12 MDT


Emmanuel Charpentier wrote:
>
> On the other hand, we can try to copy the (natural) neural network.
> The artificial neural networks that we can program nowadays are so
> simple (and yet so effective in some tasks) that we can easily predict
> the emergence of many more features. Memory, analogy, imagination,
> semantics, intuition... etc.

The idea that intelligence works by manipulating semantically charged
computational tokens may be summarized as "classical AI". In other words,
symbols are the basic units of computation, manipulated by the high-level
rules of abstract thought.

"Connectionism" is the idea that you cannot explicitly program intelligence;
it has to be caught in a neural net.
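
In cartoon form (a toy Python sketch, purely illustrative of the two
definitions above):

# Classical AI: semantically charged symbols, explicit high-level rules.
rules = {("bird", "can_fly"): True}

def classical_infer(subject, predicate):
    """Look the answer up in hand-coded symbolic rules."""
    return rules.get((subject, predicate), "unknown")

# Connectionism: no symbols at all, just weighted sums caught in a net.
def neuron(inputs, weights, bias=0.0):
    """A single threshold unit; intelligence is supposed to emerge
    from many of these, trained rather than programmed."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

print(classical_infer("bird", "can_fly"))  # True
print(neuron([1.0, 0.5], [0.6, -0.2]))     # 1.0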

Both paradigms are flat wrong. The battle between them has been refought many
times; I do not intend to refight it. The seed AI in _Coding_ is based on
neither principle. Thought does not magically materialize from abstract
manipulation, or inside a neural network. Thought has to be programmed BY
HAND with LOTS OF HARD WORK.

> How do you program analogy?

See Hofstadter's Copycat for an excellent demonstration of the basic principle
involved. You program analogy by reducing "analogy" to bonds, groups,
distinguishing descriptors... the basic cognitive elements underlying our
perception of an analogy.
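
A toy Python sketch of that reduction, on Copycat's classic puzzle
("abc" goes to "abd"; what does "ijk" go to?). This illustrates the
bonds-and-descriptors idea only; Copycat's actual architecture is far
richer:

def bonds(s):
    """Describe each adjacent letter pair as a bond: successorship,
    predecessorship, or sameness."""
    names = {1: "successor", -1: "predecessor", 0: "same"}
    return [names.get(ord(b) - ord(a), "none") for a, b in zip(s, s[1:])]

def describe_change(src, dst):
    """Reduce 'what happened?' to a distinguishing descriptor:
    which position changed (counted from the end) and by how much."""
    for i, (a, b) in enumerate(zip(src, dst)):
        if a != b:
            return (i - len(src), ord(b) - ord(a))
    return None

def apply_rule(target, rule):
    """Replay the abstract rule on a new string."""
    pos, shift = rule
    chars = list(target)
    chars[pos] = chr(ord(chars[pos]) + shift)
    return "".join(chars)

print(bonds("abc"))                   # ['successor', 'successor']
rule = describe_change("abc", "abd")  # last letter -> its successor
print(apply_rule("ijk", rule))        # ijl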

> One more thing: natural neural networks can hold conflicting
> beliefs. I can believe that the earth is flat (sitting atop four
> giant elephants atop a great turtle) and try to calculate its radius
> using the angles of the sun's shadows in deep wells. No problem. -I/we- can
> be inconsistent! (and so easily) And it's a great feature, because
> finally, when you look at science, it's only a set of beliefs, some of
> which might conflict with each other (until better beliefs come
> into play).

Any system that's more than a first-order-logic game can hold conflicting
beliefs. You're fighting the wrong war. The seed AI in _Coding_ isn't
classical AI. I think I may have even put in an explicit explanation of how
probabilities are estimated given conflicting ideas.
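
For illustration only (this is a generic weighted opinion pool, not
necessarily the scheme in _Coding_), here is one minimal way to get a
usable probability out of conflicting ideas, in Python:

def pooled_probability(beliefs):
    """beliefs: (probability, credibility-weight) pairs that may
    flatly contradict one another. Returns a single estimate without
    forcing the system to discard either belief."""
    total_weight = sum(w for _, w in beliefs)
    return sum(p * w for p, w in beliefs) / total_weight

# Flat-earth cosmology and shadow-angle geometry, coexisting:
beliefs = [(0.02, 1.0),  # turtle cosmology: "round earth" is unlikely
           (0.99, 5.0)]  # shadow-angle measurement: almost certainly round
print(pooled_probability(beliefs))   # ~0.83; act on it, keep both beliefs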

> So, why do you want to program a perfect AI? And how do you
> manage inconsistency and/or incompleteness (not having all/enough data)?

I don't want to program a perfect AI. I want to program an AI that has the
capability to consciously direct the execution of low-level algorithms. The
high-level consciousness is no more perfect than you or I, but it can handle
inconsistency and incompleteness. The low-level algorithm can't handle
inconsistency or incompleteness and it's downright stupid, but it's very, very
fast and it's "perfect" in the sense of not having the capacity to make
high-level mistakes.
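
A toy Python sketch of that division of labor (the names and the task
are mine, not from _Coding_):

def low_level_sum(numbers):
    """Downright stupid, very fast, 'perfect': it would rather choke
    on bad input than make a high-level mistake."""
    return sum(numbers)

def high_level_total(data):
    """Fallible but flexible: notices incompleteness, decides what to
    do about it, then consciously directs the low-level algorithm."""
    known = [x for x in data if x is not None]  # tolerate missing data
    if not known:
        return None                             # admit ignorance
    return low_level_sum(known)                 # dispatch to the fast layer

print(high_level_total([3, None, 4]))  # 7, despite the gap in the data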

Again: The _high level_ is not classical, but it has the capability to _use_
algorithmic thought, in addition to all its _other_ capabilities. Let me
explain it your way: The high level is not connectionist, but it has all
the magical capabilities you attribute to neural nets, and it can direct evil
classical AI in the same way that you can write a computer program. It
doesn't use classical AI for anything other than, say, arithmetic or chess
programs. And when I say "use", I don't mean it like "the brain uses neural
networks"; I mean it like "Emmanuel uses a pocket calculator".

I don't know how much of _Coding_ you've read. If you've read the whole
thing, you should see that _Coding_ is nowhere near classical AI.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

