Re: Near-Term Futility of AI Research

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 18 1999 - 12:32:44 MDT


Quaenos@aol.com wrote:
>
> Eliezer Yudkowsky writes:
>
> >Tell James Bradley that we are coming for him. ("We", in this case,
> >being AI researchers.) We are coming for the Mona Lisa. We are coming
> >for music. We are coming for laughter. We are coming for love. It's
> >only a matter of time.
>
> AI research is a dead end in the short term. Current AI research isn't
> accomplishing anything. Until we have a better understanding of human
> cognition and the brain in general, AI is not going to make any significant
> progress. So-called AI researchers would perform a better service for
> themselves and the AI field if they devoted their time and resources to
> neuroscience.

I disagree; I would substitute "cognitive science" for "neuroscience" in
the sentence above. We need to work down, not up. What good does it do
to know how neurons fire? It's the same basic pattern used by a
chimpanzee, or a cat, or a flatworm. If we understood the whole mind of
a cat it would be a major advance, this I admit, but understanding
neurology - even on the level of Edelman or Calvin - doesn't really help
when it comes to coding a transhuman AI. It would take a much higher
level of understanding - solving the brain *in toto*, in fact - before
neurology would be really useful in AI. Of course, where we really do
understand what individual neurons are doing, as in the visual cortex,
that *is* extremely useful - if you're trying to design a visual cortex.
Likewise, gross neuroanatomy, or even just knowing that the limbic
system is more evolutionarily ancient than the large frontal lobes, can
also be useful - but only in understanding what parts of a human are
legacy systems that should *not* be duplicated faithfully in the AI.

Cognitive science is where it's at. Hofstadter and Mitchell's Copycat,
one of the few real advances in the field (although, alas, not a recent
one), was created by observing what real people did when they were
making analogies, and using those observations to deduce what the
sub-elements of analogies were. They worked down, not up.
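
To make "working down" concrete, here is a minimal sketch of the
idea in Python - an analogy problem reduced to explicit bonds and an
abstracted rule. It is a toy of my own construction, not Copycat
itself; the real program uses a slipnet, a workspace, and stochastic
codelets, none of which appear here.

    # Toy reduction of "abc -> abd; ijk -> ?" to bonds and a rule.
    # All names are illustrative, not Copycat's actual architecture.

    def bonds(s):
        """Describe each adjacent letter pair as a relation (a 'bond')."""
        def rel(a, b):
            if ord(b) - ord(a) == 1:
                return "successor"
            return "sameness" if a == b else "other"
        return [(rel(a, b), a, b) for a, b in zip(s, s[1:])]

    def describe_change(src, dst):
        """Abstract the change as a rule over roles, not raw letters."""
        if src[:-1] == dst[:-1] and ord(dst[-1]) - ord(src[-1]) == 1:
            return "replace last letter with its successor"
        return "no change"

    def apply_rule(rule, target):
        """Carry the rule across the correspondence 'last <-> last'."""
        if rule == "replace last letter with its successor":
            return target[:-1] + chr(ord(target[-1]) + 1)
        return target

    print(bonds("abc"))  # [('successor','a','b'), ('successor','b','c')]
    print(apply_rule(describe_change("abc", "abd"), "ijk"))  # ijl

The point is the decomposition, not the code: the analogy gets
explained in terms of slightly simpler, still-structured parts,
rather than "proved" to arise from neurons or first-order logic.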

> Look at the deities of the AI pantheon, Minsky, Searle, what have they
> produced in the last 5 or 10 or even 20 years? Who has expanded on Turing's
> work? If Turing is the AI field's Newton, where is the Einstein?

I don't know about Einstein, but someday I'd like to be the Drexler.

> All I see
> lately is discussions about the limits of computation, club-handed
> discussions of consciousness, and starry-eyed fantasies of Powers. There is a
> disconnect. Where is the transhumanist explanation for the lack of AI
> progress? Why don't AI researchers realize they aren't getting *anywhere*
> until we understand the operation of the human mind? And if the
> transhumanists don't even recognize this quagmire, who else will?

I freely admit that most of the design in "Coding a Transhuman AI" is
directly derived from introspection, and the rest is derived indirectly.
But it's also possible to get too caught up in trying to duplicate the
human mind. The neural-network people, for example, rest their entire
field on the fact that they use the same neurons humans use. Leaving
aside for a moment the fact that this isn't true, the same could be
said of the atoms they use.

The accomplishment is not in creating something with a surface
similarity to humanity (classical AI) or something that uses the same
elements as humanity (connectionist AI), but in reducing high-level
complexity to the complex interaction of less complex elements. Both
classical AI and connectionist AI take great pride in claiming to have
found the elements, but they usually toss out the complexity - the
elements don't interact in any interesting way.
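
To see the distinction in miniature, consider threshold units: the
element is trivial, and everything interesting lives in the wiring.
A hand-built toy in Python (my weights, nobody's published model):

    # The element: a bare threshold unit, identical everywhere.
    def unit(inputs, weights, bias):
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0

    # The interaction: XOR, which no single unit can compute, appears
    # only when three units are wired together in one specific way.
    def xor(a, b):
        h_or = unit([a, b], [1, 1], -0.5)    # fires on a OR b
        h_and = unit([a, b], [1, 1], -1.5)   # fires on a AND b
        return unit([h_or, h_and], [1, -2], -0.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))  # prints 0, 1, 1, 0

"It's made of threshold units" explains nothing about XOR; the
explanation is the wiring. Scale that up and you have the complaint
in the fragment below.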

What is the problem with modern AI?
I present the following unpublished fragment:
"Waking Up from the Physicist's Dream."

==
The problem with much of previous AI
is that it attempted to prove something rather than build
something, a problem that intersected with the attempt to reduce
cognition to simpler components - rather than to more complex
components, as should have been done. A programmer does not take
the user's requirements and attempt to prove that they can all arise
from ones and zeroes; a programmer takes the simple but high-level
requirements and works out a complex (but lower-level)
specification. Programming, in a sense, is the opposite of physics;
physics proves a reduction to simpler components, programming
builds from complex components. It's the same hierarchy, but a very
different attitude. Physics focuses on the elements and their rules
of interaction; programming takes the elements for granted and
focuses on extremely complex patterns of elements. And yet it is the
physical paradigm that dominates AI, whether classical or
connectionist.

Thus you will see a discourse on emotion which seriously states:
"The hypothesis is developed that brains are designed around reward
and punishment evaluation systems, because this is the way that
genes can build a complex system that will produce appropriate but
flexible behavior to increase fitness." This is the hypothesis?
Bleeding obvious is what it is; the question is how. (Not only that,
but since the "hypothesis" is true of lizards, human emotions - the
focus of the book in question - are undoubtedly far more
complicated.) But since the focus is on the elements, one feels safe
in predicting that only very simple combinations of these elements
will be treated.

"Look at me, I have a neural network!" "Look at me, I have a Physical
Symbol System!" These are the basic messages of connectionist and
classical AI, point-missing imitations of the "Look at me, I have a
quark!" physicists. How many times have you heard: "This system
uses neural networks, the same system used in the human brain..."
Big whoop. It's the same system used by a flatworm's brain. You
might as well say that it uses the same atoms.

Reduction to complex elements, as with Copycat's reduction of
analogies to bonds and correspondences, is an entirely legitimate
effort - in fact, it is the focus of this entire site. It is the art of
reducing object-level complexity to interaction-level complexity -
taking a complex behavior exhibited by a monolithic object, and
showing how some or all of that complexity arises from the
interaction of slightly less complex components. The focus is on the
interaction, not on the fact that simpler components have been
found. The high-level complexity is transferred, not destroyed.

The physical paradigm cares nothing for preserving the complexity of
high-level behavior; the focus is on finding the most fundamental
elements. It is this attitude that poisons both connectionist and
classical AI. Is all human behavior explained by the laws of physics
acting on atoms? Very likely. Is all human behavior explained by the
statement that "The laws of physics act on atoms"? No. We know
that there is a low-level explanation, but we don't know the
explanation. A proof that an explanation exists is not an explanation.
The goal of AI must be to find the explanation for human intelligence,
not to prove that it can be explained in terms of ones and zeroes, or
Physical Symbol Systems, or neural networks, or Lord knows what.

Wake up, I say, from the physicist's dream! You will never discover
a set of simple elements and a set of simple interactions from which
human thought spontaneously arises with no other programming work
on your part. Not neural networks, not first-order logic, nothing.
Nobody would buy this method if you wanted to create a spreadsheet
program. Why do you think it will work with the infinitely more
complex human mind?

The high-level complexity arises from the extremely complex
interactions of atoms, not from the atoms themselves. To build a
system of atoms proves nothing.

Reduction means "Explain the complexity!", not "Find the basic
elements!" Reduction has to proceed one level at a time; you must
reduce the human body to organs before you can reduce it to atoms.
For a physicist to reduce rocks to quarks in one fell swoop would be a
great accomplishment indeed; for an AIer to "reduce" the human mind
to ones and zeroes is useless. The physical paradigm ignores
high-level complexity; the programmer's paradigm has it as its
object.
==
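
Since the fragment invokes the spreadsheet, here is the programmer's
paradigm at toy scale in Python: the high-level requirement
("dependent cells recalculate") reduced one level, to cells, formulas,
and an evaluation order, rather than "proved" to arise from ones and
zeroes. Every name is mine, purely for illustration.

    # One level of reduction: spreadsheet behavior explained as the
    # interaction of cells and formulas.  Each part could in turn be
    # reduced again, one level at a time.
    cells = {
        "A1": 2,
        "A2": 3,
        "B1": lambda get: get("A1") + get("A2"),  # =A1+A2
        "B2": lambda get: get("B1") * 10,         # =B1*10
    }

    def value(name):
        """Evaluate a cell, recursively pulling in its dependencies."""
        v = cells[name]
        return v(value) if callable(v) else v

    print(value("B2"))  # 50

The requirement's complexity is transferred into the dependency
structure, not destroyed - which is all that "reduction" ought to
mean.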

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

