Re: Universality of Human Intelligence

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Oct 04 2002 - 14:37:27 MDT


Lee Corbin wrote:
>
> Eliezer's comparison with the Chinese room is very apt
> here, and I will deal with it when I have more time.
> At the end of a trillion-year calculation that the
> human finally succeeded in performing, the end result
> would be like talking to a librarian who knew, or could
> figure out, where any intermediate result is stored.
>
> The key point however, is that the human, unlike the
> busy little guy in the Chinese room, knows the meanings
> of the final result and a huge number of intermediate
> results. The word "knows" perhaps should even be in
> quotes because asking him about the results determined
> during the 701st of the billion years comprising the
> trillion, may send him off on another two million year
> quest.

Sounds like the human knows nothing and is simply behaving as an
unnecessarily complex CPU. I don't see how a human could possibly know
the meaning of the final result, or of any of the intermediate results, if
those results were incompressible to the size of the human brain (which is
certainly mathematically possible). There is a difference between
simulating something and understanding it, and contrary to Searle, the
difference is not mysterious biological properties of neurons; the
difference is the explicit presence of cognitive mind-state that expresses
the high-level regularity being understood. Can humans simulate anything
given infinite time and paper? Sure. We can stand in for CPUs if we have
to. Can humans explicitly understand arbitrary high-level regularities in
complex processes? No. Some regularities will be neurally incompressible
by human cognitive processes, will exceed the limits of cognitive
workspace, or both. A human being can simulate a Turing machine that is
capable of explicitly representing and understanding those high-level
regularities, but the explicit cognitive representation will still be
stored in the gazillions of bits of paper, and will be nowhere mirrored
inside the human's actual mind.
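To make the "standing in for a CPU" point concrete, here is a minimal sketch (my illustration, not from the post) of a Turing machine simulator. The executor of this loop - the role the human plays - holds only the current state, the head position, and one transition rule at a time; everything interesting lives on the tape, the analogue of the paper. The machine and rule table are invented for illustration.

```python
def run_tm(rules, tape, state="start", max_steps=10_000):
    """Simulate a Turing machine.

    rules: dict mapping (state, symbol) -> (new_state, new_symbol, move),
    where move is "R" or "L". Blank cells read as "_".
    The executor of this loop never needs to understand the tape's
    contents -- it only looks up one rule per step.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A trivial machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # -> 0100_
```

Nothing in the loop mirrors whatever high-level regularity the simulated machine is computing; scale the rule table and tape up far enough and the same asymmetry holds for the human executor.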

If the cognitive representation stored in the gazillion bits of paper -
the real understanding - interacts in no interesting way with the human's
neural data structures, then the human is simply standing in for an
ordinary CPU.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:17:25 MST