RE: Universality of Human Intelligence

From: Lee Corbin (lcorbin@tsoft.com)
Date: Sat Oct 05 2002 - 16:12:13 MDT


Eliezer writes

> [Lee wrote]
> > The key point however, is that the human, unlike the
> > busy little guy in the Chinese room, knows the meanings
> > of the final result and a huge number of intermediate
> > results. The word "knows" perhaps should even be in
> > quotes because asking him about the results determined
> > during the 701st of the billion years comprising the
> > trillion, may send him off on another two million year
> > quest.
>
> Sounds like the human knows nothing and is simply behaving as an
> unnecessarily complex CPU. I don't see how a human could possibly know
> the meaning of the final result, nor any of the interim results, if those
> results were incompressible to the size of the human brain (which is
> certainly mathematically possible).

Here is an imaginary discussion with the (by now great) mathematician
who has "understood" the result of his trillion-year inquiry.

Us: What do you know? What was it about?

Him: I now know when X implies Y under a huge variety
      of possible conditions.

Us: Can you tell us anything about X and Y?

Him: Of course, practically my whole brain is packed with
      chunking information about X and Y. X is the case
      under conditions A, B, C, D, E, and quite a few more
      that I'd have to look at my notes about. Y is a
      little more complicated. And if you ask me about
      A, or B, etc., you will find that my understanding
      recurses to a degree unimaginable to one of Earth's
      finest historians or mathematicians of your era.

Us: Well, just out of curiosity, how long did it take
      the SIs to get the result, historically? And how do
      you answer the charge that your trillion-year project
      was not challenging enough? Aren't there *other*
      things that they know but that you don't have the
      capacity to even state, let alone *ever* know the
      proofs of?

Him: An SI in 2061 determined the result that X implies Y.
      As for more difficult projects, I'm eager to begin,
      of course, but in principle there are no projects
      *beyond* me, and for this reason: Those things that
      the SIs know that I cannot understand are not, in
      essence, understandable by them either. Those are
      things that just "work", like that old chess puzzle
      of K+R+B vs. K+N+N, or the weather. Now, one of my
      SI friends can tell you the weather almost instantly
      on an Earth-like planet given some initial conditions
      ---he basically just simulates it---and often in my
      discussions with them, it *does* feel like they
      "understand" things that I cannot.

      But hell, they don't *understand* the chess solution
      or exactly how my brain tells my arm to move any
      better than I do.

> There is a difference between simulating something and
> understanding it, and contrary to Searle, the difference
> is not mysterious biological properties of neurons; the
> difference is the explicit presence of cognitive mind-state
> that expresses the high-level regularity being understood.

Yes.

> Can humans simulate anything given infinite time and paper?
> Sure. We can stand in for CPUs if we have to.

Of course. But this is not what we are discussing.
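
(For concreteness, here is a toy sketch of what "standing in
for a CPU" amounts to mechanically: a transition table plus a
tape, with the human playing the loop that looks up the rule
and rewrites the square. The Python, the `run` helper, and the
little bit-flipping machine are my own illustration, nothing
more.)

    # Toy Turing machine: a transition table and a tape. The "CPU"
    # below is just the lookup-and-write loop a patient human could
    # carry out by hand on paper.

    def run(rules, tape, state="A", head=0, blank="_", halt="HALT"):
        tape = dict(enumerate(tape))          # sparse tape, paper-like
        while state != halt:
            symbol = tape.get(head, blank)
            new_state, write, move = rules[(state, symbol)]
            tape[head] = write                # erase and rewrite the square
            head += 1 if move == "R" else -1  # shift attention one square
            state = new_state
        lo, hi = min(tape), max(tape)
        return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

    # Example machine: flip every bit until the first blank, then halt.
    rules = {
        ("A", "0"): ("A", "1", "R"),
        ("A", "1"): ("A", "0", "R"),
        ("A", "_"): ("HALT", "_", "R"),
    }
    print(run(rules, "0110"))   # -> 1001_

All the interesting state sits in the rules and the tape; the
person turning the crank contributes only patience. That much
I grant; my claim is that the trillion-year human is doing
more than this.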

> Can humans explicitly understand arbitrary high-level
> regularities in complex processes? No. Some regularities
> will be neurally incompressible by human cognitive processes,
> will exceed the limits of cognitive workspace, or both.

Here is where I hope my dialog above addresses your point.
I say that so long as the ideas are chunkable, all the human
needs is a lot of paper, patience, time, energy, and motivation.

> A human being can simulate a Turing machine that is
> capable of explicitly representing and understanding those high-level
> regularities, but the explicit cognitive representation will still be
> stored in the gazillions of bits of paper, and will be nowhere mirrored
> inside the human's actual mind.

Yes. But I am of course submitting that the human has
for a trillion years engaged in more than acting like
a Turing machine.

> If the cognitive representation stored in the gazillion bits
> of paper - the real understanding - interacts in no interesting
> way with the human's neural data structures, then the human is
> simply standing in for an ordinary CPU.

Yes, and I am hoping (in order to avoid being wrong) that
anything worthy of being deemed *understandable* by anything
falls to the perseverance of my extremely idealized human
being.

Lee


