Re: Universality of Human Intelligence

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Oct 05 2002 - 16:59:40 MDT


Lee Corbin wrote:
>
> Here is an imaginary discussion with the (by now great) mathematician
> who has "understood" the result of his trillion-year inquiry.
>
> Us: What do you know? What was it about?
>
> Him: I now know when X implies Y under a huge variety
> of possible conditions.

Apparently the human has chunked two high-level regularities in the
process being simulated, with these regularities known as "X" and "Y".
Suppose that the regularities are chunkable but not humanly chunkable?
For example, suppose that X and Y both exceed the simultaneous
representation capacity of the human brain, while nonetheless being highly
compressible relative to the system being simulated? Suppose that an SI
manipulates the concepts describing X and Y with facility, tinkering
with them, recombining them, seeing each as a part of a whole and a whole
made of parts, while to a human X and Y are not even conceivable as
concepts, even though the human can (given infinite time and paper) stand
in as a CPU in a system simulating a mind that understands those concepts?
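
To make "compressible but not humanly chunkable" concrete, here is a toy
sketch in Python; the generator, seed, and seven-element window are my
illustrative choices, not anything from the discussion itself:

    import random

    def generate(seed, length):
        # The entire "concept" is this rule plus a seed: a short, exact
        # description of a very long sequence.
        rng = random.Random(seed)
        return [rng.randrange(2) for _ in range(length)]

    data = generate(42, 10**6)  # a million bits, described by ~5 lines
    print(data[1000:1007])      # any seven-element chunk looks patternless

The sequence is highly compressible in principle, but the regularity
lives only at the scale of the whole generating rule; no window small
enough to fit in a bounded workspace reveals it.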

That is my reply to your discussion: you are giving a case where the human
*does* understand high-level regularities of the result, and I am replying
with a case where the human does not and cannot understand high-level
regularities of the result that are readily comprehensible to an SI.

> Us: Can you tell us anything about X and Y?
>
> Him: Of course, practically my whole brain is packed with
> chunking information about X and Y. X is the case under
> conditions A, B, C, D, E, and quite a few more that
> I'd have to look at my notes about. Y is a little more
> complicated. And if you ask me about
> A, or B, etc., you will find that my understanding
> recurses to a degree unimaginable to one of Earth's
> finest historians or mathematicians of your era.

Again, you are giving an example of a situation where the entire project,
no matter how huge, happens to have a holistic structure consisting of
human-sized concepts broken down into a humanly comprehensible number of
humanly comprehensible concepts, and so on, turtles all the way down.
Yes, this *specific* type of inordinately huge simulation is
comprehensible to a human with infinite swap space. But this seems to me
to characterize an infinitesimal proportion of the space of possibilities.
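
A standard counting argument makes "infinitesimal proportion" precise.
As a sketch (the bit counts 1000 and 100 are my illustrative parameters):
there are at most 2^(k+1) - 1 distinct binary descriptions of length k
bits or less, so at most that many n-bit systems can compress to k bits:

    from fractions import Fraction

    def chunkable_fraction_bound(n_bits, k_bits):
        # At most 2^(k+1) - 1 descriptions are k bits or shorter, and
        # each can describe at most one of the 2^n possible systems.
        return Fraction(2 ** (k_bits + 1) - 1, 2 ** n_bits)

    print(float(chunkable_fraction_bound(1000, 100)))  # ~2.4e-271

Any fixed notion of "humanly chunkable" picks out some such bounded
description language, and so covers an exponentially vanishing fraction
of the possibilities.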

> Us: Well, just out of curiosity, how long did it take
> the SIs to get the result, historically? And how do
> you answer the charge that your trillion-year project
> was not challenging enough? Aren't there *other*
> things that they know but that you don't have the
> capacity to even state, let alone *ever* know the
> proofs of?
>
> Him: An SI in 2061 determined the result that X implies Y.
> As for more difficult projects, I'm eager to begin,
> of course, but in principle there are no projects
> *beyond* me, and for this reason: Those things that
> the SIs know that I cannot understand are not, in
> essence, understandable by them either. Those are
> things that just "work", like that old chess puzzle
> of K+R+B vs. K+N+N, or the weather. Now, one of my
> SI friends can tell you the weather almost instantly
> on an Earth-like planet given some initial conditions
> ---he basically just simulates it---and often in my
> discussions with them, it *does* feel like they
> "understand" things that I cannot.
>
> But hell, they don't *understand* the chess solution
> or exactly how my brain tells my arm to move any
> better than I do.

And here, again, we see a very carefully selected scenario. Let's suppose
that there are no regularities in KRB vs. KNN. I'd bet you're wrong about
that, actually: an SI going over the solution, or, heck, an ordinary seed
AI, would readily perceive regularities in it. Whether a human would be
able to understand those regularities, if the AI explained them, is an
interesting question; I'd bet on some, but not all.

But let's suppose the solution were incompressible. Let's also suppose
that this solution pattern is, itself, a regularity in another problem.
Let's suppose that it's one of an ecology of, say, 1,000,000 similar
regularities in that problem set which the SI has found convenient to
chunk, out of an explosive combinatorial space of, say, 300! possible
regularities. I submit to you that a human being simulating an SI
exploring that problem set will:

1) Never independently chunk all the regularities that the SI perceives;
2) Never independently chunk even a single one of those regularities;
3) Be similarly unable to chunk the SI's *perception* of the regularity
by examining the SI's low-level bit state, even given infinite time.

Why? Because the individual elements of the SI's cognitive process are
simply too large for a human to understand, not just as a single concept,
but even using the full power of human abstract understanding. Stack
overflow.
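
Back-of-envelope on those numbers (the 1,000,000 and 300! figures come
from the argument above; the arithmetic is mine):

    import math

    # log10(300!) via the log-gamma function, since gamma(n+1) = n!
    log10_space = math.lgamma(301) / math.log(10)
    print(round(log10_space))     # ~614: the space holds ~10^614 candidates

    # The million chunked regularities, as a fraction of that space:
    log10_fraction = math.log10(1_000_000) - log10_space
    print(round(log10_fraction))  # ~ -608, i.e. about one part in 10^608

Even an SI that chunks a million regularities has touched only a
vanishing sliver of that combinatorial space, and the claim above is
that the human simulating it cannot independently reach even the sliver.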

>>Can humans explicitly understand arbitrary high-level
>>regularities in complex processes? No. Some regularities
>>will be neurally incompressible by human cognitive processes,
>>will exceed the limits of cognitive workspace, or both.
>
> Here is where I hope that my dialog above addressed your point.
> I say that so long as the ideas are chunkable, all the human
> needs is a lot of paper, patience, time, energy, and motivation.

I reply that the class of systems humanly chunkable into human-sized
sub-regularities, arranged in a holonic structure of humanly
understandable combinatorial complexity, is a tiny subset of the set of
possible systems with chunkable regularity, holonic structure, and
compressible combinatorial complexity.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

