RE: Universality of Human Intelligence

From: Lee Corbin (lcorbin@tsoft.com)
Date: Sat Oct 05 2002 - 15:51:06 MDT


Anders writes (and I apologize for the many messages I have
not yet read, if they're pertinent)

> On Fri, Oct 04, 2002 at 09:22:33AM -0700, Lee Corbin wrote:
> >
> > I forgot to mention the *absolutely* key role of language.
> > Without the ability to linguistically chunk previous results
> > (and to write them down, of course, as someone charitably
> > noted), yes, this would be impossible.
>
> I think the flexible chunking part is important, but language as
> communication is not necessary for understanding (but it helps
> immensely).

I agree. What is important about language in this context
is its ability to help us chunk, summarize, create mental
pointers.

> Of course, there is no reason to search for a laser-sharp
> definition here [of what "understanding" is to mean];
> the boundary between understood and not understood is
> blurry (one can also understand things more or less).

Yes; let us hope that, by judicious use of the term or by
frequent resort to more precise phrases, further confusion
can be avoided.

> > This sounds as though you are referring to systems unanalyzable
> > by any intelligence of any organization whatsoever. I mean to
> > exclude such projects. My claim, to put it in the most graphic
> > terms, is that no alien race, nor any SI, is *smarter* than
> > humans except for time and speed, when the potentially immortal
> > human is not handicapped by health, poverty, or death.
>
> And my claim is that unanalyzable systems are so common that many
> interesting domains will be riddled with them, making even
> understanding many understandable questions hard and not subject
> to general rules.

I think *your* claim is uncontroversial. I have a great example.
Computer chess programs were the first to completely solve
many extremely complicated chess endings. One in particular
that fell to programming methods is K+R+B vs. K+N+N. It turned
out that the most difficult position requires 332 moves for the
side possessing the rook and bishop to checkmate the side
possessing the two knights. Now the most peculiar feature of
the process was the way that the King and two knights were
driven here and there around the board in no apparent pattern.
That is, to a grandmaster (unless he or she has *extensively*
studied the process), no apparent progress has been made
during the first two hundred moves or so. The position looks
just as difficult and random as the original position did.
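(For the curious: the method behind those endgame solvers is
retrograde analysis, working backward from the checkmates. A
minimal sketch in Python, on a hypothetical seven-position toy
game rather than chess -- the graph and the
`retrograde_distances` function are my own illustration:)

```python
from collections import deque

# Toy game graph: position -> successor positions for the side
# to move. A position with no moves is a loss for the side to
# move, mirroring checkmate. Hypothetical seven-node game.
moves = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "e"],
    "d": ["f"],
    "e": [],          # terminal: side to move has lost
    "f": ["e", "g"],
    "g": [],          # terminal: side to move has lost
}

def retrograde_distances(moves):
    """Return {pos: ('win'|'loss', depth)} for decided positions.

    Works backward from the terminal losses, as endgame tablebase
    generators do: a position is a win in n+1 if SOME move reaches
    a loss in n; a loss in n+1 if EVERY move reaches a win in <= n.
    """
    preds = {p: [] for p in moves}
    for p, succs in moves.items():
        for s in succs:
            preds[s].append(p)

    result = {}
    queue = deque()
    for p, succs in moves.items():
        if not succs:
            result[p] = ("loss", 0)
            queue.append(p)

    undecided = {p: len(s) for p, s in moves.items()}
    while queue:
        p = queue.popleft()
        verdict, depth = result[p]
        for q in preds[p]:
            if q in result:
                continue
            if verdict == "loss":
                # q can move TO a lost position, so q wins.
                result[q] = ("win", depth + 1)
                queue.append(q)
            else:
                undecided[q] -= 1
                if undecided[q] == 0:
                    # every move from q hands the opponent a win.
                    result[q] = ("loss", depth + 1)
                    queue.append(q)
    return result
```

On the toy graph, position "a" comes out a loss in 4 for the
side to move; real tablebase generators do exactly this sweep
over billions of chess positions.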

This example shows that, just as you are saying, I believe
a great many processes defy "chunking". They exist as
merely random points in a complex solution space, and don't
appear as points in any simpler space. Another great example
from science fiction is Vernor Vinge's Skrode Riders. When
a human examined the circuitry connecting a living Skrode to
its Rider-machine, the human found only a maze of wires not
yielding to chunking analysis---it was implied that no such
higher order explanation even existed for the unbelievably
complex maze of wires. This was the technology of the Beyond,
where certain kinds of SI's could (perhaps with GAs) come up
with this kind of solution to a problem. It's a subtle point,
but I do not believe that this contradicts my claim that given
enough time a human being can understand anything that is
understandable.

> With the right mappings even a very messy domain
> might have a simple structure that can be understood, but most
> such mappings will be unanalyzable or hard to come by in the first
> place. The result is that entities can be equally "smart" but
> unable to understand things other entities understand due to quirks
> of their mental architectures that are non-trivial to re-implement
> in another mental architecture.

I think so. I hope that I have understood you correctly here.

> A human with an arbitrarily extended memory can emulate a
> Turing-machine and apparently a quantum computer, with an
> exponential slowdown. So if physics is computable he could in
> principle emulate the underlying physics of any other thinking
> system, and hence "understand" in some sense anything
> else understands...

One must be cautious here about the Chinese room. The whole
*system* understands Chinese, but not the human moving pieces
of paper around.
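(Anders's emulation point can be made concrete. The human in
the room needs nothing beyond rote table lookup; a complete
Turing-machine stepper is only a few lines. This `run_tm` and
the two-rule `flip` machine are my own toy illustration, not
anyone's actual proposal:)

```python
def run_tm(program, tape, state="start", blank="_", max_steps=10_000):
    """Step a Turing machine given as a transition table.

    program: {(state, symbol): (new_state, write_symbol, move)}
    with move in {-1, +1}; the machine halts on a missing
    transition.
    """
    tape = dict(enumerate(tape))   # sparse tape, index -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in program:
            break                  # no rule: halt
        state, tape[head], move = program[(state, symbol)]
        head += move
    else:
        raise RuntimeError("step budget exhausted")
    left, right = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(left, right + 1))

# A two-rule machine that complements a binary string, then halts.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

`run_tm(flip, "0110")` returns "1001": the "understanding" of
complementation sits in the transition table, not in whoever
turns the crank.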

> But this is not very useful: matter and time constraints are real
> for every intelligence, and when emulation becomes too large it is
> no longer possible even in principle.

This is a different meaning of "in principle" than the one
to which I am accustomed. One says that in principle any
program could be emulated by a Turing Machine composed of
squares separated by a trillion light-years, over which the
little machine crawls at an infinitesimal speed. But okay,
I will be wary of using "in principle" without further
qualification.

> Chunking means combining several concepts into one composite
> concept; it is in many ways arbitrary (you can chunk anything with
> anything). Abstraction combines several concepts into a general
> class, which is a special kind of chunking.

Very well, but with your permission I will continue to
use "chunking" in this more restricted sense for the
time being. ;-) Thanks for the warning, however.

> What I was talking
> about was more like abstraction - you see that objects in domain X
> can be mapped to objects in domain Y, and relations in X and/or Y
> can be mapped to relations in the other domain. If the domains are
> isomorphic every object and relation can be mapped to a
> counterpart. One can then say that these two domains really
> are the same abstract domain.
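(A toy instance of that mapping, small enough to check
exhaustively -- the names `X`, `Y`, and `phi` are my own
illustration: addition mod 2 and boolean XOR turn out to be
"really the same abstract domain".)

```python
from itertools import product

# Domain X: integers mod 2 under addition.
# Domain Y: booleans under XOR.
X = [0, 1]
Y = [False, True]
phi = {0: False, 1: True}   # the candidate object mapping

def add_mod2(a, b):
    return (a + b) % 2

def xor(a, b):
    return a != b

# phi is an isomorphism: it is a bijection, and it carries the
# relation in X onto the relation in Y, i.e.
# phi(a + b) == phi(a) XOR phi(b) for every pair.
isomorphic = all(
    phi[add_mod2(a, b)] == xor(phi[a], phi[b])
    for a, b in product(X, repeat=2)
) and sorted(map(phi.get, X)) == sorted(Y)
```

Here the check is trivial; Anders's point is that for rich
domains, finding any such `phi` at all may be the hard part.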

To fix terms, then, I suppose that the solutions to the chess
problem and "how to connect a Skrode to its rider" can't be
abstracted? A good phrase eludes me. Can you improve on
"can't be chunked"?

> Imagine that being A manages to understand all that is
> understandable by mapping it (sometimes in very complex ways) into
> domains A, B, C, ... and being B achieves the same thing with
> domains X, Y, Z, ... It could happen that there is no way of
> mapping between ABC and XYZ in ways that would be comprehensible
> (or even decidable?) for the entities. In that case they would
> have different kinds of understandings.

I doubt that this situation could exist. For by
"understandable" I mean to refer to a real structure
lying within the phenomenon, but perhaps not explicitly.
An *understanding* ferrets out this simpler structure.

For example, Kepler (forgetting for the moment that he
had predecessors such as Ptolemy and Copernicus) saw
through the mass of planetary data to simpler motions,
and described these simple motions in terms of the
familiar concept of "ellipse".
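(In standard textbook notation -- mine, not Kepler's -- the
whole of that simpler structure fits on one line: the polar
equation of an ellipse with the Sun at one focus,

```latex
r(\theta) = \frac{a\,(1 - e^{2})}{1 + e \cos\theta}
```

where a is the semi-major axis and e the eccentricity: two
numbers per planet in place of columns of observations.)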

*Any* rival abstracting of that data would result in
an isomorphic understanding, and the isomorphism would
be *easily* discerned, far more easily than either of
the abstractions (or chunkings) had been to arrive at
in the first place.

Lee



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:17:25 MST