Re: Universality of Human Intelligence

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Oct 04 2002 - 12:48:58 MDT


On Fri, Oct 04, 2002 at 09:22:33AM -0700, Lee Corbin wrote:
>
> I forgot to mention the *absolutely* key role of language.
> Without the ability to linguistically chunk previous results
> (and to write them down, of course, as someone charitably
> noted), yes, this would be impossible.

I think the flexible chunking part is important, but language as
communication is not necessary for understanding (though it helps
immensely). One could imagine a clever AI with a randomly
generated "language" nobody understands or a person with total
aphasia, and both would be able to understand things and act
intelligently.

> Of course, one should not distinguish "human" from "biologist"
> here since I contend that any normal adult from the former
> set can emulate one of the latter.

Yes. I think all normal humans are able to understand the same
things (with the exception of some borderline objects or people).
But interindividual differences in how easily understanding comes
are large - just because we can in principle understand everything
another person understands doesn't mean doing so is even practical.

> Anders writes:
> > Universal understanding would mean that a being could gain an
> > understanding at any given level of resolution or domain of an
> > arbitrary thing, given enough information. It seems equivalent
> > to the creation of an internal simulation that is an emulation
> > within a certain level of resolution (one could talk about
> > probabilistic understanding: conclusions are right with a finite
> > probability, useful understanding occurs when this probability
> > is high).
>
> I think that *emulation* is too strong. For a model of
> what I have in mind, consider an extremely complicated
> mathematics proof that one has "understood". This hardly
> means either that one has memorized it, nor that one could
> readily reproduce it, nor that one even now could answer
> any question about it. All that was achieved is that the
> mathematician *at some point* while he was studying the
> proof *could have* attended to that question---all the
> mathematician has left are notes to the effect that he
> or she has verified the calculations.

I think this is too weak a form of understanding. It is more like
knowing where something is written down than actually knowing what
is written there. When I think I understand something I can usually
make some mental inferences from the concept, *use* it in some way.

Of course, there is no reason to search for a laser-sharp
definition here; the boundary between understood and not
understood is blurry (one can also understand things more or
less). It is a bit like the question of when an attractor neural
network has learned a pattern - does it have to retrieve the
pattern perfectly from a noisy cue, retrieve it only to within
some finite precision, or just have a fixed point corresponding
to the pattern? The three definitions produce different memory
measures, but they usually give the same qualitative answers.
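
To make the analogy concrete, here is a minimal Python sketch (a
toy of my own, with arbitrary sizes and noise levels, not any
particular published model) of a Hebbian attractor network where
the three criteria can be checked directly:

# Toy sketch: a Hopfield-style attractor network trained with the
# Hebb rule, used to compare three criteria for "has learned".
import numpy as np

rng = np.random.default_rng(0)
N = 100                                  # binary (+1/-1) units
pattern = rng.choice([-1, 1], size=N)    # the stored pattern

# Hebbian outer-product weights, no self-connections
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=20):
    """Synchronous updates until convergence (or step limit)."""
    s = cue.copy()
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1
        if np.array_equal(new, s):
            break
        s = new
    return s

# Noisy cue: flip 15% of the units
cue = pattern.copy()
flipped = rng.choice(N, size=15, replace=False)
cue[flipped] *= -1

result = retrieve(cue)
overlap = np.mean(result == pattern)

print("1) exact retrieval from noisy cue:", np.array_equal(result, pattern))
print("2) retrieval within 5% error:     ", overlap >= 0.95)
print("3) pattern is a fixed point:      ",
      np.array_equal(np.sign(W @ pattern), pattern))

For a single well-separated pattern all three tests agree; it is
near the storage capacity, or with very noisy cues, that they
start to diverge - hence the different memory measures.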

> > As I see it there is likely no universal understanding because
> > the boundary between domains where understanding is possible is
> > a kind of fractal mess of undecidable, information limited and
> > mental resource limited systems that cannot be understood or
> > mapped in general. It is not just that we cannot understand
> > every object, we cannot easily predict if certain objects are
> > amenable to understanding.
>
> This sounds as though you are referring to systems unanalyzable
> by any intelligence of any organization whatsoever. I mean to
> exclude such projects. My claim, to put it in the most graphic
> terms, is that no alien race, nor any SI, is *smarter* than
> humans except for time and speed, when the potentially immortal
> human is not handicapped by health, poverty, or death.

And my claim is that unanalyzable systems are so common that many
interesting domains will be riddled with them, making even the
understandable questions in those domains hard and not subject to
general rules. With the right mappings even a very messy domain
might have a simple structure that can be understood, but most
such mappings will be unanalyzable or hard to come by in the first
place. The result is that entities can be equally "smart" and yet
unable to understand things other entities understand, due to
quirks of mental architecture that are non-trivial to re-implement
in another architecture.

A human with an arbitrarily extended memory can emulate a
Turing machine and apparently also a quantum computer, albeit with
an exponential slowdown. So if physics is computable he could in
principle emulate the underlying physics of any other thinking
system, and hence "understand" in some sense anything that any
other system understands. (It could of course be that there exist
uncomputable elements of physics that cannot be emulated by other
subsets of physics or systems built from them. In that case there
might exist different kinds of intelligences with some kind of
partial ordering of understanding.)
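
As a back-of-envelope illustration of why the slowdown (and the
memory blowup) is exponential - the numbers below are mine, just
to fix the scale - a brute-force classical emulation that tracks
the full state vector of n qubits needs 2^n amplitudes:

# Sketch only: memory a naive classical emulation of an n-qubit
# state vector would need, assuming one 16-byte complex amplitude
# per basis state (2**n of them).
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50, 100, 300):
    print(f"{n:4d} qubits -> {statevector_bytes(n):.3e} bytes")

# Around 50 qubits this is already tens of petabytes; by ~300
# qubits the number of amplitudes (2**300, about 10**90) dwarfs
# common ~10**80 estimates of the number of particles in the
# observable universe.

Which is where the next point bites: the emulation stops being
possible even in principle.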

But this is not very useful: matter and time constraints are real
for every intelligence, and when the emulation becomes too large
it is no longer possible even in principle. It is possible to
construct an awfully complex concept that requires more memory
than can be encoded in the universe, or more time to compute than
is available - such a concept cannot be understood by any system.

Even worse, in some domains understanding involves short time
limits. Understanding football in the domain of actual play, or
cocktail party wit in the domain of witty retorts, cannot be said
to occur if the responses take too long. Here raw speed of
computation might not be enough if the time complexity of the
understanding algorithm being emulated by hand is too bad.

> > It should be noted that separation in this space of objects *
> > domains of action * levels of precision is extremely
> > non-trivial: understanding often acts by demonstrating
> > isomorphisms between different regions, essentially connecting
> > them with cognitive "wormholes" into fewer isolated regions.
>
> Could you possibly provide examples of what you are talking
> about here? I can think of several nice interpretations, but
> they might be mine and not yours! ;-)

One of the best examples is analytic geometry, where Descartes
showed how to create a one-to-one correspondence between the
arithmetic of number pairs and geometrical objects. Every point
corresponds to an x-y coordinate pair, every line to a linear
equation, every circle to a quadratic equation of the form
(x-a)^2+(y-b)^2=c^2, and so on. By switching between the two
domains (geometry and equations) a large number of problems become
easier to solve, and in fact become the same problem.
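
As a small worked example of such a "wormhole" (my own toy, not
Descartes'): the geometric question "where does a line meet a
circle?" is answered entirely on the arithmetic side by
substituting the line into the circle equation and solving a
quadratic:

# Toy illustration: intersect the line y = m*x + k with the circle
# (x-a)^2 + (y-b)^2 = c^2 purely by algebra.
import math

def line_circle_intersection(m, k, a, b, c):
    # Substitute y = m*x + k and collect terms:
    # (1+m^2) x^2 + 2(m(k-b) - a) x + (a^2 + (k-b)^2 - c^2) = 0
    A = 1 + m * m
    B = 2 * (m * (k - b) - a)
    C = a * a + (k - b) ** 2 - c * c
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                      # the line misses the circle
    xs = sorted({(-B + s * math.sqrt(disc)) / (2 * A) for s in (-1, 1)})
    return [(x, m * x + k) for x in xs]

# The line y = x against the unit circle centred at the origin:
print(line_circle_intersection(m=1, k=0, a=0, b=0, c=1))
# -> two points, (+/-1/sqrt(2), +/-1/sqrt(2))

A negative discriminant, a double root, or two distinct roots
correspond directly to the line missing, touching or cutting the
circle: the algebra and the geometry really are the same problem.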

> > Mathematics proved to be a region that could map itself nicely
> > onto a lot of other regions, uniting them into a simpler region.
> >
> > The better understanding, the more the entire space has been
> > reduced into a minimal set of "primitive" regions. So maybe a
> > better question than whether there exists universal
> > understanding is the structure of the set of primitive regions,
> > and if it is unique. If there are non-unique sets of primitive
> > regions there would exist different *kinds* of understanding
> > (which may be differently useful in different environments).
>
> I read "The better [the] understanding, the more the entire
> space has been reduced into a minimal set of "primitive"
> regions" as referring to *chunking*: in my examples, the
> human at the end of his trillion year computation or mastery
> of some proof, has possibly before him one final sentence:
> A is true because of B, C, D, and E, even though E implies
> some other very interesting things.

Chunking means combining several concepts into one composite
concept; it is in many ways arbitrary (you can chunk anything with
anything). Abstraction combines several concepts into a general
class, which is a special kind of chunking. What I was talking
about was more like abstraction - you see that objects in domain X
can be mapped to objects in domain Y, and relations in X and/or Y
can be mapped to relations in the other domain. If the domains are
isomorphic every object and relation can be mapped to a
counterpart. One can then say that these two domains really are
the same abstract domain.
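
A trivial example of what such a mapping looks like (my own toy,
nothing deep): the integers {0,1,2,3} under addition mod 4 and the
quarter-turn rotations of a square under composition are, in this
sense, the same abstract domain:

# Domain X: residues {0,1,2,3} with addition mod 4.
# Domain Y: rotations {0, 90, 180, 270 degrees} with composition.
# The mapping phi(n) = 90*n sends objects to objects and the
# relation "+ mod 4" to the relation "compose".
X = range(4)

def phi(n):                 # object map: residue -> rotation angle
    return (90 * n) % 360

def add_mod4(m, n):         # relation in domain X
    return (m + n) % 4

def compose(r1, r2):        # relation in domain Y
    return (r1 + r2) % 360

# Check that the map respects the relations:
# phi(m + n) == phi(m) composed with phi(n) for all m, n.
print(all(phi(add_mod4(m, n)) == compose(phi(m), phi(n))
          for m in X for n in X))
# -> True: the domains are isomorphic.

Every object and relation has a counterpart, so a fact established
in either domain can be read off in the other.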

> Sorry, I'm baffled by your last sentence, but will have more
> time later to study it.

Imagine that being A manages to understand all that is
understandable by mapping it (sometimes in very complex ways) into
domains A, B, C, ... and being B achieves the same thing with
domains X, Y, Z, ... It could happen that there is no way of
mapping between ABC and XYZ that would be comprehensible (or even
decidable?) for the entities. In that case they would have
different kinds of understanding.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

