From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue Oct 08 2002 - 19:45:42 MDT
Originally it was claimed:
Everything that aliens or SI's can understand
could be understood by a human, given enough
time, persistence, motivation, and paper.
(The presence of pencils and trees is assumed.)
Current progress:
The meaning of "understand" may not be clear
or agreed upon by all parties.
Oh, I can just hear the calls now: "There goes that
wascally Wee Corbin with his symantics and semitics.
Why don't he know what even Humpty Dumpty know:
'When I use a word, it means exactly what I want
it to mean, nothing more, nothing less.'" Why
don't he go for what is *meant*?
Eliezer writes
> Lee Corbin wrote:
> >
> > Here is an imaginary discussion with the (by now great) mathematician
> > who has "understood" the result of his trillion year inquiry.
> >
> > Us: What do you know? What was it about?
> >
> > Him: I now know when X implies Y under a huge variety
> > of possible conditions.
>
> Apparently the human has chunked two high-level regularities
> in the process being simulated, with these regularities known
> as "X" and "Y". Suppose that the regularities are chunkable
> but not humanly chunkable? For example, suppose that X and Y
> both exceed the simultaneous representation capacity of the
> human brain, while nonetheless being highly compressible
> relative to the system being simulated?
In a certain trivial sense, of course, it requires but
a few dozen bytes to store pointers to vast, vast
amounts of information, and, in my example, that's
what the mathematician must do. When you interrogate
and interrogate and interrogate at finer and finer
levels, he is forced finally to say, "well, I must
consult my notebooks. I think that maybe I answered
that question to myself somewhere around six hundred
billion A.D."
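To make the "pointers" point concrete, here is a minimal
Python sketch (mine, purely illustrative; the labels, file
name, and offsets are all invented): a few dozen bytes of
in-memory labels, each of which is just an offset into an
arbitrarily huge notebook on disk.

    # A label costs a few dozen bytes; the working-out it points to
    # can be arbitrarily large and live entirely in the notebooks.
    index = {
        "X implies Y, circa 600e9 A.D.": 48_731_227_904,  # byte offset into the notebooks
        "KRB vs. KNN, main line": 9_034_112,
    }

    def consult_notebooks(label, path="notebooks.txt"):
        """Seek to the recorded offset and read back the first page."""
        with open(path, "rb") as f:
            f.seek(index[label])
            return f.read(4096)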
So let us inquire into the nature of the X that you
specify here. I concede that I understand only two
possible cases: either X has a surface sub-structure
(i.e., the surface part of an enormous sub-structure)
that itself forms a high level explanation despite its
usage of a huge number of unfamiliar concepts, or else
it does not. If it does not, then it *must* be analogous
to my example of the chess ending KRB vs. KNN.
Now, a few words about that analogy. I don't really
know (having never studied it) whether it's likely that
this chess ending *can* be broken down into fewer than
233 concepts. So let us therefore agree that it cannot,
or that there exists a suitable analogy (perhaps the
Skrodes and their Riders). Gregory Chaitin claims that
most mathematical "theorems" (i.e., platonic theorems)
have the characteristic that they are, in a sense,
random. There isn't any particularly good explanation
of why they're true. I certainly would be willing to
bet that there exist many sets of numbers, each one of
which has some interesting property, but for which no
proof exists.
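For what it's worth, the counting argument behind Chaitin's
point is easy to check; here is a little Python sketch of the
pigeonhole arithmetic (my own illustration, not Chaitin's
formulation):

    # Of the 2**n binary strings of length n, at most 2**(n-c) - 1 can be
    # produced by *any* description shorter than n - c bits, because that
    # is all the shorter descriptions there are.  So the fraction of
    # strings compressible by c or more bits is below 2**-c; "most"
    # strings admit no appreciably shorter explanation.

    def fraction_compressible(n, c):
        return (2 ** (n - c) - 1) / 2 ** n

    n = 1000
    for c in (1, 10, 20):
        print(f"save >= {c:2d} bits: fewer than {fraction_compressible(n, c):.1e}"
              f" of all {n}-bit strings")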
> Suppose that an SI manipulated the concepts describing X
> and Y with easy facility, tinkering with them, recombining
> them, seeing them as a part of a whole and a whole made of
> parts, where to a human X and Y are not even conceivable as
> concepts, even though the human can (given infinite time
> and paper) stand in as a CPU in a system simulating a mind
> which understands those concepts?
I wonder here if we are talking about a *feeling* one
often has when dealing with familiar material. If someone
asks me, "do you understand the relationship between
democracy and voting?", I would say yes. But so would
many ten year olds, and we would be critical of the
depth of their understanding. Conversely, I had a
friend who got an A in calculus, but didn't believe
that he understood it. We both thought that he at 20
understood it better than I had at 15, but what he
lacked was what we decided to call a "conviction demon",
or, the feeling that one understands.
Now, since the cardinality of the set {X, Y} is only two,
the human will have no trouble in "manipulating
the concepts describing X and Y with easy facility".
He may even see them as parts of a whole and, conversely,
each as made up of parts. One part may contain
ten or twelve or who knows how many parts. But if it
contains 233 parts or more, and there *are* no
intervening concepts, then what he really does with
the 233 pieces---namely, consults his notebooks where
he wrote them down in the right order---will be essentially
no different from what the SI does. (But see below where
I allow for the possibility of lists of relationships.)
Surely we don't want to say that the SI has a conviction
demon and the human does not, because surely this feeling
can be appended easily.
> That is my reply to your discussion: you are giving a case where the human
> *does* understand high-level regularities of the result, and I am replying
> with a case where the human does not and cannot understand high-level
> regularities of the result that are readily comprehensible to an SI.
Yes, I have tried to focus on such a case, but
wonder---especially if it's a list that the SI
has in ROM but the human only has in a notebook
---why one would say that the SI *understands*
but the human does not. What do we mean by
understanding?
BTW, my instinct assures me that this would be
a most excellent time at which to apply Corbin's
Semantic Rule 1: "When a term begins to cause
confusion, simply replace it with a variety of
phrases or words that also convey what you mean."
> Let's suppose that there are no regularities in KRB vs. KNN.
> ...suppose the solution were incompressible. Let's also suppose
> that this solution pattern is, itself, a regularity in another problem.
> Let's suppose that it's one of an ecology of, say, 1,000,000 similar
> regularities in that problem set which the SI has found convenient to
> chunk, out of an explosive combinatorial space of, say, 300! possible
> regularities. I submit to you that a human being simulating an SI
> exploring that problem set will:
>
> 1) Never independently chunk all the regularities that the SI perceives;
> 2) Never independently chunk even a single one of those regularities;
> 3) Be similarly unable to chunk the SI's *perception* of the regularity
> by examining the SI's low-level bit state, even given infinite time.
>
> Why? Because the individual elements of the SI's cognitive process are
> simply too large for a human to understand, not just as a single concept,
> but even using the full power of human abstract understanding. Stack
> overflow.
If I understand the sentence where 10^6 first crept in,
then perhaps you are saying that the 233-gauge chess chunk
(which I *refer* to as KRB vs. KNN) is one of 10^6 other
1K-gauge chunks, and at a higher level the set of 10^6
things is referred to as (to be concrete) M1. Well, then,
if the 10^6 set M1 is no more analyzable than was one of
the 233-gauge items, then both the SI and the human are
going to have a list somewhere. Yes, since the SI has
the list in lightning-fast ROM, it may *feel* much more
in control than does the poor human who must glance
apprehensively at a filing cabinet. But recall, this
human *is* equipped with total confidence, and a grade AA
conviction demon, and though he knows it will be tedious
and will take some years, he can review the whole 1M items
in M1.
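Just to put numbers on it (my own back-of-the-envelope
arithmetic, using the 10^6 and 300! figures from your post):

    import math

    space_bits = math.lgamma(301) / math.log(2)  # log2(300!), about 2,000 bits
    chunk_bits = math.log2(10 ** 6)              # about 20 bits to name one chunk of M1
    print(f"log2(300!) ~ {space_bits:,.0f} bits; log2(10^6) ~ {chunk_bits:.0f} bits")

    # The human with the filing cabinet: one item a minute, eight hours
    # a day, over the whole 1M items of M1:
    years = 10 ** 6 / (8 * 60) / 365
    print(f"full review of M1: roughly {years:.0f} years")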
So I ask, what do you mean by "the individual elements of the
SI's cognitive process are simply too large..."?
> >>Can humans explicitly understand arbitrary high-level
> >>regularities in complex processes? No. Some regularities
> >>will be neurally incompressible by human cognitive processes,
> >>will exceed the limits of cognitive workspace, or both.
> >
> > Here is where I hope that my dialog... addressed your point.
> > So long as the ideas are chunkable, all the human needs is
> > a lot of paper, patience, time, energy, and motivation.
>
> I reply that the class of systems humanly chunkable into human-sized
> sub-regularities arranged in a holonic structure of humanly understandable
> combinatorial complexity, is a tiny subset of the set of possible systems
> with chunkable regularity, holonic structure, and compressible
> combinatorial complexity.
In other words, you claim that there exist systems with
chunkable regularities and holonic structure that *could*
be chunked by SI's, but not by humans.
I conclude by saying that in many cases chunking appears
to be nothing more than making a list of items. In other
cases, the list must be augmented by a list of relationships
between the items. In either case, I don't see why the
human doesn't finally *understand*.
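If it helps, here is the sort of thing I have in mind, as a
minimal Python sketch (the names are mine and purely
illustrative): a chunk is a list of items, perhaps augmented
by a list of relationships between them, and either way it is
a finite structure that can be walked through in full.

    # A "chunk" as nothing more than a list of items, optionally
    # augmented by a list of relationships between those items.
    chunk = {
        "items": ["item_1", "item_2", "item_3"],  # could just as well be 233 of them
        "relations": [
            ("item_1", "supports", "item_2"),
            ("item_2", "constrains", "item_3"),
        ],
    }

    # Walking the whole thing is tedious but entirely finite.
    for a, how, b in chunk["relations"]:
        print(f"{a} {how} {b}")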
Often, by "understanding" we mean that an entity has
speedy access to subcomponents. I refuse to say that I
understand a certain theorem when all I have is an
extremely good book that I know will explain it well.
Yet if years ago I mastered the proof of a deep topology
theorem, then I may or may not have the feeling that I
still understand it.
> 2) Never independently chunk even a single one of those regularities;
It's curious that I have always had in mind for my
claim the *understanding* of a theorem, *not* the
original creation of the proof. Creativity is for
me a much less certain terrain. So I wonder if by
"independently" you mean without help, or if you
mean to *discover* regularities within structures.
Lee