From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Mon May 22 2000 - 22:02:39 MDT
Famous model Matt Gingell wrote:
> You seem to think a model is right because it's useful. That's
> backwards: A model is useful because it's right. Concepts reflect
> regularities in the world - their accuracy is a function of their
> correlation to fact. That our aims are well served by an accurate
> conception of the world is true but beside the essential point.
No. Causation is not at work in my picture. Rather, I come to know
that a model is right once I know that the model is useful. Actually,
I think I'm spinning tautological wheels when I talk like that; that
we're wasting our time when we ask "we know it's useful... but are we sure
it's RIGHT?" or "we know it's right... but are we sure it's useful?" I
know them both at once, in the same way that I know that A and ~~A
simultaneously, though I may derive the one from the other
immediately, if I must.
Euclid, for example, isn't "right" about anything at all in the
strictest sense, except imaginary Euclidean space. The relevant property
which Euclid's axioms DO have, along with all of the rest of the
beliefs we hold, is the property of being "close enough." The very
term "model" implies this willingness to accept deviations, so long as
they are kept within acceptable limits.
But the notion of "close enough," which is what we normally mean when
we say "right," is intimately tied up in the question of what you're
using it for. Peano's arithmetic is a good model for bricks, but a
bad model for rabbits. ("1 + 1 = ... hold STILL won't you!?")
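(In runnable form, if you like: a toy Python sketch of mine, where the
rabbit side is just Fibonacci's old breeding model, itself only another
model that's "close enough":)

    # Counting bricks: Peano-style addition is a fine model.
    def count_bricks(pile_a, pile_b):
        return pile_a + pile_b             # 1 + 1 = 2, and it stays 2

    # Counting rabbits: the sum won't hold still.  Fibonacci's toy
    # model: every mature pair breeds a new pair each month.
    def count_rabbits(pairs, months):
        old, young = pairs, 0
        for _ in range(months):
            old, young = old + young, old
        return old + young

    print(count_bricks(1, 1))      # 2
    print(count_rabbits(2, 6))     # 42: "1 + 1" ran off long ago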
> Certainly there's an issue of perspective - if I live near a black
> hole or move around near the speed of light, my model of reality would
> be different. I would never claim otherwise - I'd only expect a
> machine to develop concepts similar to my own if its experience were
> also similar. If I'm a picometer tall and you're a Jupiter brain,
> neither of our world views is wrong - we are just each the other's
> special case. The same holds for hyperbolic vs classical geometries -
> neither is arbitrary: they're just describing different things. (Or rather
> the first is a generalization of the second.)
What is Euclid describing other than Euclidean space? Or is that it?
Can't we read ANY belief as "right, about whatever it is it's talking
about"?
> Goals do affect attention (While I think they're essentially arbitrary
> and uninteresting, I admit we have them.) Goals affect what you choose
> to spend your resources contemplating - there being more things in
> heaven and earth than are dreamable at our speed C. A learning machine
> without a goal is like a Zen master, sitting still, hallucinating,
> sucking up fact and building purposeless towers of abstraction. The
> mind is a means and not an end. Our drive to perpetuate ourselves and
> our species is an artifact of our evolutionary history; our will to
> survive is vestigial and as random as an appendix. Intelligence is an
> engine perched on an animal - the forebrain being a survival subsystem
> for an idiot limbic blob. Plug in some other root set of desires and
> it'll as usefully tell you how to castrate yourself as how to spread
> your genes. It'll identify cliffs I can jump off as faithfully as it
> does wolves to run away from.
... but despite our differing motivations, we will, at least, agree on
the Facts, right? What are these except the beliefs which we cannot
imagine ourselves rejecting? How could we tell the difference between
the two? Why would we care about such a difference?
I'm not pulling this "radical interpreter" stuff out of a hat, as it
were. Have you read Quine on this question?
> If motivation is made up but reality isn't, then it seems better to
> describe the mind as something that parses the real world than a hack
> that keeps you alive.
I'm rejecting the notion that a useful distinction can be made between
realism and anti-realism. I'm a pragmatist about that question: it
just doesn't matter. The whole distinction between "made up" and "not
made up" in this context is a useless artifact: there are no answers
there, and no need for answers.
Will you wear a different tie based on whether you have free will or
not? Will you build a bridge differently on the basis of whether your
worldview is right for your own purposes or objectively right? Will
you behave any differently at all if you decide that P is a
proposition you believe unflinchingly or if you think that P is a
Fact?
You won't even build an AI differently, I argue. This question is
totally irrelevant to whether there is a simple generic
truth-finding algorithm that we're running, or whether there is a nasty
complicated mess leading us to our conclusions. We are willing to
agree that anything that follows our algorithm (or one like it) will
reach our conclusions. We're even willing to agree that our algorithm
is mostly right. The only added claim, a useless one, is that we came
to have this algorithm, and not another, because it's right. I don't
see what you get out of saying this.
> I wouldn't claim that the semantics, the ideas, I'm writing would be
> understandable, even in principle, by anyone but an English
> speaker. Yet there is still information content - it isn't random
> noise (Even if it occasionally sounds a bit like it). The structure -
> that is, the characters, words, simple syntax, etc. - is
> extractable. Whether anybody would bother to investigate it is one
> question, but that the structure is real and determinable, and is
> independent of goals or survivability or what not, is unambiguously
> true.
How is this different from drawing a line in the epistemological sand
and saying "No uncertainty past this point!" I can always raise
useless questions like "how do you know that's a structure, rather
than some arbitrary set?" Or is that a structure simply because you
CALL it a structure?
> I've done a little bit of work on computer models of language
> acquisition - the problem for a child is turning examples of speech
> into general rules, inferring grammars from instances. It's a bit like
> trying to turn object code back into source, figuring out structures
> like for-loops from an untagged stream of machine instructions. Not
> entirely unlike trying to unscramble an egg... That we are able to do
> it at all, even to the controversial extent language is actually
> really learned, amazes me.
Again, I highly advocate some Quine, who was, ah, also interested in
this question (though he approached it from the perspective of a
linguist out in the field attempting to understand the language of
natives).
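To make the quoted problem concrete, here's the tiniest sketch I can
manage (Python; the six-word lexicon and the category tags are invented,
and a real learner gets nothing so clean): learn, from instances, which
word categories may follow which, and the induced rules already license
sentences never seen.

    # A crude grammar inducer: from a few example sentences, learn
    # which word categories may follow which; accept novel sentences.
    LEXICON = {"the": "Det", "a": "Det", "dog": "N", "cat": "N",
               "bites": "V", "sees": "V"}

    def learn(sentences):
        allowed = set()
        for s in sentences:
            cats = [LEXICON[w] for w in s.split()]
            allowed.update(zip(cats, cats[1:]))
        return allowed

    def accepts(allowed, sentence):
        cats = [LEXICON[w] for w in sentence.split()]
        return all(pair in allowed for pair in zip(cats, cats[1:]))

    rules = learn(["the dog bites a cat", "a cat sees the dog"])
    print(accepts(rules, "the cat bites the dog"))   # True, never seen

Scaling that from bigrams over six words to recursive structure over
real speech is the unscrambling-the-egg part.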
> Out of curiosity, how would you explain Kasparov's ability to play a
> decent game of chess against a computer analyzing 200 million
> positions a second? Certainly not a regular occurrence on the plains of
> Africa, what more general facility does it demonstrate?
Eh? It can't just be his "chess-playing" facility? I'm not aware of
anything *else* at which Kasparov is the best in the world, suggesting
that it is his chess-playing facility, and, apparently, nothing else,
that did the work. :)
Were I to assume that something more general was, in fact, at work,
I'd have to guess that his capacities to plan ahead, imagine in a
structured manner, and empathize with his opponent were all at work.
Shot in the dark on my part...
> This is a very anthropomorphic view - I'm looking for a definition
> that transcends humans and evolution, the essential properties shared
> by all possible intelligences. You seem to be saying there isn't such
> a thing - or it's the null set.
No *interesting* properties, other than stuff like "X can pass the
Turing test," "we'd probably call X intelligent," etc.
> You can go ahead and start coding, bang out behaviors for all the
> situations you want - write vision systems, theorem provers, cunningly
> indexable databases - but without an understanding of the principles
> at work all you'll end up with is an undebuggable heap of brain-damaged
> cruft.
If the code is written in a way that a "programmer" subsystem (or, as
Eliezer calls it, a "codic cortex," analogous to the visual cortex)
could understand, then each part may attempt to code itself better.
Each domdule can participate in improving the capacity of the rest of
the domdules. That's how the system will improve, and improve itself
better and faster than any human could have done. But you need a rich
system before this can take off. That's where the hand-coding comes in.
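For flavor, the shape of the loop I have in mind looks something like
this (Python; every name here, and the trivial stand-in "improvement,"
is my own hand-waving, not Eliezer's design):

    # The bootstrap loop: each "domdule" exposes its own source, and
    # any domdule may propose a rewrite of another's.
    class Domdule:
        def __init__(self, name, source):
            self.name, self.source = name, source

        def propose_rewrite(self, other):
            # Stand-in for a real codic cortex: a rewrite rule dumb
            # enough to keep the sketch runnable.
            return other.source.replace("slow_", "fast_")

    modules = [Domdule("vision", "def slow_edge_detect(): pass"),
               Domdule("coder",  "def slow_compile(): pass")]

    for critic in modules:
        for target in modules:
            if critic is not target:
                target.source = critic.propose_rewrite(target)

    for m in modules:
        print(m.name, "->", m.source)

The stand-in rewrite is deliberately dumb; the bet is that a rich
enough system makes it smart.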
> The answer to that question is 42. I could give you a reasonable,
> logical argument that logic and reason are a good way of looking at
> the world, but that would be circular. (Though if we're not assuming
> logic, maybe there's nothing wrong with a circular argument...)
Try this one. I've got some beliefs which must be interpreted by
others in order to be understood. Suppose people in the future
arrive at the best model of the laws of physics we'll ever have.
Suppose they look back at Aristotle's physics. They'll find
that Aristotle was right about some things, even right about MOST
things, but that his theory could have been substantially improved.
They'd look back at Newton, at Einstein, at Bohr and Heisenberg, and
find that they were right, mostly, but that their theory could have
been improved. Similarly, if Aristotle were hard-headed but charitable, he
would look FORWARD at the history of science, and say much the same
things about the developments to come: mostly right, could be better.
In general, unless I employ the principles which Quine and friends
have laid out (principles of charity, of humanity, etc.), I can't even
understand a bit of speech as *language* at all, to say nothing of true
or false language. I have to interpret a person as having a body of
mostly true beliefs before I can say that I understand the person at
all, before I can begin to pick out one belief as right or wrong.
But Quine's radical interpretation begins at home. Our own beliefs,
by our best lights, are mostly right, though some of them, presumably,
are wrong.
So whatever way you look at it, if I'm making sense to begin with, I'm
mostly right.
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings