Re: SITE: Coding a Transhuman AI 2.0a

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Mon May 22 2000 - 01:10:54 MDT


Matt Gingell, stinking Nazi [;)] wrote:

> Well, take something like geometry: is it true that the interior
> angles of a triangle always add up to 180, or is that just an
> arbitrary decision we made because it happens to suit our particular
> purposes?

Arbitrary decisions! In non-Euclidean geometries, the angles of a
triangle may NOT add up to 180. And HOW do we decide which geometry
to use in a given situation? Those darn *goals* again! If only we
could rid them from our thought; then we'd see the Forms at last.
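
(If you want it concrete: on a sphere of radius R, Girard's theorem gives
angle sum = pi + Area/R^2, so the triangle cut out by the equator and two
meridians 90 degrees apart has three right angles, 270 degrees in all.
Which formula you reach for depends on whether you're surveying a backyard
or navigating a planet. Goals again.)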

> Consider this message: you're looking at a bunch of phosphors lit up
> on a screen; does it have any information content outside our
> idiosyncratic conventions?

Language is the *quintessential* example of a set of idiosyncratic
conventions. If this weren't true, then our language would have to
be just as it is, necessarily. I don't think you'd want to make that
strong a claim.

> Independent of our desire to communicate, is any way of perceiving
> it as good as any other? I would say it has structure: that it is
> built from symbols drawn from a finite alphabet and that this is
> true regardless of the perceiver's goal.

> This is where the criterion of minimum description length comes in: if
> I generalize a little bit and allow pixels some fuzziness, then I can
> re-represent this message at 7 bits per symbol - which is a much
> smaller encoding than a bitmap. This is a nice evaluation function
> for a hypothesis because it doesn't require feedback with the outside
> world. With a big enough sample I can get space savings by classifying
> common strings into words, and then lists into structured instances of
> a grammar.

Sure. It's got a structure. Anybody who's like us in the relevant
way would notice that. But you elide too much if you fail to take a
close look at the ways in which we'd have to be similar. You think
that any old "radical interpreter" will do. I don't see any
motivation for believing this.

(Unless, as I suggest later, you simply refrain from calling something
"intelligent" unless it can find the results you want, in which case,
you've got a hollow victory on your hands.)
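
(For what it's worth, the arithmetic of the saving you describe is easy
to check. A rough sketch of the comparison in Python; this isn't your
encoder, and the 8x16 glyph cell is just my assumption:

    # Same message, two descriptions: a 1-bit-deep bitmap of the rendered
    # glyphs vs. a 7-bit code over a finite alphabet of symbols.
    message = "you're looking at a bunch of phosphors lit up on a screen"

    GLYPH_W, GLYPH_H = 8, 16                        # assumed character cell size
    bitmap_bits = len(message) * GLYPH_W * GLYPH_H  # 128 bits per symbol
    ascii_bits = len(message) * 7                   # 7 bits per symbol

    print(bitmap_bits, ascii_bits)                  # 128 vs. 7 bits per symbol

Both descriptions pick out the same phosphors; the short one just
exploits the structure you mention.)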

> If we are to understand what intelligence is, we must construct a
> definition which is not particular to a design process, a set of
> goals, or a set of sensors and limbs. Implicit in this statement is
> the notion that the word 'intelligent' actually means something, that
> there's a meaningful line somewhere between intelligence and clever
> algorithms which fake universality by virtue of sheer vastness.

I reject the notion that arguing against you would require me to
conclude that the word "intelligent" is meaningless. On the contrary,
I argue that the word "intelligent" does have meaning *to us*, in the
language *we* speak, *today* at the beginning of the 21st century.
Your assertion requires me to believe that this word somehow has
meaning beyond our language, beyond us. It requires "intelligence" to
be something transcendent, rather than simply meaningful.

It is in coming to terms with the fact that intelligence is not
transcendent that AI will finally get off the ground. We'll finally
start coding in the stuff that we'd hoped the AI would find simply by
virtue of it being true. ("...and, since it's preloaded with the
general truth finding algorithm, it'll SURELY find it eventually,
given enough computing power, time, and most of all GRANT MONEY...").

> Minds are imperfect and heuristic; they only approximate a truth which
> is, as you point out, uncomputable. A machine might outdo us, as
> Newton was outdone by Einstein, by finding a better model than
> ours. But any intelligent machine would have a concept of, say,
> integer at least as a special case of something (perhaps vastly)
> broader.

Actually, I think I'd largely agree with you in saying that any
"intelligent" machine would have concepts like that, but not for the
reasons you state. Rather, this is true just because *we'd* rule out
the possibility that something is intelligent if it weren't
sufficiently like our image of an intelligent being, integers and all.
This isn't telling me anything interesting about a "natural kind" (tm)
which we call "intelligent", but about the sort of things which we'd
be likely to call intelligent.

By the same token, any human-equivalent intelligent machine would be
able to pass the Turing Test. And of course, you simply *must* have a
concept of integers (or something close enough) to pass the Turing
Test. So any human-equivalent intelligent machine must have something
like a concept of integers. Doesn't have the same *oomph*, I think.

> Ockham's razor would be one of the core principles of the general
> purpose learning system I'm interested in - hand coded rather than
> acquired, though not necessarily explicitly. Something is wired in;
> obviously I don't think you can just take a blank Turing machine tape
> and expect it to do something useful.

No, you just think that if you take a Turing machine and plug in the
"general answer finder" tape along with raw sense data, you get all
the same beliefs about objects we do. But the "general answer finder"
isn't a simple learning algorithm. It's just the way we go about
solving problems: hacks, kludges and all. It's all got to go in by
hand; the simple learning algorithm was "try it and see if you breed."
We just don't have that kind of time on our hands.
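
(If it helps pin down what a hand-coded Ockham's razor can look like,
here's the usual two-part-code toy, my sketch and not your design: score
a hypothesis by the bits needed to state it plus the bits needed to
encode the data given it, and prefer the smaller total.

    import math

    def total_bits(seq, p, param_bits):
        """Bits to state the model (param_bits) plus bits to encode seq
        under a Bernoulli(p) model of coin flips."""
        heads = seq.count('H')
        tails = len(seq) - heads
        data_bits = -(heads * math.log2(p) + tails * math.log2(1 - p))
        return param_bits + data_bits

    flips = 'H' * 70 + 'T' * 30
    print(total_bits(flips, 0.5, 0))    # fair coin: 0 + 100.0 bits
    print(total_bits(flips, 0.7, 10))   # biased coin: 10 + ~88.1 bits; it wins

The razor here is the fixed scoring rule, not something the system
acquires; all the interesting work is in where param_bits and the rest
of the hand-coded machinery come from.)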

> > Epistemologically speaking, how would we know if we had stumbled upon
> > the general algorithm, or whether we were just pursuing our own
> > purposes again? For that matter, why would we care? Why not call our
> > own beliefs Right out of elegance and get on with Coding a Transhuman
> > AI?
>
> We couldn't know, but if we got good results then we'd be pretty sure
> we were at least close. Whether you care depends on your motivation:
> I'm interested in intelligence because I like general solutions to
> problems more than I like special case ones, and you don't get a more
> general solution than AI. I futz around with this stuff out of
> intellectual curiosity; if the mind turns out to be necessarily ugly
> I'll go do something else. I don't really care about saving the world
> from grey goo or the future of the human race or whatever.

That's a little cavalier, considering that you're one of us, isn't it?
;) Sure, sure, let the rest of us do the HARD work... ;)

Anyway. I think you're forgetting that this is the general truth
finding algorithm which WE supposedly use in finding truth. So how
would we know if we'd found it? We'd "check" to see if our results
were good, and if so, we're close, you say. But how would we "check"
on this? Well, we'd run our truth finding algorithm again, of course,
since that's all we've got in the search for truth. According to it,
our results are "good." Have we found the general truth finding
algorithm? Well, the general truth finding algorithm seems to say so!

Our truth finding algorithm is all we've got. It's not general or
universal, but it is mostly right. That's good enough for me... for
my purposes.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


