Re: SITE: Coding a Transhuman AI 2.0a

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Sun May 21 2000 - 03:25:01 MDT


Matt Gingell cried out:

> Dan Fabulich wrote:
>
> > Matt Gingell wondered:
> > ...
>
> > BTW, the fact that no such Holy Grail exists also provides a plausible
> > explanation as to why AI has failed so often in the past. In an
> > important sense, were you right about what intelligence is like, AI
> > would be easier than it is, not harder.
>
> How many attempts at heavier-than-air flight failed before we figured
> that out? Just because we haven't found an answer yet or we haven't
> got a fast enough machine to try it (a light enough engine) doesn't
> mean it isn't out there. We've only had computers for 50 years and
> already we've done things that even a hundred years ago would have
> been generally thought impossible. Give AI a break. Obviously there
> are huge holes left to be filled and principles yet to be discovered,
> but it's a very young science.

That claim was really just a sidebar designed to give my main argument
more plausibility. On its own, this explanation, as given, won't fly.

> > Look, suppose you WERE somehow able to map out a somewhat
> > comprehensive list of possible conceptual schemes which you would use
> > to categorize the "raw sense data." How could you algorithmically
> > determine which of these conceptual schemes worked better than some
> > others? Any others? Our ancestors had a way: use it as a rule for
> > action, see if it helps you breed. You and your machines, even at
> > 10^21 ops/sec, would have nothing to test your values against.
>
> When Newton developed his theory of gravitation, did he iterate
> through the space of all possible physical laws till he found one that
> matched his data? You seem to still be convinced that learning,
> discovering patterns in facts, is a blind search.

No. Newton employed conceptual tools which his ancestors had stumbled
upon and compared his results against those. These were, as the
saying goes, necessary but not sufficient. Still, he couldn't have
done it without them. Shoulders of giants and whatnot.

> A concept scheme is a theory about the world, a model of the way
> things in the world work and the sort of laws they obey. Some ways of
> looking at the world are objectively better than others, regardless of
> their utility as a tool for perpetuating your own genes. Intelligence
> is that which extracts those models from raw data. Feedback with the
> world is a useful tool for finding them, but it isn't the
> only one.

Now, you might be making a number of different kinds of claims when
you say this; I'd agree with you if you're making one kind of
argument, and mostly disagree if you're making a certain other kind.

One kind of argument you might be making goes like this: some
conceptual schemes are useful independently of their utility in
breeding, because there are *other* purposes we might have for a
conceptual scheme which can justify one conceptual scheme over
another, despite evolutionary disadvantages. I'd have to agree with
this. I'm only making the smaller claim that the reason we happen to
have the belief-fixing faculties we do is that they helped our
ancestors breed; that this just happens to be the way Newton did it.

Another kind of argument you might be making goes like this: some
conceptual schemes are just Truly Right, independent of any purpose we
might have for them today or ever. (This sounds more like the
argument you're actually making.) I don't think making a statement
like this matters. Certainly, there are some conceptual schemes which
are just right for the purposes which we have now, and we largely have
to assume that we're mostly Right about our beliefs and purposes. So
we're going to have the beliefs we've got, whether we are in touch with
the One Truth or whether they just suit our purposes.

That said, however, I'd say you're on the wrong track
to think that a "pure mind" abstracted from any goals would share our
beliefs. Better to say that we've got the right intentions, we've got
the right purpose, and that any machine built to that purpose would
also stumble across the same means of fulfilling it as we do.

Visions of elegance, simplicity, etc. are excellent. I share them
with you. However, we got OUR beliefs about elegance through
evolution; maybe that got us in touch with the one true Platonic
Beauty; maybe it didn't. Either way, there's no reason to think that
an AI will stumble across Ockham's Razor and find it right for its own
purposes (which it may or may not think of as 'objective') unless it
shares ours (in which case, those purposes will have to be hand-coded in,
at least at first), because being right is no explanation for how a mind
comes to know something. If you asked me "how did you know that she
was a brunette?" and I replied "because I was right," I'd have missed
your point completely, wouldn't I?

> Here's a simple example of the sort of thing I'm talking about:
>
> Suppose there exists some Vast set of character strings, and I've
> posed you the challenge of characterizing that set from some finite
> number of examples. The sample you've got contains instances like: (ab),
> (aab), (abb), (aaabbb), (abbb), etc.
>
> The answer is obvious, or at least it would be with enough samples:
> this is the set of strings beginning with one or more instances of 'a'
> followed by one or more instances of 'b.' Of course, any other answer
> is defensible: you could say we're looking at the set of all strings
> built of characters in the alphabet and we just got a misleading
> sample. Or you could say this set only contains the examples you've
> seen. Both those answers are wrong though, in the same way epicycles
> are wrong. It's my contention that there exists some general algorithm
> for determining those 'right' answers.

If it's an algorithm, it's incomplete. There will be some undecidable
questions, which are decidable by another, stronger algorithm. This
may not bother you, but it should tell you that Truth is not an
algorithm, that it cannot be reached, or even defined,
algorithmically.
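
The point generalizes, and a line of symbols makes it sharp. (This is
just the standard Gödel/Turing construction, my gloss rather than
anything in your post.) Take any sound, computable system A_0 strong
enough to do arithmetic. Gödel's second incompleteness theorem hands
you a question, Con(A_0), that A_0 cannot decide; the strictly
stronger system A_1 = A_0 + Con(A_0) decides it trivially, then faces
the same problem itself:

    A_0 < A_1 < A_2 < ...,   where   A_{n+1} = A_n + Con(A_n)

Any computable system you put at the top of that ladder immediately
grows another rung. That's all I mean by saying Truth can't be
defined algorithmically.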

Epistemologically speaking, how would we know if we had stumbled upon
the general algorithm, or whether we were just pursuing our own
purposes again? For that matter, why would we care? Why not call our
own beliefs Right out of elegance and get on with Coding a Transhuman
AI?
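
For what it's worth, here's roughly what your "general algorithm"
looks like when I try to write it down: a toy Python sketch of
two-part minimum-description-length scoring. (The candidate pool, the
8-bits-per-character model cost, and the alphabet are all assumptions
I picked by hand; treat it as an illustration, not a proposal.) Each
candidate is charged for its own length plus the bits needed to pick
each sample out of the strings it allows:

    import itertools, math, re

    samples = ["ab", "aab", "abb", "aaabbb", "abbb"]

    def data_cost(pattern):
        # Bits to single out each sample among the same-length strings
        # the pattern admits: log2 of the number of admitted strings.
        prog = re.compile(pattern)
        cost = 0.0
        for s in samples:
            allowed = sum(1 for t in itertools.product("ab", repeat=len(s))
                          if prog.fullmatch("".join(t)))
            if allowed == 0:
                return float("inf")   # pattern can't explain this sample
            cost += math.log2(allowed)
        return cost

    def total_cost(pattern):
        # Crude two-part code: 8 bits per pattern character + data cost.
        return 8 * len(pattern) + data_cost(pattern)

    candidates = ["a+b+",              # the "right" answer
                  "[ab]+",             # any string over the alphabet
                  "|".join(samples)]   # only the examples seen
    for pat in sorted(candidates, key=total_cost):
        print("%-24r %7.1f bits" % (pat, total_cost(pat)))

Run it and "a+b+" wins handily; the other two lose the way epicycles
lose. But notice the algorithm only runs once somebody has chosen the
alphabet, the candidate pool, and the exchange rate between model bits
and data bits. Ask where those choices come from and you're back to my
question.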

> > Consider a search space in which you're trying to find local maxima.
> > Now imagine trying to do it without any idea of the height of any
> > point in the space. Now try throwing 10^100 ops at the project.
> > Doesn't help, does it?
>
> You do have a criterion: The representation of a theory should be as
> small as possible, and it should generalize as little as possible while
> describing as many examples as possible. It's Occam's Razor. I'll read
> up on seed AI if you agree to read up on unsupervised learning
> (learning without feedback or tagged examples).

Ahem. And WHY do we have Ockham's Razor? I've got my story. What's
yours? Surely not "because we're right about it"? That's missing the
point.

Show me the faculty, I'll show you the feedback. (Though I may have
to point to history to do it.)
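
And here's the search-space point as a few lines of Python (again a
toy of my own, with a made-up 40-bit landscape): give the climber a
height function and it walks straight up to the peak; take the height
function away and every move looks as good as any other, so all the
ops in the world buy you a random walk.

    import random
    random.seed(0)

    N, STEPS = 40, 2000
    target = [random.randint(0, 1) for _ in range(N)]     # the hidden peak
    height = lambda x: sum(a == b for a, b in zip(x, target))

    def climb(height_fn=None):
        x = [random.randint(0, 1) for _ in range(N)]
        for _ in range(STEPS):
            y = x[:]
            y[random.randrange(N)] ^= 1                   # flip one bit
            # With no height function, every move is accepted blind.
            if height_fn is None or height_fn(y) >= height_fn(x):
                x = y
        return x

    print("with feedback:", height(climb(height)), "/", N)
    print("blind search: ", height(climb(None)), "/", N)

The feedback is doing all the work. Evolution handed us ours; your
algorithm has to get its height function from somewhere too.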

> > I see no reason to think that there is a "raw mind." There are some
> > minds, such as they are, but there is nothing out there to purify.
> > (Eliezer and others call this mythical purificant "mindstuff.")
>
> A heart is a pump, an eye is a camera, why can't a brain be a baroque
> biological instance of something simpler and more abstract?

What's a raw pump? What's a pure camera? I don't think I see your point.

> > To the extent that I can make this analogy in a totally non-moral way
> > (I'll try), this is the difference between fascist eugenics and
> > transhuman eugenics. Fascist eugenics tries to breed out impurities,
> > to bring us back to the one pure thing at our center; transhuman
> > eugenics works to create something completely different, in nobody's
> > image in particular.
> >
> > [Again, I don't use this to imply anything morally about you or anyone
> > who agrees with you, but merely to draw the distinction.]
>
> Thanks for qualifying that, but it's still a hell of a loaded
> analogy. I prefer to think of blank-slate intelligence as an
> egalitarian notion: we are all the same, differing only in our
> experience and hardware resources, be we human, alien, or
> machine. The politics is irrelevant to the question, of course, but
> I'd still rather not be called a Nazi.

I didn't intend to "call you a Nazi." One can share some beliefs with
the Nazis without sharing all of them, and without suffering from any
moral problems as a result. I share lots of beliefs with the Nazis,
but I also disagree with them on a variety of substantial issues. I'm
sure you do too. The thing to note here is not that it's the
fascists who said it, but that the distinction exists and is worth
drawing.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


