From: Christopher Maloney (dude@chrismaloney.com)
Date: Fri Jul 02 1999 - 14:21:30 MDT
Sending this again ... sorry if it gets duplicated; I don't know
what the deal is. I've waited 16 hrs and it didn't show up.
Eliezer S. Yudkowsky wrote:
>
> "O'Regan, Emlyn" wrote:
> >
> > Eliezer,
> >
> > The seed AI concept has been rattling around in my head for a while. What's
> > bugging me is the concept of environment. I asked you what environment you
> > expected would be necessary for the AI a while back, and you said that the
> > environment would be the AI's code itself
>
> Yup.
>
> > (I think - I could have missed
> > your point, please have patience with a mere mortal).
> >
> > Absolutely, this must be the core of the environment. But there must be more
> > than this, mustn't there? For an AI to optimise itself, there must be some
> > definition of optimal, which implies a frame of reference. I think that
> > frame must be external to the code, because the idea of optimising your code
> > to make you better at optimising your code has an unfortunately circular and
> > empty feel to it.
>
> Only to the same mathematicians who gave us propositional logic. I
> mean, it sounds circular in theory, but in practice, it doesn't work
> that way, for a very simple reason which, unfortunately, doesn't
> translate into English without a blackboard. What I'd like to say is
> that the AI is reductive and the elements present obvious
> suboptimization metrics, and that the elements sum to noncodic abilities
> as well as codic abilities, so codic optimization is non-sterile.
>
> Okay, try this. The AI isn't composed of a single, "code-optimizing"
> domdule, right? It's composed of a causal analysis module and a
> combinatorial design module and a heuristic soup module and so on.
> These architectural modules, plus analogies to other application
> domdules, plus the codic domdule, all sum to the "code-optimizing"
> ability. In a given optimization problem, you have subproblems that are
> spread across the domdules. The performance on the subproblems, and the
> contribution of individual domdules to the success on subproblems, allow
> for local optimization.
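
If I follow the architecture point, it's something like this toy
Python sketch (all the names and numbers are mine, not Eliezer's
design): the circular-sounding goal of "optimize the optimizer"
decomposes into per-module, per-subproblem credit assignment, and
each module gets tuned locally from its own measured contribution.

# Toy sketch only; "Domdule" internals and the scoring are my inventions.
class Domdule:
    def __init__(self, name, skill=0.5):
        self.name = name
        self.skill = skill                     # crude competence measure

    def attempt(self, subproblem):
        # Score in [0, 1] for this module on this subproblem.
        return self.skill * subproblem.get(self.name, 0.0)

    def tune(self, feedback):
        # Local optimization: adjust only this module.
        self.skill = max(0.0, min(1.0, self.skill + 0.1 * feedback))

def optimize(modules, subproblems):
    for sp in subproblems:                     # sp: {module name: relevance}
        scores = [(m, m.attempt(sp)) for m in modules]
        best = max(s for _, s in scores)
        for m, s in scores:
            # Per-module credit: scoring above half the best earns
            # positive feedback, below it negative, so the big goal
            # breaks down into many small, non-circular tuning steps.
            m.tune(s - best / 2)

modules = [Domdule("causal"), Domdule("design"), Domdule("heuristics")]
optimize(modules, [{"causal": 0.9, "design": 0.3},
                   {"design": 0.8, "heuristics": 0.6}])
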
>
> No, I'm still being too complex. Remember EURISKO? It had heuristics
> optimizing heuristics? There were even heuristics being optimized to be
> better heuristic-optimizers? It wasn't sterile. Why? Because the
> heuristics were also being optimized for all sorts of other problems.
> And, more importantly, because "Examine nearby cases" being applied to
> "Investigate extreme cases" to yield "Investigate cases close to
> extremes" is a lot more specific than "optimize heuristics for
> heuristic-optimizing". The general case sounds sterile and circular
> because it's monolithic and general. But in actuality, you have
> sub-abilities dealing with subproblems.
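
Just to check I follow the EURISKO point, a toy sketch (Python
rather than Lisp; the class and the crude string rewrite are mine):
a heuristic is just data, so one heuristic applied to another yields
a more specific heuristic, and since every heuristic (including the
heuristic-modifying ones) is also scored on ordinary problems, the
loop has plenty of non-meta traction.

# Toy sketch only; names and mechanics are my inventions.
class Heuristic:
    def __init__(self, description, worth=0.5):
        self.description = description
        self.worth = worth     # earned on all sorts of problems, not
                               # just on heuristic-optimizing

def examine_nearby_cases(h):
    # "Examine nearby cases" applied to "investigate extreme cases"
    # yields the more specific "investigate cases close to extremes".
    return Heuristic(h.description.replace("extreme cases",
                                           "cases close to extremes"),
                     h.worth)

extremes = Heuristic("investigate extreme cases", worth=0.7)
near_extremes = examine_nearby_cases(extremes)
print(near_extremes.description)   # investigate cases close to extremes
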
It sounds like one could argue that this is all humanity is
engaged in now, and ever has been. In a way, everything is
a "sub-problem" of the problem of advancement to the next
level. Or is that stretching it too much?
I mean, when I think about your suggestion, I can imagine what
(for example) my wife would say. Something like "what a dry
and sterile existence"! But I would argue, and maybe you would
too, that the "sub-problems" are really what life is all about.
And I always think it's rather anthropocentric to think that
appreciating art and beauty and so forth are peculiarly human
abilities. I personally think that they are just ways of
thinking about patterns and analogies, an ability which an AI
will certainly have. I.e., "sub-problems"!
> You don't even need any other kinds of problems at all. The subproblems
> of the general problem of self-optimization provide enough diversity to
> prevent the circularity you're worried about. The major reason for
> programming other environments would be to provide sources of analogies
> and incremental paths to ideas that would bottleneck otherwise - the
> same reason a hacker learns languages ve'll never program in. But it's
> not *necessary*.
>
> -
>
> BTW - the surrealism involved in saying "Stop holding on to the past" to
> a Singularitarian is considerably larger than that involved in saying it
> to a 19-year-old.
> --
> sentience@pobox.com Eliezer S. Yudkowsky
> http://pobox.com/~sentience/tmol-faq/meaningoflife.html
> Running on BeOS Typing in Dvorak Programming with Patterns
> Voting for Libertarians Heading for Singularity There Is A Better Way
--
Chris Maloney
http://www.chrismaloney.com

"Knowledge is good"
    -- Emil Faber