From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 23 1998 - 18:25:47 MDT
Robin Hanson wrote:
>
> >I noticed that paper because it explained punctuated equilibrium in a way
> >that exported fairly well to breakthrough/bottleneck AI trajectories.
> >It was the first explanation I had seen with that property.
>
> I can describe my sense of AI progress in terms of this model as well. Early
> on in AI research people came across the big win concepts, and the rate of
> discovery of such big wins then declined with time. The main way in which the
> environment for AI programs is changing now is hardware improvement. Some big
> wins have to await enough compute power to verify/study them, and these sorts
> continue to show themselves more steadily with time. But none of these are so
> huge as to create an average factor-of-ten productivity win for AI programs.
As a description of Then and Now, I think I agree with this. I certainly
don't agree that we've got all the "big wins" needed to get to the human
level, or even all the big wins possible at current levels of computing power.
By implication, there are some big wins left, although that might not be true
for von Neumann architectures.
> As best I can tell, your reason for expecting big future wins seems to be that
> you, Eliezer, have personally come up with great (largely untested) designs
> for AI programs, and you're sure they're enough to change everything. Are you
> aware of how stereotypical this is of young people when they first get into AI?
Well, my other reason for expecting a breakthrough/bottleneck trajectory, even
if there are no big wins, is that there's positive feedback involved, which
generally turns even a smooth curve into a flat-then-steep one. And I think my
expectation of a sharp jump upward once the AI reaches architectural ability is
independent of whether my particular designs actually get there or not. In
common-sense terms, the positive feedback arrives after the AI has the ability
humans use to design programs.
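To put a number on that intuition, here's a toy sketch in Python. It is purely
my own illustration, not taken from any actual AI design; the starting point,
hardware rate, threshold, and feedback strength are all made-up parameters.
Capability grows at a steady external rate (think hardware) until it crosses
the point where the AI can turn its own capability on its own design, after
which each step's improvement scales with the capability already reached.

    # Toy model (illustrative assumption only, not an actual AI design):
    # capability improves at a steady external rate until it crosses the point
    # where the AI can apply its capability to redesigning itself, after which
    # each step's improvement scales with current capability.
    def capability_curve(steps, hardware_rate=0.1, design_threshold=2.0, feedback=0.5):
        c = 1.0
        history = []
        for _ in range(steps):
            c += hardware_rate            # steady outside improvement (hardware)
            if c >= design_threshold:     # AI gains the ability humans use to design programs
                c += feedback * c         # positive feedback: improvement scales with capability
            history.append(round(c, 2))
        return history

    print(capability_curve(20))
    # The first ten values creep along; once the threshold is crossed,
    # the numbers take off sharply.

Run it and the printed sequence is nearly flat for ten steps and then jumps,
which is all I mean by the curve going steep after architectural ability.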
But aside from that, yeah, I'm aware of how stereotypical it is. I'm hoping
that my Personal Improbability Field is large enough to break the rules. My
understanding of the AI Stereotype is that the youngster only has a single
great paradigm, and is loath to abandon it. I've got whole toolboxes full of
design principles, some of which I learned from non-AI code written over the
course of two years, and others of which I abandoned, building new theories from
the remnants. I have no illusions about the tremendous power of any
particular principle. I think building the AI will take a lot of hard work.
I can list the major unsolved problems; as I understand it, the stereotypical
youngster generally can't. Still, I could be wrong.
But remember - every now and then some youngster is right, and you _do_ get a
major revolution! Might as well say that physics is hopeless, since the vast
majority of new theories don't pan out...
(And don't tell me about my illusions about the tremendous power of positive
feedback. Positive feedback isn't a design principle, since it doesn't tell
you what to code. Designing modules that the AI can redesign is a principle,
and _not_ a very useful one.)
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.