AI big wins (was: Punctuated Equilibrium Theory)

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Thu Sep 24 1998 - 10:51:22 MDT


Eliezer S. Yudkowsky writes:
>Well, my other reason for expecting a breakthrough/bottleneck architecture,
>even if there are no big wins, is that there's positive feedback involved,
>which generally turns even a smooth curve steep/flat. And I think my
>expectation about a sharp jump upwards after architectural ability is
>independent of whether my particular designs actually get there or not. In
>common-sense terms, the positive feedback arrives after the AI has the ability
>humans use to design programs.

Let me repeat my call for you to clarify what appears to be a muddled argument.
We've had "positive feedback", in the usual sense of the term, for a long time.
We've also been able to modify and design AI architectures for a long time.
Neither of these considerations obviously suggests a break with history.
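As a minimal sketch of the dynamic in dispute (my own illustration, not a model either poster proposed): suppose a capability feeds back into its own rate of improvement with a constant feedback coefficient r. The result is exponential growth, which is fast but perfectly smooth, with no sharp break anywhere in the trajectory.

```python
def simulate(x0, r, steps):
    """Capability trajectory under simple positive feedback:
    each step, capability grows in proportion to itself."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + r))
    return xs

traj = simulate(1.0, 0.1, 50)

# The step-to-step growth ratio is constant (1 + r) throughout --
# positive feedback by itself produces a smooth exponential,
# not a discontinuity.
ratios = [b / a for a, b in zip(traj, traj[1:])]
```

Under this toy model, getting a "sharp jump" requires something beyond positive feedback per se, e.g. a feedback coefficient that itself changes abruptly, which is the extra assumption Hanson is asking to have spelled out.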

>My understanding of the AI Stereotype is that the youngster only has a single
>great paradigm, and is loath to abandon it. I've got whole toolboxes full ...

I think you're mistaken - lots of those cocky youngsters have full toolboxes.
("Yup, mosta gunslingers get kilt before winter - but they mosta got only one
 gun, and looky how many guns I got!")

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:36 MST