From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Thu Sep 17 1998 - 11:29:49 MDT
Eliezer S. Yudkowsky seems to be the only person here willing to defend
"explosive growth," by which I mean sudden very rapid world economic growth.
So I'd like to see what his best argument is for this. But Eliezer, if
we're going to make any progress, we're going to have to *focus*. I don't
want to wander off on a dozen irrelevant tangents.
So first, please make clear which (if any) other intelligence growth
processes you will accept as relevant analogies. These include the evolution of
the biosphere, recent world economic growth, human and animal learning, the
adaptation of corporations to new environments, the growth of cities, the
domestication of animals, scientific progress, AI research progress, advances
in computer hardware, and the experience of specific computer learning programs.
You seemed to say no analogies are relevant, yet your last response to me
touches on AI, learning general relativity, Cro-Magnons, Lamarckian biology,
the rise of cities, and many other topics. I don't want to talk about these
things if they are tangential to your argument.
If no analogies are relevant, and so this is all theory-driven, can the theory
be stated concisely? If so, please state it; if not, where did you get the
theory you use? Does anyone else use it, and what empirical support can it
have, if analogies are irrelevant?
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614