From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Fri Sep 18 1998 - 16:29:18 MDT
Eliezer S. Yudkowsky wrote:
>That section of "Staring Into the Singularity" (which Hanson quoted) was
>intended as an introduction/illustration of explosive growth and positive
>feedback, not a technical argument in favor of it. ...
But it was by far the most technical argument I found in your web pages on
timing. If I missed something more technical, please point it out.
>Above all, the scenario completely ignores three issues:
>Nanotechnology, quantum computing, and increases in intelligence rather than
>mere speed. This is what I meant by a "pessimistic projection". ...
>So what good was the scenario? Like the nanotech/uploading argument,
>it's a least-case argument.
My critique and Damien's suggest that this scenario may be greatly optimistic
on other counts. You need to defend it against those critiques if you want
it to be accepted as a plausible lower bound on future growth rates.
>Actual analysis of the trajectory suggests ...
>Some of the things Hanson challenges me to support/define have already been
>defined/supported in the sections of "Human AI to transhuman" which I have
>posted to this mailing list ...
If you think that post is a persuasive technical analysis of timing claims,
rather than suggestive prose, I suspect you don't know what such an analysis is,
and so my requests for it are futile. Does *anyone* reading this other than
Eliezer think that Eliezer's first post constitutes such a persuasive
technical analysis?
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614