Singularity: Human AI to superhuman

From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Fri Sep 18 1998 - 11:33:11 MDT


>> Eliezer S. Yudkowsky seems to be the only person here willing to defend
>                                       ^^^^
>I object to the implication that I'm fighting a lone cause.

I meant no such implication.

>There are many others, not-surprisingly including AIers (like
>Moravec), and nanotechnologists (like Drexler), and others who actually deal
>in the technology of transcendence, who are also Strong Singularitarians. In
>fact, I can't think offhand of a major SI-technologist who's heard of the
>Strong Singularity but prefers the Soft.

Is it clear they mean the same thing by "strong singularity"? And if so many
people agree, why can I find no published (i.e., subject to editorial review)
coherent analysis in favor of explosive growth?

>> "explosive growth," by which I mean sudden very rapid world economic growth.
>
>I don't know or care about "very rapid world economic growth". I think that
>specific, concrete things will happen, i.e. the creation of superintelligence
>operating at a billion times a human's raw power, ...

But it's not clear what "X times human power" means. We already have machines
this much faster than humans at arithmetic. It seems to me that the important
measure of intelligence is its ability to solve real problems, and the natural
measure of this is the income such thinkers can command. If you reject this
measure, you need to propose a substitute.

Even if you do propose one, there is still the question of *when* you say this
will happen. Most people would accept that it might happen within a billion
years, but you seem to be claiming something much stronger. Economic growth
rates seem to me a natural "when" measure, but if you reject that as well, you
need a substitute.
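
To make concrete what a growth-rate "when" measure looks like, here is a toy
calculation (the particular rates are only illustrative, not anyone's actual
forecast): at a constant exponential rate r, world product doubles in
ln(2)/ln(1+r) years, so a few percent per year means doubling in decades,
while "explosive" growth would mean doubling in a year or less.

  # Toy illustration only; the growth rates below are hypothetical choices.
  import math

  def doubling_time_years(annual_growth_rate):
      # Years for world product to double at a constant exponential rate.
      return math.log(2) / math.log(1 + annual_growth_rate)

  print(doubling_time_years(0.04))  # roughly 17.7 years at 4%/yr
  print(doubling_time_years(1.00))  # 1 year if output doubled annually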

>> So I'd like to see what his best argument is for this.
>
>Arguments AGAINST a Singularity generally use the following assumptions:

Until the claim is clarified, and at least one argument offered for it,
there is no need to consider arguments against it.

>Arguments FOR a Singularity usually require some, but not all, of the
>following assumptions: ...
>* There exists at least one fast-infrastructure technology, such as
>nanotechnology, which is capable of manufacturing massive computer power in a
>short time.
>* Intelligence above a certain level can rapidly improve itself or design a
>successor, using self-sustaining increments of optimization or fast infrastructure.
>... (Many of these arguments are technical, rather than philosophical, which is as
>it should be.)

To make a technical argument, you need to make "short time", "rapidly", and
"self-sustaining" more precise.

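To illustrate why that precision matters, consider a toy self-improvement
model of my own construction (nothing Eliezer has proposed): let a system's
intelligence I grow as dI/dt = c*I^a. Whether the process counts as
"self-sustaining" and "rapid" then turns entirely on the unspecified exponent
a: a < 1 gives only polynomial growth, a = 1 gives ordinary exponential
growth, and a > 1 gives a finite-time blowup.

  # Toy model, my construction only: dI/dt = c * I**a, via simple Euler steps.
  def simulate(a, c=1.0, I0=1.0, dt=1e-3, t_max=10.0, cap=1e9):
      I, t = I0, 0.0
      while t < t_max and I < cap:
          I += c * I**a * dt
          t += dt
      return t, I

  for a in (0.5, 1.0, 1.5):
      print(a, simulate(a))
  # a=0.5 reaches only about 36 by t=10; a=1.0 grows like e**t;
  # a=1.5 races past the cap at about t=2.

Until you say which regime you are claiming, and why, "rapidly" carries no
testable content.
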
>> If no analogies are relevant, and so this is all theory driven, can the theory
>> be stated concisely? If not, where did you get the theory you use? Does
>> anyone else use it, and what can be its empirical support, if analogies are
>> irrelevant?
>
>The _hypothesis_ can be stated concisely: Transhuman intelligence will move
>through a very fast trajectory from O(5X) human intelligence to nanotechnology
>and billions of times human intelligence. The valid reasons for believing
>this to be the most probable hypothesis are accessible only through technical
>knowledge, as is usually the case.

I think you will find that I have sufficient technical background to understand
whatever reasons you may offer. I have skimmed through many of your web pages,
including http://www.tezcat.com/~eliezer/AI_design.temp.html , but I find that
the closest you come to an analysis of times and speeds is in
http://www.tezcat.com/~eliezer/singularity.html , which I will respond to in my
next post.
 

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614


