From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Tue Sep 08 1998 - 12:21:36 MDT
Eliezer S. Yudkowsky writes:
>Quoting Max More:
> "... I have no doubt that
> human level AI (or computer networked intelligence) will be
> achieved at some point. But to move from this immediately to
> drastically superintelligent thinkers seems to me doubtful."
>...
>the seed AI's power either remains constant or has a definite maximum ...
>efficiency ... defined as ... the levels of intelligence achievable at
>each level of power ... more intelligence makes it possible for the AI
>to better optimize its own code. ...
>the basic hypothesis of seed AI can be described as postulating
>a Transcend Point; a point at which each increment of intelligence yields
>an increase in efficiency that yields an equal or greater increment of
>intelligence, or at any rate an increment that sustains the reaction.
>This behavior of de/di is assumed to carry the seed AI to the Singularity
>Point, where each increment of intelligence yields an increase of efficiency
>and power that yield a reaction-sustaining increment of intelligence.
>It so happens that all humans operate, by and large, at pretty much the
>same level of intelligence. ... the brain doesn't self-enhance, only
>self-optimize a prehuman subsystem. ...
>You can't draw conclusions from one system to the other. The
>genes give rise to an algorithm that optimizes itself and then programs
>the brain according to genetically determined architectures ...
But where *do* you draw your conclusions from, if not by analogy with
other intelligence growth processes? Saying that "superintelligence is
nothing like anything we've ever known, so my superfast growth estimates
are as well founded as any other" would be a very weak argument. Do you
have any stronger argument?
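To be concrete about what I take the quoted claim to be, here is one toy
reading of it (a sketch only; the decomposition into efficiency and power,
and the coefficients, are my assumptions, not anything from Eliezer's design):

    # Toy reading of the quoted "Transcend Point" condition (names and
    # numbers are assumptions of mine).  Take intelligence = efficiency *
    # power, hold power fixed, and let each round of self-optimization
    # convert the current level of intelligence into an efficiency gain.
    def self_improve(gain, power=1.0, efficiency=1.0, rounds=30):
        intelligence = efficiency * power
        for _ in range(rounds):
            efficiency += gain(intelligence)    # de bought at this level
            intelligence = efficiency * power   # di = de * power
        return intelligence

    # Past a "Transcend Point", de/di is large enough that each increment
    # of intelligence buys an equal or greater further increment, so
    # growth is geometric; under diminishing returns it is not.
    print(self_improve(gain=lambda i: 0.1 * i))  # sustained: ~17x in 30 rounds
    print(self_improve(gain=lambda i: 0.1 / i))  # diminishing: ~2.6x in 30 rounds

Whether anything like the first payoff schedule is actually available is, of
course, exactly what is in dispute.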
We humans have been improving ourselves in a great many ways for a long time.
By a six-year-old's definition of intelligence ("she's so smart; look at all
the things she knows and can do"), we are vastly more intelligent than our
ancestors of a hundred thousand years ago. Much of that intelligence is
embodied in our social organization, but even when people try their hardest
to measure individual intelligence, divorced from social supports, they
still find that such intelligence has been increasing dramatically with time.
This experience with intelligence growth seems highly relevant to me.
First, we see that the effect of smarter creatures being better able to
implement any one improvement is counteracted by the fact that one tries the
easy big-win improvements first. Second, we see that growth is social; it
is the whole world economy that is improving together, not any one creature
improving itself. Third, we see that easy big-win improvements are very rare;
growth is mainly due to the accumulation of many small improvements.
(Similar lessons come from our experience trying to write AI programs.)
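To make these observations concrete, here is a toy sketch in the same spirit
(the particular payoff schedule is an assumption of mine, chosen only to
illustrate a pool of improvements whose easy big wins get used up first):

    # Toy sketch (assumptions mine): a fixed pool of possible improvements
    # whose payoffs fall off sharply, tried in order of payoff.  Being more
    # capable makes each improvement land sooner, but does not refill the
    # pool with new big wins.
    payoffs = [1.0 / (k + 1) ** 2 for k in range(1000)]  # biggest wins first

    capability, time_spent = 1.0, 0.0
    for gain in payoffs:
        time_spent += 1.0 / capability   # smarter -> next improvement comes sooner
        capability *= 1.0 + gain         # but the remaining wins keep shrinking
    print(capability, time_spent)        # capability converges near 3.7x, no runaway

The positive feedback shows up only as the same bounded set of gains arriving
somewhat sooner; a runaway would require the pool of big wins to keep
refilling.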
Now it is true that AIs should be able to more easily modify certain
aspects of their cognitive architectures. But it is also true that human
economic growth is partly due to slowly accumulating more ways to more
easily modify aspects of our society and ourselves. The big question is:
why should we believe that an isolated "seed AI" will find a very long stream
of easy big-win improvements in its cognitive architecture, when this seems
contrary to our experience with similar intelligence growth processes?
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614