From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 25 1998 - 12:05:28 MDT
Robin Hanson wrote:
>
> I am happy to consider simplified models of systems as a means to understanding.
> My complaint with your simple models isn't that they are too simple, it is that
> it is not clear why they are better models than equally simple models without
> explosive growth.
I don't think that _any_ simple models are good, neither mine nor yours. I
don't think that any allegedly quantitative models are any good. If we want
to know what can be done with nanotechnology, the best way to find out is to
ask Drexler or go to work for Zyvex, not extrapolate from the rise of
agriculture. Similarly, there are big wins in the area of AI because when I
visualize the system, I see lots of opportunities for caching that an
intelligent compiler could easily use to shrink the system to a tenth of its
size, as well as adding a great deal of cognitive capacity. Then when I
visualize the basic problems of intelligence, I think that a computer
programmer writing a programming cortex can get a big win in programming
ability - the same way you could get a big win in music by adding an auditory
cortex. Right now we're struggling to write programs that, to an intuitive
programmer, would be only words in a sentence.
In short, I think there are big wins because I've looked a little beyond the
limits of my own mind to see how intelligence can be enhanced with deeper
searches or new intuitions, just as Drexler has looked beyond the limits of
modern-day technology to what can be done with molecular engineering and
self-reproducing robots. I think there's an unimaginable revolution because
if my mind can see that far into the future, imagine how far the future can
see into the future! And these are ultimately the only reasons for believing
in a Horizon or a Singularity, and neither can be argued except with someone
who understands the technology. (You can still get memetic followers who
don't understand the technology but believe in it anyway, but they can't argue
the technology. I believe in nanotechnology because I'm too busy with AI to
understand molecular engineering, but I don't argue about it.)
Anyway, the primary point that I learned from the Singularity Colloquium is
that neither the skeptics nor the Singularitarians are capable of
communicating with people outside their professions. (Or rather, I should say
that two people in different professions with strong opinions can't change
each other's minds; I don't know whether any nontechnical spectators were swayed,
or whether people with tentative technical opinions were convinced.
Anyone?) I don't think there's anything horrible about that, either. It's
the way things usually are.
> >"Intelligence is not a factor, it is the equation itself." You've never
> >responded to my basic assertion, which is that sufficient intelligence (which
> >is probably achievable) suffices for nanotech; which in turn suffices to turn
> >the planet into a computer; which in turn counts as "explosive growth" by my
> >standards. It's difficult to see how the literature on the rise of
> >agriculture relates...
> >
> >"Sufficient" = Wili Wachendon with a headband.
> >"Achievable" = The end of my seed AI's trajectory, running on 10^13 ops.
> >"Nanotech" = What Drexler said in _Engines of Creation_.
>
> (Intelligence is an equation?)
Sure. I think that rather than A(P, O, I), a more evocative way of phrasing
the equation would be P' = I(P, O), O' = I(P, O), I' = I(P, O). In other
words, I think that all the complexity and pattern in the equation is internal
to "I". Intelligence is the most complex thing there is, and it understands
and manipulates any patterns it finds itself in, so given the omnipotence -
self-programming for internal omnipotence, nanotechnology for external
omnipotence - it tends to dominate any process it finds itself in. You
probably don't find this argument at all convincing, because it's ultimately
one of those profession-based intuitions.
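To make the structural claim concrete, here is a toy iteration (the names P, O,
and I come from the exchange above; the functional forms and constants are
invented purely for illustration, and only the shape of the feedback matters).
In the first dynamic, intelligence is just one more fixed factor; in the
second, it is the update rule and rewrites itself on every step, so each pass
also improves the thing doing the improving:

# Toy sketch, not anyone's actual model: the growth laws below are assumed.

def factor_model(steps=10, p=1.0, o=1.0, i=1.0):
    """Intelligence enters as a fixed multiplicative factor."""
    for _ in range(steps):
        p = p + i * 0.1 * p * o   # assumed growth law
        o = o + i * 0.1 * o
    return p, o, i

def equation_model(steps=10, p=1.0, o=1.0, i=1.0):
    """Intelligence is the update rule and also rewrites itself:
       P' = I(P, O), O' = I(P, O), I' = I(P, O)."""
    for _ in range(steps):
        new_p = p + i * 0.1 * p * o
        new_o = o + i * 0.1 * o
        new_i = i + 0.1 * i * (p + o)   # I improves along with its substrate
        p, o, i = new_p, new_o, new_i
    return p, o, i

if __name__ == "__main__":
    print("intelligence as a factor:    ", factor_model())
    print("intelligence as the equation:", equation_model())

The specific numbers mean nothing; the difference in the growth rates is the
whole point.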
> The question is *how fast* a nanotech enabled civilization would turn the
> planet into a computer. You have to make an argument about *rates* of change,
> not about eventual consequences.
If they can, if they have the will and the technology, why on Earth would they
go slowly just to obey some equation derived from agriculture?
In fact, I might say that here is the root of our dispute. Economics deals
with limits. Sometimes a limit gets raised by an order of magnitude and
there's a major revolution. But nanotechnology and other fast infrastructures
deal with abilities that are omnipotent from the viewpoint of anyone but a
physicist or cosmologist. And AI programming deals with what Bostrom calls
"autopotence", systems capable of arbitrary rewrites of their own source code.
It really isn't at all surprising that I would focus on the power and you
would focus on the limits.
I've tried to articulate why intelligence is power. It's your turn. What are
the limits? And don't tell me that the burden of proof is on me; it's just
your profession speaking. From my perspective, the burden of proof is on you
to prove that analogies hold between intelligence and superintelligence; the
default assumption, for me, is that no analogies hold - the null hypothesis.
-- 
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.