Re: Professional intuitions

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 25 1998 - 15:53:58 MDT


Robin Hanson wrote:
>
> Asking Drexler has some obvious problems with bias. Better to evaluate the
> arguments he presents.

I did. As far as I can tell, they're OK. But I don't delude myself that I
can understand them simply because I'm an extremely bright person with a grasp
of basic physics and chemistry and because Drexler is a good explainer. There
is a difference between being an amateur at something, no matter how well
informed, and spending years at it. It's the field you think of as My
Specialty that you wind up with powerful intuitions in. I can sort of
understand why
quantum physics is ugly and General Relativity is beautiful, not because my
father was a physicist, but because there are basic principles that relate to
the nature of causality. But I don't think I understand in the same way
Einstein did, and I don't try to deal with the problem in the way Penrose
does. I have no intuition telling me that Drexler is right; I read his book
and he doesn't sound like a crank and didn't say anything I know is false, so
I have no choice but to trust his conclusions - to default to Drexler on the
matter of nanotechnology.

> >In short, I think there are big wins because I've looked a little beyond the
> >limits of my own mind ... these are ultimately the only reasons for believing
> >in a Horizon or a Singularity, and neither can be argued except with someone
> >who understands the technology. ...
> >Anyway, the primary point that I learned from the Singularity Colloquium is
> >that neither the skeptics nor the Singularitarians are capable of
> >communicating with people outside their professions. (Or rather, I should say
> >that two people in different professions with strong opinions can't change
> >each other's minds; ...
>
> Please don't attribute any disagreement to my failing to understand AI.
> I think you will find that I can match any credentials you have in AI (or
> physics or nanotech for that matter).

Oh, I'm sure you understand AI! Enough to come up with original ideas, for that
matter. Still, you've got more powerful intuitions in social science. I
daresay that you may even be leveraging your understanding of AI with your
understanding of psychology, social agent interactions, or just the intuitions
you picked up in economics. Not that there's anything wrong with this, you understand.

My understanding of economics is based on facts about system overhead I
learned while trying to program causal propagations. The idea of causal
insulation and preventing information loss that lies at the heart of complex
barter - I picked that up from a module I wrote in deep C++ over the course of
a month. I'm going to have different intuitions about economics, and you're
going to have different intuitions about AI, so we are inevitably going to
disagree if we're discussing something as factually unknown as
superintelligence and the future. It doesn't matter whether we both possess
invent-level intelligence in the field, because our specialties are different.
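
A toy of the kind of structure I have in mind - not the actual module, just
the flavor, with every name invented for this example - would look something
like this: changes travel only along declared dependency links (that's the
insulation), and each change drags its causal history along with it (that's
not losing the information).

    // Toy causal-propagation sketch (illustrative only; names invented here).
    // "Causal insulation": a change can reach only a node's declared dependents.
    // "No information loss": each update carries its causal history with it.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        double value = 0.0;
        std::vector<Node*> dependents;    // the only channel a change may travel
        std::vector<std::string> history; // provenance of every value received
    };

    // Push a change into 'src' and let it flow strictly along declared links.
    void propagate(Node& src, double newValue, const std::string& cause) {
        src.value = newValue;
        src.history.push_back(cause);
        for (Node* dep : src.dependents) {
            // The dependent gets the value *and* the record of why it changed,
            // so nothing about the cause is thrown away in transit.
            propagate(*dep, newValue * 0.5,          // arbitrary local rule
                      cause + " -> " + src.name);
        }
    }

    int main() {
        Node a{"a"}, b{"b"}, c{"c"}, insulated{"insulated"};
        a.dependents = {&b};
        b.dependents = {&c};
        // 'insulated' declares no link from 'a', so the shock cannot touch it.

        propagate(a, 8.0, "external shock");

        for (Node* n : {&a, &b, &c, &insulated}) {
            std::printf("%-10s value=%g causes=%zu\n",
                        n->name.c_str(), n->value, n->history.size());
        }
        return 0;
    }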

> >> The question is *how fast* a nanotech enabled civilization would turn the
> >> planet into a computer. You have to make an argument about *rates* of change,
> >> not about eventual consequences.
> >
> >If they can, if they have the will and the technology, why on Earth would they
> >go slowly just to obey some equation derived from agriculture?
>
> Economists know about far more than agriculture. And you really need
> a stronger argument than "I'll be fast because it's not agriculture."
>
> >I've tried to articulate why intelligence is power. It's your turn. What are
> >the limits? And don't tell me that the burden of proof is on me; it's just
> >your profession speaking. From my perspective, the burden of proof is on you
> >to prove that analogies hold between intelligence and superintelligence; the
> >default assumption, for me, is that no analogies hold - the null hypothesis.
>
> If no analogies hold, then you have no basis for saying anything about it.
> You can't say it will be fast, slow, purple, sour, or anything else.

You can have a lot of small analogies hold, to things like the laws of physics
and the processing of information in the brain, rather than great big
analogies to entire civilizations. At that point, you're working with rules
instead of correspondences - with simulations instead of analogies.

> To me "superintelligent" means "more intelligent", and we have lots of
> experience with relative intelligence. Humans get smarter as they live longer,
> and as they learn more in school. Average intelligence levels have been
> increasing dramatically over the last century. Humanity as a whole is smarter
> in our ability to produce things, and in scientific progress. Companies get
> smarter as they adapt to product niches and improve their products. AI
> programs get smarter as individual researchers work on them, and the field
> gets smarter with new research ideas. Biological creatures get smarter
> as they develop adaptive innovations, and as they become better able to
> adapt to changing environments.
>
> All this seems relevant to estimating what makes things get more intelligent,
> and what added intelligence brings.

The Big Bang was instantaneous. Supernovas are very bright. State-vector
reduction is sudden and discontinuous. A computer crashing loses all the
memory at once. The thing is, human life can't survive in any of these areas,
nor in a Singularity, so our intuitions don't deal with them. The Universe is
full of gigantic, tremendously powerful phenomena, but for some reason we
don't seem to get close to them. If the planet were on a collision course with
the Sun, I could see people saying: "We've seen ice melt, and the deserts are
full of sand, and when things get extremely hot water evaporates, but there's
no reason to suppose that the entire planet will vaporize. After all, the
Earth's core is pretty hot and nothing happens except for a few volcanoes."
Why doesn't our intuition say that things get heavier when they go faster?
The extreme cases are scattered all over the Universe! But they don't support
mortal life, so they don't appear in comforting analogies.
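
(If you want numbers: mass scales as 1/sqrt(1 - v^2/c^2), so the effect is
there at every speed, but at the speeds mortal life moves it's a correction of
a few parts in ten to the fifteenth. A quick check of the sizes involved, just
as a back-of-the-envelope sketch:)

    // Back-of-the-envelope: how big is relativistic mass increase, really?
    #include <cmath>
    #include <cstdio>

    int main() {
        const double c = 299792458.0;              // speed of light, m/s
        const double speeds[] = {30.0, 0.866 * c}; // ~highway speed, then 0.866c
        for (double v : speeds) {
            double gamma = 1.0 / std::sqrt(1.0 - (v * v) / (c * c));
            std::printf("v = %.3e m/s  ->  gamma = %.15f\n", v, gamma);
        }
        return 0;
    }
    // gamma - 1 is about 5e-15 at 30 m/s; gamma is about 2 at 0.866c.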

The phenomena I'm talking about would have destroyed life as we know it if
they had appeared at any point in the past. How, exactly, am I supposed to
find an analogy? It's like trying to find an analogy for Earth falling into
the Sun! No, economic production doesn't go to zero no matter how bad the
catastrophe, but that's all at the local scale - and besides, if economic
production had gone to zero, we wouldn't be discussing it. The same applies
to economic production going to infinity. Different laws apply at other
scales, whether it's microscopic interactions, velocities close to the speed
of light, or superintelligence.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

