From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 22 2002 - 11:51:10 MDT
Ben Goertzel wrote:
>
> Eliezer, while we're all free to our own differing intuitions, it seems
> wrong to me to feel "dead certain" about something we've never seen
> before, that depends on technologies we don't yet substantially understand.
If I can figure out how to solve a problem myself, I usually feel
comfortable calling it "dead certain" that a superintelligence can solve it.
Motivations might be different, although in this case it is very difficult
to see why they would be; and if I can see at least one easy way to go from
superintelligence to nanotechnology in a matter of days or weeks, there are
probably others.
> I think the period of transition from human-level AI to superhuman-level AI
> will be a matter of months to years, not decades.
I suppose I could see a month, but anything longer than that is pretty hard
to imagine unless the human-level AI is operating at a subjective slowdown
of hundreds to one relative to human thought.
> Perhaps for a while, a superhuman AI among humans will be like a human among
> dogs. A human among dogs *does* have a different and deeper understanding,
> and can do things no dog can do, including many things revolutionizing dogly
> existence ... but still, a human among dogs is not a god. How long might a
> phase like this last? Hard to say.
If the human is thinking thousands of times faster than the dogs, it
probably won't last very long from the dogs' perspective, however long it
might seem to the human. The AI is not coming into existence in a vacuum;
the human world is supersaturated with tools that can be used to construct
rapid infrastructure.
> Moravec-and-Kurzweil-style curve-plotting is interesting and important, but
> nevertheless, the problem of induction remains.... All sorts of things
> could happen. For instance, the superhuman AI's we build may continue to
> progress exponentially, but in directions other than those we foresee now.
Even if your goal is to progress exponentially in enlightened spiritual
directions, exponential physical progress is still a good way to get the
computing power to support that enlightened spiritual stuff and bring others
in on the fun.
> In short, as I keep repeating, one of the unknown things about our coming
> plunge into the Great Unknown is how rapidly the plunge will occur, and the
> trajectory that the plunge will follow. Dead certainty on these points
> seems inappropriate to me.
I often encounter people who are amazed at my dead certainty that humanity
evolved rather than being created. Some questions do admit justified
certainty; generic arguments against "dead certainty" are not relevant.
Ben, we're "uncertain" relative to different priors. I would guess that for
you, the sentences "The Singularity takes place slowly" and "The Singularity
takes place quickly" are sentences of equivalent complexity and your
uncertainty manifests as a perceived balance between them. For me, there's
the positive statement "I impose my human expectations on the Singularity
and expect it to run on a human timescale and be limited by humanish things
like factories and venture capital" and "I don't know what the heck the
Singularity will use for manufacturing and computronium, but it's not going
to be insanely slow like our own, special, human way of doing things."
Humanity is a special case and our "uncertainty" has to be expressed
relative to the knowledge that humanity is a special case, not "uncertainty"
as a spherical volume centered around our own little island in the cosmos.
I feel comfortable saying that the Singularity will be faster than our own
dead-slow existence because the Singularity is not going to be
anthropomorphic enough to run on our own private timescale.
I am similarly dead certain that we will never see an AI sincerely giving
Agent Smith's speech in _The Matrix_ - "Humans are obsolete; we'll kill them
and steal their mates and territory".
That I am uncertain about many things does not change the fact that many
constructions that seem quite plausible to some people are easily revealed
as dead wrong. If you like, don't think of me as being "dead certain" that
the Singularity will be fast, just "dead certain" of the wrongness of the
common reasons offered for why the Singularity would happen to run on a
conveniently human timescale.
Uncertainty is a rational quantity and hesitancy is a social quantity which
is sometimes but not always correlated with uncertainty. In this case,
uncertainty is appropriate but hesitancy is not. If I am uncertain about
the essential strangeness of the Singularity, it doesn't mean that I need to
be the least bit hesitant about shooting down the various human-generated
(hence, humanly comprehensible) fallacies commonly offered because "Well,
we're all uncertain and nobody knows better than anyone else." That's just
faking the stereotypical appearance of rationality as it is interpreted by
nonrationalists in a social context.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence