From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 11 1998 - 15:09:46 MDT
Robin Hanson wrote:
>
> The basic problem with singularity discussions is that lots of people
> see big fast change coming, but few seem to agree on what that is or
> why they think that. Discussions quickly fragment into an enumeration
> of possibilities, and no one view is subject to enough critical analysis
> to really make progress.
>
> I've tried to deal with this by focusing everyone's attention on the
> opinions of the one person most associated with the word "singularity."
> But success has been limited, as many prefer to talk about their own
> concept of, and analysis in support of, "singularity".
I don't see the problem. For debate to occur, Vinge's statements have to be
fleshed out with a specific model using challengeable assumptions. As it
stands, Vinge's paradigm - although correct - is too abstract to be analyzed
for correctness or flaws. The paradigm of a seed AI cannot be challenged or
supported without a specific AI architecture. Paradigms are very powerful and
basic things, make no mistake; when I read Vinge's paradigm, it transformed my
life - but that was my immediate reaction, and I don't think that immediate
reactions can be modified by debate. A heuristic on a very high level can be
intuitively appealing or intuitively unappealing to our trained minds, but
fruitfully debating the intuitive reaction requires that the heuristic be
concretized to a specific context.
We can debate the model of the Singularity shown in "True Names", or "Blood
Music", or "Mother of Storms", or "A Fire Upon The Deep", or "Mindkiller", or
"The Gentle Seduction", or "Marooned In Realtime". We would have severe
difficulty debating the afterword to "Bookworm, Run!" - not without
examples, anyway. Maybe I'll post a "lessons learned" discussing how the
concept of a Singularity applies to a seed AI and how the heuristic worked in
that instance.
> In the above I was responding to Eliezer Yudkowsky's analysis, which
> is based on his concept of a few big wins. To respond to your question,
> I'd have to hear your analysis of why we might see an astronomical
> increase in the rate of insights. Right now, though, I'd really rather
> draw folks' attention to Vinge's concept and analysis. So I haven't
> responded to Nick Bostrom's nanotech/upload analysis, since Vinge
> explicitly disavows it. And I guess I should stop responding to Eliezer.
> (Be happy to discuss your other singularity concepts in a few weeks.)
(Why? He didn't disavow my analysis. Am I being too specific?)
In my opinion, the debate about Vinge's Singularity all comes down to either
the dynamics of a specific AI, which is an argument about real things, or
battling analogies, which is basically unresolvable. Unknowability can be a
philosophical argument, or it can be a cognitive fact. Speed can be projected
from graphs, or the trajectory of a designed mind can be debated.
Rational debate about the Singularity is ultimately an argument about human
technological progress (Moore's Law, and how long it can last), superhuman
technological progress (nanotechnology and how it works), and seed AI
trajectories (given the two computing-power functions). If you think that
human intelligence enhancement will mature before the computational
Singularity, you can add neurotechnology and cognitive science to the mix. If
you think that human society will disrupt the technological trajectories
(especially if society is unbalanced by technology), you can factor that in.
In sum, the knowable factors are:
1. Progress of computing technology.
2. Dynamics of seed AI trajectories.
3. Transhuman technology (results of enhancement).
4. Neurotechnology of enhancement.
5. Cognitive science (results of enhancement).
6. Dynamics of human society's trajectories.
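As a purely illustrative aside on item 1: the kind of extrapolation meant by "projecting speed from graphs" can be made concrete in a few lines. The 1998 baseline and the doubling period below are assumptions chosen for illustration, not measured values, and nothing in the argument depends on them.

    # Toy Moore's-Law extrapolation of raw computing power.
    # Baseline and doubling period are illustrative assumptions only.
    BASE_YEAR = 1998
    BASE_OPS_PER_SEC = 1e9     # assumed: rough order of a 1998 desktop CPU
    DOUBLING_YEARS = 1.5       # assumed: classic doubling period

    def projected_ops(year):
        """Project computing power by naive exponential extrapolation."""
        return BASE_OPS_PER_SEC * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

    for year in (2008, 2018, 2028):
        print("%d: ~%.2e ops/sec" % (year, projected_ops(year)))

Whether such a curve can be extended indefinitely is precisely what the debate over item 1 is about; the sketch only shows what the extrapolation itself looks like.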
If there's any other facet of the path to Singularity - the debate over
whether a Singularity will occur - that depends on knowable facts, I can't
think of it offhand. There are some other things that control Life After The
Singularity, which I speculated about in "Zone Barriers" - but I don't
ultimately care, nobody can find out short of doing it, none of the
assumptions are subject to rational debate, and what _you_ (Hanson) originally
asked is whether the concept of a Singularity was flawed.
If there's any better way to resolve that than saying: "Lookit this
collection of reasonable assumptions, it gives you a Singularity", I don't
know what it is.
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.