From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Jan 26 2002 - 01:09:49 MST
On Fri, 25 Jan 2002, Dan Clemmensen wrote:
> Yes, you should be very careful with that prediction. it was arbitrary:
> In 1996 I predicted that the singularity would happen within ten years.
Arbitrary! -- Arbitrary! Lord, I can't believe an esteemed extropian like
Dan Clemmensen (whose google quotient may exceed my own) made an arbitrary
prediction about when the singularity would occur. It's such a shame that
I'm halfway around the world from the "wailing wall".
> However, if you accept the concept that the singularity is driven by
> exponential change, then you must realize that the actual data will not
> become obvious to most people until the singularity is nearly upon us.
I wonder whether that is really the case. The curves for CPU speed,
data storage densities, and communications bandwidth are following
fairly predictable, though different, paths. I could cite DNA sequences and
protein crystal structures in the GenBank and PDB databases as additional
examples. If there are discontinuities, they appear to be in the earlier
parts of the curves rather than the later parts. For example, there
is going to be a discontinuity in the "proteomics" information curve.
Up until 2000, most of the protein-protein interactions were determined
by relatively primitive lab experiments. From 2000 to 2002, Cellzome
(www.cellzome.de) built an engine to apply robotic MassSpec analysis
to the problem; they now have 1/3 of the yeast proteome done.
Now that proteomics is on a curve, though, I don't really expect to see
any huge jumps in the development rate.
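
(To show the sort of "predictable path" check I have in mind, here is a
quick Python sketch -- the data points are placeholders I made up for
illustration, *not* actual GenBank/PDB numbers. Fit log(metric) against
year and read off the doubling time; a genuine discontinuity would show
up as points falling well off the fitted line.)

import math

# Illustrative placeholders only -- not actual GenBank/PDB figures.
years  = [1990, 1992, 1994, 1996, 1998, 2000, 2002]
metric = [5e7, 1.2e8, 3e8, 8e8, 2e9, 8e9, 2.5e10]

logs  = [math.log(m) for m in metric]
n     = len(years)
mx    = sum(years) / float(n)
my    = sum(logs) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, logs)) \
        / sum((x - mx) ** 2 for x in years)
icept = my - slope * mx

print("doubling time: %.1f years" % (math.log(2) / slope))
print("projected 2006 value: %.2g" % math.exp(icept + slope * 2006))
# A real discontinuity would show up as points falling well off this line.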
I'll note from my own experience that, putting myself back in 1996, we
are slightly ahead of where I would have expected us to be from
a genomics/biological knowledge standpoint.
So, the problem isn't that people aware of these concepts will
not see it coming; the problem is that "most people" don't
understand the concept of the singularity and don't know
what signs to look for.
> If you prefer the "phase change" model, then the situation is even
> worse. In this model, we have built a substrate that provides the
> resources needed to create an SI, needing only some single breakthrough
> to bring it into existence. With this model, the SI can be brought into
> existence without warning by an individual or a small group.
If it's human-level you want, Blue Gene should be functional before 2006.
*But* even if you have access to such hardware, it's a roomful of
hardware. You have to make a strong case that (a) significant
acceleration is possible using only software modifications [Ray and Hans
seem to suggest we may be limited to 1-2 orders of magnitude with clever
algorithms]; or (b) there is a rapid advancement of matter as software.
If you don't get (b), the SI hits a ceiling that can't be breached
without humans supplying it with more advanced hardware. That
presupposes that we would have the technology base at that time
to manufacture such hardware.
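
To put rough numbers on (a): here is a back-of-the-envelope sketch using
Blue Gene's announced ~1 petaflop target and Moravec-style (~10^14 ops/sec)
and Kurzweil-style (~10^16 ops/sec) brain estimates -- all three figures
are approximations, not measurements.

BLUE_GENE_OPS = 1e15                 # announced petaflop target, ops/sec (rough)
BRAIN_LOW, BRAIN_HIGH = 1e14, 1e16   # Moravec- and Kurzweil-style estimates

for speedup in (1, 10, 100):         # (a): clever algorithms buy 0, 1 or 2 orders
    effective = BLUE_GENE_OPS * speedup
    print("%4dx software speedup -> %g to %g human-brain equivalents"
          % (speedup, effective / BRAIN_HIGH, effective / BRAIN_LOW))
# Past ~100x the SI is stuck at the same ceiling until someone (humans, or
# (b) matter-as-software) supplies more hardware.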
So there could be a number of bumps in the road.
The only smooth path I could envision for the singularity would
be an underground breakout that takes advantage of the WWW.
You would have to co-opt a significant amount of
underutilized resources to be able to manage an exponential
growth path. Ultimately you still face the requirement for
matter compilers. I don't think this will be feasible until
you have many high-bandwidth connections to a large fraction
of the installed computronium base -- the intelligence constraints
on low-bandwidth connections between fractional human brain
equivalents seem to be a strong barrier.
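
To hang a rough number on that barrier (the fiber count below is the
commonly quoted order of magnitude for the corpus callosum; the per-fiber
data rate and the connection speed are guesses on my part):

CALLOSUM_FIBERS = 2e8    # order-of-magnitude fiber count for the corpus callosum
BITS_PER_FIBER  = 10     # useful bits/sec per fiber -- a guess on my part
BROADBAND_BPS   = 1e6    # ~1 Mbit/sec DSL/cable connection, circa 2002

intra_brain = CALLOSUM_FIBERS * BITS_PER_FIBER   # ~2e9 bits/sec
print("hemisphere-to-hemisphere bandwidth: ~%.0e bits/sec" % intra_brain)
print("typical net connection:             ~%.0e bits/sec" % BROADBAND_BPS)
print("shortfall: roughly %.0fx" % (intra_brain / BROADBAND_BPS))
# Fractional brain equivalents scattered across the net talk to each other
# thousands of times more slowly than the two halves of a single brain do.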
If humans realize what is going on and decide they don't want
it to happen, you run the risk that they will disconnect their
computers from the net. So you have the additional handicap
that the SI may only be able to sneak up on you if it operates
in a severely constrained stealth mode.
> As the substrate improves, The size of the needed breakthrough decreases.
This certainly is true. By 2010-2015, when human mind equivalent
computational capacity becomes available to small groups, the
probability of self-evolving AIs goes up significantly.
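
The arithmetic behind that window, sketched with rough assumptions --
~10^9 ops/sec per $1000 of 2002 hardware, price/performance doubling
every 18 months, a small-group budget of ~$1M, and the same 10^14-10^16
ops/sec brain estimates as above:

import math

OPS_PER_K_2002 = 1e9    # ops/sec per $1000 of hardware in 2002 (rough)
DOUBLING_YEARS = 1.5    # price/performance doubling time (rough)
BUDGET_K       = 1000   # a "small group" budget of roughly $1 million

for brain_ops in (1e14, 1e16):
    doublings = math.log(brain_ops / BUDGET_K / OPS_PER_K_2002, 2)
    print("%.0e ops/sec brain estimate -> roughly %.0f"
          % (brain_ops, 2002 + doublings * DOUBLING_YEARS))
# The low (Moravec-style) estimate lands right around 2012; the high
# (Kurzweil-style) estimate pushes things out toward the early 2020s.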
> I feel that the substrate is already very rich and is rapidly getting
> richer, as measured in available computing capacity.
*If* people allow the access. The opportunities for a Trojan
Horse are there. The possibility of someone sneaking self-evolving
AI code into SETI@Home, Folding@Home, etc. is something we should be
concerned about. There may be an argument here for ExI to become
involved in engaging Distributed Computing developers in such
discussions. While the Singularity Institute may have laudable
goals, Al Qaeda certainly does not.
It gives you pause -- 1 billion Muslims around the world,
10% of them devoting their computers to a DC project to
evolve not a "Friendly AI" but an AI dedicated to advancing
a radical Muslim hegemony.
That, IMO, is one of the problems with pushing AI technology.
Unlike the situation with most humans, there may be no "built-in"
human empathic perspectives in AIs. I'm deeply suspicious
of any arguments that only beneficent AIs may be produced.
If malevolent AIs are as easy as friendly AIs, then we may
have some serious problems.
> Conclusion: 2006 is a guess, but it is not completely ridiculous.
No, certainly not. But getting a smooth fit to the
curve seems to require either a late start (lots of underutilized
resources available) or robust technologies that allow compiling
matter as software.
Personally, I'd say 2016 is a better date than 2006.
Robert