Challenge of Design Complexity

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Dec 18 1998 - 03:17:34 MST


Billy Brown writes:
 
> Simply put, the more advanced a technology becomes, the more work it takes
> to improve it. As technology advances there is a general tendency for

Beg to differ. Unless you believe in a Prime Cause, a biological
organism, the ultimate in complexity and robustness compared to our
trivial handiwork, is not the product of rational design. As for
work (computation), that is cheap enough with MNT.

Hijacking that principle for technical designs appears perfectly
possible; in fact, this is exactly what is happening right now, although
the results have not yet spilled over into industry.

> everything to become more complex, which means more work for the engineers.

Of course the artifexes (the human engineers), unless cheaply cloned,
are the bottleneck in the equation. The point is, you don't need
them. YMMV.

> Sometimes you can counteract this trend with better information technology
> (the Internet is a good example), but not always (look at the amount of
> human effort needed to build successive CPU designs - it's a pretty steep
> upward trend).

This trend leads exactly nowhere. The bulk of modern bloatware CPUs is
the result of silicon compilers, anyway. People can do lots better:
look at Chuck Moore's latest CPU, the i21. There is no way an
off-the-shelf silicon compiler system could detect a novel effect (a
rise in temperature of certain areas under certain timing conditions,
and a resulting change in transistor properties), let alone suggest a
simple workaround for it (doubling the area of the hot spots). You
could do a lot of funky things, though, if you applied a GA to a really
fast simulator engine (say, a throughput of about 1 gps
(generations/s) on a population of a few thousand individuals).
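
To make that concrete, a minimal GA loop might look like the Python
sketch below. Everything in it is illustrative: simulate() is a toy
stand-in that merely counts set bits, where the real thing would be
the fast simulator engine scoring a candidate design.

  import random

  POP_SIZE = 2000        # "a few thousand individuals"
  GENOME_LEN = 64        # toy design parameters, one bit each
  MUTATION_RATE = 0.01

  def simulate(genome):
      # Stand-in for the fast simulator engine: a toy fitness that
      # just counts set bits, so the sketch actually runs.
      return sum(genome)

  def mutate(genome):
      return [bit ^ 1 if random.random() < MUTATION_RATE else bit
              for bit in genome]

  def crossover(a, b):
      cut = random.randrange(len(a))
      return a[:cut] + b[cut:]

  pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
         for _ in range(POP_SIZE)]

  for generation in range(100):      # ~100 s at 1 generation/s
      ranked = sorted(pop, key=simulate, reverse=True)
      elite = ranked[:POP_SIZE // 10]          # keep the top 10%
      pop = [mutate(crossover(random.choice(elite),
                              random.choice(elite)))
             for _ in range(POP_SIZE)]

  print(max(simulate(g) for g in pop))

The loop itself is trivial; all the leverage is in how fast
simulate() runs.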

> IMO, this principle has several important implications:
>
> First, it means that advanced nanotechnology is not possible without major
> breakthroughs in automated engineering and/or intelligence enhancement. Why
> not? Well, diamondoid parts might be simple repeating structures, but

Well, I don't see much simplicity or repetition in, say, Drexler's
fine-motion controller, and that's an early design, made by a
human. GA-generated stuff will most likely be fractal polymer
shrubbery, difficult for even a trained observer to make sense of,
even when properly visualized. I'm speculating here, of course, but we
should know soon enough how well evolutionary methods work in
molecular design.

> something like smart matter or utility fog requires that you decide what to
> do with every single atom (that's about 10^22 design decisions per pound of
> object!). It isn't practical to design that with human minds.

I wouldn't be so rash when it comes to determining the limits of what
a human can do (some of the recent Drexler & Merkle stuff is
beautifully minimalistic), but then, you don't need these minds to
design stuff.

> Second, a self-enhancing AI can't expect to optimize its way into an SI
> unless it has SI-level hardware to run on. It might, if it is very lucky,

Hardware such as that in http://209.220.44.33/mpf/Nano97/paper.html,
you mean? Narrowing down the design space a lot further does not
appear exactly impossible. If the omega hardware is not the proper
hardware for the SI to run on, what else is?

> but it is more likely to proceed in a series of sharp upward jumps separated
> by lulls while it waits for faster hardware. You probably need nanotech to
> build the computers to run an SI, and you can't design the nanotech unless

You certainly need nanotech to build such computers. No, you don't
need to be an SI to design with nanotech. I could initiate a search in
rule space and cast the optimal or near-optimal rule into molecular
circuitry with comparatively little effort (well, make that a future
me in a few decades). If your computer is a 3d array of identical,
simple cells, all you have to worry about are details like power,
cooling, and I/O. Unless you're a pioneer with a weakness for
Herculean deeds, you can reuse parts of other designs.

So, yes, it appears doable.
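
For flavor, here is a toy rule-space search over the 256 elementary
1d CA rules, again in Python. The scoring criterion is a made-up
proxy (long-run density of live cells near 1/2); a real search would
simulate the 3d cell array against a meaningful measure of
computational usefulness.

  import random

  WIDTH, STEPS = 256, 256

  def step(cells, rule):
      # Elementary CA update; rule is an 8-bit lookup table indexed
      # by the (left, self, right) neighborhood.
      n = len(cells)
      return [(rule >> (cells[(i - 1) % n] * 4 +
                        cells[i] * 2 +
                        cells[(i + 1) % n])) & 1
              for i in range(n)]

  def score(rule):
      # Crude, made-up proxy for "interesting" dynamics: how close
      # the long-run density of live cells stays to 1/2.
      cells = [random.randint(0, 1) for _ in range(WIDTH)]
      for _ in range(STEPS):
          cells = step(cells, rule)
      return -abs(sum(cells) / WIDTH - 0.5)

  rules = random.sample(range(256), 64)    # sample the rule space
  print("best rule:", max(rules, key=score))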

> you are pretty close to being an SI anyway.
>
> Because of these factors, a Singularity is likely to have a slow takeoff.
> You may have sudden jumps (when the first sentient AI goes online, or the
> first general-purpose assembler goes into operation), but each improvement
> simply leads to a new plateau while you wait for the rest of your tech base
> to catch up. The open-ended, geometric nature of the critical enabling

It is good to have a clear understanding of how things will pan out on
the global scale; I only wish I could share your confidence.

> technologies (computers, telecommunications, and eventually AI and
> intelligence enhancement in general) means that the overall rate of progress
> will continue to increase, but a sudden discontinuity is unlikely.

Unless one of the experiments explodes right in your face in a
hard-edge Singularity (a Blight DIY kit), you do not perceive much
when actually traversing it. A lot of people will not choose to do it,
and to them things will very quickly appear incomprehensible. Their
prospects of surviving even in the short run appear uncertain, a fact
which will be unknown to many, not believed by the bulk of those who
do know of it, and simply ignored by a small fraction. The camel, and
the needle's eye.

> Something that looks like a Singularity from a human perspective is still
> quite likely, but to the people who make it happen it will look like just
> another day of steady progress.
 
Possible, but not all will make it happen. A lot will be left
behind. In a hard-edge event, all will be left behind, save perhaps a
few personality fragments stripped of identity. As you can tell, I'm
not too happy about this possibility.

> So what do you guys think?

'gene


