SI: Singleton and programming

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Nov 20 1998 - 10:04:21 MST


Anders Sandberg wrote:
>
> "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
>
> > For what it's worth, my guess is that there will not be duplicated
> > calculations or module-based programming, at least if efficiency is being
> > maximized.
>
> You must factor robustness into efficiency. Having just one copy of
> each algorithm is a bad idea if there is any risk of it being damaged
> or unavailable (which can be a big problem for a distributed mind;
> "Darn! I need my low temperature manipulation skills, but I left them
> in the outer solar system!"). Another factor to think of is
> evolvability: is the system designed from scratch, or the result of a
> combination of many systems? You cannot just ignore legacy systems,
> and having a non-modular system makes change very hard.

*I* cannot ignore legacy systems; non-modular design makes changes hard for us
and for evolution. In both cases, the dependency trees have grown beyond what
a human mind can keep track of. Here's an interesting scenario: Envision a
Singularity sphere expanding at lightspeed; the outermost fringes, hardly past
the Singularity, will always be less advanced than the center.

Now envision a lightspeed-unlimited Singularity with the same problem, but in
software. The outermost frontiers are always at the limits of design, and the
PSE (Post-Singularity Entity) understands the frontier only well enough to
design it, not well enough to get rid of the legacy code. But old applications
can be redesigned ab initio with the vast excess of computing power (advanced
far beyond what was necessary to create them in the first place).

> You don't find any monoliths in nature.

Nor does one find Web browsers. I fail to see your point.

> > Two things to consider:
> > 1) There's a lot of duplicated processing in the human race. Is it really
> > necessary to have five billion copies of the walking algorithm?
>
> Yes, unless you want a communications glitch with the central
> server to make us all temporarily handicapped.

Why can't all five billion copies fail in the same way, then?
Or in other words, what makes you think that redundancy is the best way to
deal with the kind of flaws (if any) that exist in Jupiter brains? Perhaps
there will be redundancy, but without duplication - multiple, separate
algorithms. And even if a classical '90s "communications glitch" occurs,
there's no reason to believe one would need five billion copies rather than,
say, quintuple redundancy.
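
To make "redundancy without duplication" concrete in '90s terms, here is a
minimal sketch in which three *separately designed* routines compute the same
answer and a majority vote catches a fault in any one of them. (The routines
and names are invented for illustration; a PSE would presumably do something
far less crude.)

    import math

    def sqrt_newton(x, tol=1e-12):
        # First independent design: Newton-Raphson iteration.
        guess = x if x > 1 else 1.0
        while abs(guess * guess - x) > tol:
            guess = (guess + x / guess) / 2.0
        return guess

    def sqrt_bisect(x, tol=1e-12):
        # Second independent design: bisection on [0, max(1, x)].
        lo, hi = 0.0, max(1.0, x)
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if mid * mid < x else (lo, mid)
        return (lo + hi) / 2.0

    def voted_sqrt(x, tol=1e-9):
        # Redundancy through diversity: accept any answer corroborated
        # by a majority of the independently designed routines.
        answers = [sqrt_newton(x), sqrt_bisect(x), math.sqrt(x)]
        for a in answers:
            if sum(abs(a - b) <= tol for b in answers) >= 2:
                return a
        raise RuntimeError("no majority agreement")

A single corrupted routine is simply outvoted, whereas five billion identical
copies of one routine would all fail the same way.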

Given that the number of humans keeps changing, we are not likely to be
exactly at the ideal redundancy level right now.

> > 2) Ideal efficiency requires that there be only one Post-Singularity Entity,
> > among all the races of all the Universes.
>
> Sounds Tipleresque.

If you want to be really Tipleresque, you can hypothesize a few nano-replicant
or descriptor-pattern harvesters sent back in time to surreptitiously record
our minds at death, à la Spider Robinson, which actually might not take much
more effort than a cryonic revival. The key question is not what is possible
but what the PSE(s) will find worthwhile.

> But efficiency for what end?

Ya got me. "We never really do learn all the rules of anything important,
after all." (_One For The Morning Glory_, John Barnes).

> If the goal is not
> well-defined or requires complex information, top-down solutions like
> you propose tend to be inferior to bottom-up solutions, even if they
> involve a high amount of redundancy and diversity.

"That had been one of his earliest humiliations about the Beyond. He had
looked at the design diagram - dissections really - of skrodes. On the
outside, the thing was a mechanical device, with moving parts even. And the
text claimed that the whole thing would be made with the simplest of
factories, scarcely more than what existed in some places in the Slow Zone.
And yet the electronics was a seemingly random mass of components without any
trace of hierarchical design or modularity. It worked, and far more
efficiently than something designed by human-equivalent minds, but repair and
debugging - of the cyber component - was out of the question."
        - _A Fire Upon The Deep_, Vernor Vinge

(Logical flaw alert: One of the Powers would have noticed, however.)

Anyway, the point is that "top-down" and "bottom-up" represent two design
methods forced by two limitations on intelligence, two possible styles out of
a vast space, not two ends of a continuum. Evolution has unlimited local
optimization, but cannot perceive global patterns. Humans can consciously
design and improve architectures, but are hard-pressed to write even a few
lines of code, and for large projects simply cannot devote the attention
necessary to
optimize everything in assembly language. The ideal solution, of course,
avoids *all* pattern, top-down or bottom-up.

A more interesting question is what sort of sub-optimal solutions might be
created by the requirement not to spend more computing power optimizing an
algorithm than that algorithm will consume if unoptimized. Why spend a
million generations evolving a piece of code that only needs to run once?
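
The bookkeeping for that requirement is trivial; estimating the quantities is
the hard part. A toy sketch, with every number invented for illustration:

    def worth_optimizing(expected_runs, cost_unoptimized, cost_optimized,
                         cost_of_optimizing):
        # Optimize only if the cycles saved over the code's expected
        # lifetime exceed the cycles spent on the optimization itself.
        savings = expected_runs * (cost_unoptimized - cost_optimized)
        return savings > cost_of_optimizing

    # Run-once code versus a million-generation evolutionary search:
    print(worth_optimizing(expected_runs=1, cost_unoptimized=10**6,
                           cost_optimized=10**3,
                           cost_of_optimizing=10**12))  # False

    # The same search pays for itself on code in a hot loop:
    print(worth_optimizing(expected_runs=10**12, cost_unoptimized=10**6,
                           cost_optimized=10**3,
                           cost_of_optimizing=10**12))  # True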

Intelligence-conserving programs won't be "top-down" or "bottom-up"
sub-optimal, I can tell you that much. Top-down design focuses on
architecture and tends to be fragile and break in interesting ways; bottom-up
design focuses on a lot of highly-optimized modules, multiple redundancies,
and legacy code that can derive from thousands of major generations past.
Perhaps intelligence-conserving programs will focus around the innermost
loops; the advantages and disadvantages of this approach, I really can't guess.
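
The nearest present-day analogue I can point to is profile-guided effort:
measure where the cycles actually go, then concentrate intelligence there. A
minimal sketch using Python's standard profiler, with an invented workload:

    import cProfile
    import pstats

    def inner_loop(n):
        # The hot spot: nearly all the cycles are spent here.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def outer_logic():
        # Architectural scaffolding: cheap, rarely worth optimizing.
        return sum(inner_loop(100000) for _ in range(100))

    cProfile.run("outer_logic()", "profile.out")
    # Rank by time consumed; optimization effort follows the ranking.
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)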

With respect to the actual Singularity, rather than playing with ideas, I
should note that all of these speculations only became possible in the '80s,
and that they may rest on ideas that are merely the fads of this generation
rather than the underlying principles of reality. Our fascination with bigger
and better software may be as silly as the Victorian fascination with bigger
and better steam engines.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

