Anders Sandberg wrote:
>
> "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
>
> > For what it's worth, my guess is that there will not be duplicated
> > calculations or module-based programming, at least if efficiency is being
> > maximized.
>
> You must factor robustness into efficiency. Having just one copy of
> each algorithm is a bad idea if there is any risk of it being damaged
> or unavailable (which can be a big problem for a distributed mind;
> "Darn! I need my low temperature manipulation skills, but I left them
> in the outer solar system!"). Another factor to think of is
> evolvability: is the system designed from scratch, or the result of a
> combination of many systems? You cannot just ignore legacy systems,
> and having a non-modular system makes change very hard.
*I* cannot ignore legacy systems; non-modular design makes change hard for
both us and evolution. In both cases, the dependency trees have grown beyond
what a human mind can keep track of. Here's an interesting scenario: Envision
a Singularity sphere expanding at lightspeed; the outermost fringes, hardly
past the Singularity, will always be less advanced than the center. Now
envision a lightspeed-unlimited Singularity with the same problem, but in
software. The outermost frontiers are always at the limits of design: the PSE
understands them well enough to design them, but not well enough to get rid
of the legacy code. Old applications, though, can be redesigned ab initio
with the vast excess of computing power, which by then has advanced far
beyond what was needed to create them in the first place.
> You don't find any monoliths in nature.
Nor does one find Web browsers. I fail to see your point.
> > Two things to consider:
> >
> > 1) There's a lot of duplicated processing in the human race. Is it really
> > necessary to have five billion copies of the walking algorithm?
>
> Yes, unless you want a communications glitch with the central
> server to make us all temporarily handicapped.

Why can't all five billion copies fail in the same way, then?
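
To make the tradeoff concrete, here is a minimal Python sketch (a toy of my
own invention; the names and failure rates are assumptions, not anyone's
actual proposal). Replication buys availability against *independent*
failures; it buys nothing against a defect shared by all five billion copies,
which is the common-mode failure asked about above:

    # Toy model: one canonical copy vs. local copies with a fallback.
    # All numbers below are illustrative assumptions, not measurements.

    def walk_central(server_up: bool) -> str:
        """Single canonical copy: unavailable whenever the server/link is."""
        if not server_up:
            raise ConnectionError("central walking server unreachable")
        return "walking"

    def walk_local(local_copy_ok: bool, server_up: bool) -> str:
        """Local copy first; the central server is only a rare fallback."""
        if local_copy_ok:
            return "walking"               # no network round-trip at all
        return walk_central(server_up)     # fallback path

    # Independent-failure arithmetic (assumed rates):
    #   P(server or link down) = 0.01
    #   P(local copy damaged)  = 1e-9
    # Central-only availability:        1 - 0.01        = 0.99
    # Local-with-fallback availability: 1 - 0.01 * 1e-9 ~ 0.99999999999
    # But if the *algorithm itself* is wrong, every copy fails identically.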
> > 2) Ideal efficiency requires that there be only one Post-Singularity Entity,
> > among all the races of all the Universes.
>
> Sounds Tipleresque.
If you want to be really Tipleresque, you can hypothesize a few nano-replicant or descriptor-pattern harvesters sent back in time to surreptitiously record our minds at death, a la Spider Robinson; that actually might not take much more effort than a cryonic revival. The key question is not what is possible but what the PSE(s) will find worthwhile.
> But efficiency for what end?
Ya got me. "We never really do learn all the rules of anything important, after all." (_One For The Morning Glory_, John Barnes).
> If the goal is not
> well-defined, or requires complex information, top-down solutions like
> those you propose tend to be inferior to bottom-up solutions, even if
> they involve a high amount of redundancy and diversity.
"That had been one of his earliest humiliations about the Beyond. He had looked at the design diagram - dissections really - of skrodes. On the outside, the thing was a mechanical device, with moving parts even. And the text claimed that the whole thing would be made with the simplest of factories, scarcely more than what existed in some places in the Slow Zone. And yet the electronics was a seemingly random mass of components without any trace of hierarchical design or modularity. It worked, and far more efficiently than something designed by human-equivalent minds, but repair and debugging - of the cyber component - was out of the question."
(Logical flaw alert: One of the Powers would have noticed, however.)
Anyway, the point is that "top-down" and "bottom-up" represent two design methods forced by two different limitations on intelligence, two possible styles out of a vast space, not two ends of a continuum. Evolution has unlimited local optimization, but cannot perceive global patterns. Humans can consciously design and improve architectures, but are hard-pressed to hand-optimize even a few lines of code, and for large projects simply cannot devote the attention necessary to optimize everything in assembly language. The ideal solution, of course, avoids *all* pattern, top-down or bottom-up.
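
To illustrate the evolution half of that claim, a one-screen Python sketch
(the landscape is a toy I made up for the purpose): a pure local optimizer,
which only ever compares neighboring points, parks itself on a small nearby
peak and never perceives the far higher peak that a global view would reveal
at a glance:

    def hill_climb(f, x, step=1, iters=1000):
        """Pure local optimization: move to a neighbor only if f improves."""
        for _ in range(iters):
            for nxt in (x - step, x + step):
                if f(nxt) > f(x):
                    x = nxt
                    break
        return x

    def f(x):
        """Deceptive landscape: small peak at x=0, global peak at x=100."""
        return max(10 - abs(x), 100 - abs(x - 100))

    print(hill_climb(f, 0))   # prints 0: stuck at f=10, blind to f(100)=100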
A more interesting question is what sort of sub-optimal solutions might be created by the requirement not to spend more computing power optimizing an algorithm than that algorithm will consume if unoptimized. Why spend a million generations evolving a piece of code that only needs to run once?
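
The constraint itself is just amortization arithmetic. A sketch (units are
whatever the PSE counts compute in; all numbers invented):

    def worth_optimizing(optimize_cost, expected_runs, saving_per_run):
        """Optimize only if the total expected saving repays the effort."""
        return expected_runs * saving_per_run > optimize_cost

    # A million generations spent evolving run-once code never pays off:
    print(worth_optimizing(optimize_cost=1e6,
                           expected_runs=1,
                           saving_per_run=0.5))   # False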
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I
think I know.