From: Anders Sandberg (asa@nada.kth.se)
Date: Sat Nov 21 1998 - 10:56:57 MST
"Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
> *I* cannot ignore legacy systems; non-modular design makes changes hard for us
> and evolution. In both cases, the dependency trees have gone beyond where a
> human mind could keep track of it.
Do you really think posthumans can ignore legacy systems and design
everything from scratch whenever they need it? That sounds very
uneconomical. And for any mind there is a limit to how many
interdependencies can be managed, and since highly interconnected
systems tend to have a combinatorial explosion of dependencies it is
not that unlikely that even posthumans will have trouble managing
unstructured code.
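(Just to make the combinatorial point concrete, here is a
back-of-the-envelope sketch in Java, with purely illustrative numbers
and names of my own: in a system where any component may depend on any
other, the potential pairwise dependencies grow as n(n-1)/2.)

    // Illustrative only: count potential pairwise dependencies in a
    // fully interconnected system of n components.
    public class DependencyCount {
        public static void main(String[] args) {
            int[] sizes = {10, 100, 1000, 1000000};
            for (int n : sizes) {
                long pairs = (long) n * (n - 1) / 2;
                System.out.println(n + " components -> "
                                   + pairs + " potential dependencies");
            }
        }
    }

Quadratic growth means doubling the number of components roughly
quadruples the bookkeeping, whatever the mind doing the bookkeeping.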
> Here's an interesting scenario: Envision
> a Singularity sphere expanding at lightspeed; the outermost fringes will
> always be less advanced, hardly just past Singularity, than the center.
>
> Now envision a lightspeed-unlimited Singularity, with the same problem, but in
> software. The outermost frontiers are always at the limits of design, and the
> PSE can only understand it well enough to design it, but not well enough to
> get rid of the legacy code. But old applications can be redesigned ab initio
> with the vast excess of computing power (advanced beyond what was necessary to
> create them in the first place).
Where does the vast excess of computing power come from? Note that it
seems to be used up for all sorts of things, so unless the entities
involved in the Singularity are extremely uniform there will be a huge
demand for various uses; ab initio design will be just one of them, and
likely less economically interesting than (say) storing descendants or
useful programs (not to mention images of scantily clad Jupiter
brains).
> > You don't find any monoliths in nature.
>
> Nor does one find Web browsers. I fail to see your point.
Monolithic systems, where everything needs to be in just the right
place, with no redundancies and a central design, do not occur in
nature. Organisms are actually modular (even if they also contain a
tight web of evolved interconnections and tricks between the modules),
distributed and often highly redundant. Monoliths seem to be too
brittle and expensive to maintain to function well in a changing,
imperfect world where computing resources are in demand.
> > > 1) There's a lot of duplicated processing in the human race. Is it really
> > > necessary to have five billion copies of the walking algorithm?
> >
> > Yes, unless you want that a communications glitch with the central
> > server makes us all temporarily handicapped.
>
> Why can't all five billion copies fail in the same way, then?
Because they are independent systems, not a single master program run
on the ISO locomotion server.
> Or in other words, what makes you think that redundancy is the best way to
> deal with the kind of flaws (if any) that exist in Jupiter brains?
I am not saying it is the best way of dealing with the flaws of JBs;
I'm saying that redundancy makes a system more robust against local
damage.
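(A minimal sketch of that claim, again in Java and with assumed
numbers: if the copies really do fail independently, which is exactly
what the "fail in the same way" question above is probing, then the
chance of losing all of them at once drops exponentially with the
number of copies, while a single central server fails with its full
probability.)

    // Illustrative only: probability that all n copies fail at once,
    // assuming each copy fails independently with probability p.
    public class RedundancySketch {
        public static void main(String[] args) {
            double p = 0.01;           // assumed per-copy failure probability
            int[] copies = {1, 2, 5, 10};
            for (int n : copies) {
                System.out.println(n + " independent copies -> P(all fail) = "
                                   + Math.pow(p, n));
            }
        }
    }

Of course, the benefit rests entirely on the independence assumption;
correlated failures (a shared bug, a shared server) bring you right
back to p.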
> Perhaps
> there will be redundancy, but without duplication - multiple, separate
> algorithms.
Not unlikely. Isn't this why we want to have individuals?
> Given that the number of humans keeps changing, we are not likely to be
> exactly at the ideal redundancy level right now.
Ideal for what purpose?
> > But efficiency for what end?
>
> Ya got me. "We never really do learn all the rules of anything important,
> after all." (_One For The Morning Glory_, John Barnes).
As I see it, there is a built-in bias in reality towards efficiency of
survival ("the survivors survive" - and pass on their
information). But if survival in a post-Singularity world becomes a
question of software and pattern survival, we can expect strategies at
least as diverse as today's memes to develop.
> > If the goal is not
> > well-defined or requires complex information top-down solutions like
> > you propose tend to be inferior to bottom-up solutions, even if they
> > involve a high amount of redundancy and diversity.
>
> "That had been one of his earliest humiliations about the Beyond. He had
> looked at the design diagram - dissections really - of skrodes. On the
> outside, the thing was a mechanical device, with moving parts even. And the
> text claimed that the whole thing would be made with the simplest of
> factories, scarcely more than what existed in some places in the Slow Zone.
> And yet the electronics was a seemingly random mass of components without any
> trace of hierarchical design or modularity. It worked, and far more
> efficiently than something designed by human-equivalent minds, but repair and
> debugging - of the cyber component - was out of the question."
> - _A Fire Upon The Deep_, Vernor Vinge
>
> (Logical flaw alert: One of the Powers would have noticed, however.)
>
> Anyway, the point is that "top-down" and "bottom-up" represent two design
> methods forced by two limitations on intelligence, two possible styles out of
> a vast space, not two ends of a continuum. Evolution has unlimited local
> optimization, but cannot perceive global patterns. Humans can consciously
> design and improve architectures, but are hard-pressed to write a few lines of
> code, and for large projects simply cannot devote the attention necessary to
> optimize everything in assembly language. The ideal solution, of course,
> avoids *all* pattern, top-down or bottom-up.
Ideal for what? Data compression, perhaps, but not for maintaining and
extending the system.
(Good Vinge quotation, by the way; I think I'll use it in my Java
course to explain why object-oriented programming is a good idea.)
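(For the course, the sketch I have in mind is something like the
following, hypothetical names and all: the point of an interface is
exactly that the part behind it can be debugged or replaced without
touching the rest, which is what the skrode's random mass of components
forbids.)

    // Hypothetical course example: modules hidden behind an interface
    // can be repaired or redesigned ab initio without disturbing the rest.
    interface Locomotion {
        void step();
    }

    class LegacyWalker implements Locomotion {
        public void step() { System.out.println("old, ugly, but isolated"); }
    }

    class ImprovedWalker implements Locomotion {
        public void step() { System.out.println("redesigned, drop-in"); }
    }

    public class Robot {
        private final Locomotion legs;
        Robot(Locomotion legs) { this.legs = legs; }
        void walk() { legs.step(); }

        public static void main(String[] args) {
            new Robot(new LegacyWalker()).walk();  // swap ImprovedWalker in later
        }
    }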
> Intelligence-conserving programs won't be "top-down" or "bottom-up"
> sub-optimal, I can tell you that much. Top-down design focuses on
> architecture and tends to be fragile and break in interesting ways; bottom-up
> design focuses on a lot of highly-optimized modules, multiple redundancies,
> and legacy code that can derive from thousands of major generations past.
> Perhaps intelligence-conserving programs will focus around the innermost
> loops; the advantages and disadvantages of this approach, I really can't guess.
Most likely, optimal programming methods are problem-dependent; it
would surprise me greatly if there were a single general problem-solving
method that converted an arbitrary specification "this is what I want"
into an optimal design.
> With respect to the actual Singularity, rather than playing with ideas, I
> should note that all of these speculations just became possible in the '80s,
> and that it may use ideas that are only the fads of this generation rather
> than the underlying principles of reality. Our fascination with bigger and
> better software may be as silly as the Victorian fascination with bigger and
> better steam engines.
Good point!
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y