From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 21 1998 - 22:53:22 MST
Anders Sandberg wrote:
>
> "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
>
> > *I* cannot ignore legacy systems; non-modular design makes changes hard both
> > for us and for evolution. In both cases, the dependency trees have gone beyond
> > what a human mind could keep track of.
>
> Do you really think posthumans can ignore legacy systems and design
> everything from scratch whenever they need it? That sounds very
> uneconomical. And for any mind, there is a limit to how many
> interdependencies can be managed, and since highly interconnected
> systems tend to have a combinatorial explosion of dependencies it is
> not that unlikely that even posthumans will have trouble managing
> unstructured code.
I did address that later on in the post, with intelligence-efficient programming.
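To put a rough number on that combinatorial explosion, here's a back-of-the-envelope
sketch (mine, in Python, purely illustrative) of how fast unstructured pairwise
dependencies outgrow a layered design:

    # Toy model only - not a claim about any real architecture.
    def unstructured_dependencies(n):
        """Worst case: every pair of components may interact."""
        return n * (n - 1) // 2

    def layered_dependencies(n, layers=10):
        """Rough modular case: components only talk to adjacent layers."""
        per_layer = n // layers
        return (layers - 1) * per_layer * per_layer

    for n in (10, 100, 1000):
        print(n, unstructured_dependencies(n), layered_dependencies(n))

The unstructured count grows quadratically no matter how smart the mind tracking it
is; intelligence just moves the threshold where the tracking breaks down.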
> > Here's an interesting scenario: Envision
> > a Singularity sphere expanding at lightspeed; the outermost fringes will
> > always be less advanced than the center, barely past the Singularity.
> >
> > Now envision a lightspeed-unlimited Singularity, with the same problem, but in
> > software. The outermost frontiers are always at the limits of design, and the
> > PSE can only understand it well enough to design it, but not well enough to
> > get rid of the legacy code. But old applications can be redesigned ab initio
> > with the vast excess of computing power (advanced beyond what was necessary to
> > create them in the first place).
>
> Where does the vast excess of computing power come from? Note that it
> seems to be used up for all sorts of things, so unless the entities
> involved in the Singularity are extremely uniform there will be a huge
> demand for various uses; ab initio design will be just one of them, and
> likely less economically interesting than (say) storing descendants or
> useful programs (not to mention images of scantily clad Jupiter
> brains).
I can't say that I believe in the scenario of a Singularity as a collection of
individuals. Goals converge at sufficiently high intelligence levels, just
like pictures of the truth.
What I'm pointing out is that there won't be "bottleneck" legacy systems. If
old code is central, it will be redesigned. Where does the vast excess of
computing power come from? I'm assuming a constantly expanding pool. In our
day, there really is a lot of inertia that derives from human nature, even
after all optimization is accounted for. I do think things will be faster.
One interesting notion is that even if there's a completely unlimited amount
of computing power, aleph-null flops, the Singularity still might not be
autopotent. The extent to which the computing power in use at any moment could be
extended, controlled, and consolidated would still be finite. There would be software
inertia even in the absence of hardware inertia, and there would still be hard
choices for optimization. One would no longer conserve resources, or even
intelligence, but choices.
It's not until you start talking about time travel (which I think is at least
80% probable post-Singularity) that you get real "inertialess" systems. I
cannot even begin to imagine what this might look like from the inside. It is
incomprehensibility squared.
> Monolithic systems, where everything needs to be just in its right
> place, with no redundancies and a central design, don't occur in
> nature. Organisms are actually modular (even if they also contain a
> tight web of evolved interconnections and tricks between the modules),
> distributed and often highly redundant. Monoliths seem to be too
> brittle and expensive to maintain to function well in a changing,
> imperfect world where computing resources are in demand.
I think this confuses the disadvantages of human design with the disadvantages
of intelligent design in general. Remember, evolved systems fail too. Humans
go insane. It's just a different kind of crash. And, even using your
assumptions, I'd rather suffer a general protection fault and be rebooted from
backups by an exoself using up 1% of resources, than go insane while using 75%
of resources for redundant neural processors.
> > Why can't all five billion copies fail in the same way, then?
>
> Because they are independent systems, not a single master program run
> on the ISO locomotion server.
Are PSEs more likely to suffer from asteroid strikes or Y2K? If every date in
the world ran through a single module, we wouldn't have this problem. Yes, I
know we would have other problems, but the operative word is "we".
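A toy sketch of the single-module idea (mine, purely illustrative; the pivot value is
an arbitrary assumption): if every two-digit year were interpreted in exactly one
place, the Y2K fix would be one changed constant instead of a hunt through every
scattered record format on the planet.

    PIVOT = 50  # assumption: 00-49 mean 20xx, 50-99 mean 19xx

    def expand_year(two_digit_year):
        """The single point where legacy two-digit years get interpreted."""
        if not 0 <= two_digit_year <= 99:
            raise ValueError("expected a two-digit year")
        if two_digit_year < PIVOT:
            return 2000 + two_digit_year
        return 1900 + two_digit_year

    assert expand_year(98) == 1998
    assert expand_year(3) == 2003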
Five billion identical systems can fail in perfect synchronization. Just
watch. One solves this problem with robust architectures, not duplication of
weak architectures.
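By "robust" I mean something like this minimal sketch (mine, illustrative only):
redundancy through genuinely different algorithms that cross-check each other, rather
than five billion copies sharing every bug.

    import math

    def sqrt_newton(x, iterations=40):
        """Newton's method."""
        guess = x if x > 0 else 1.0
        for _ in range(iterations):
            guess = 0.5 * (guess + x / guess)
        return guess

    def sqrt_bisection(x, tolerance=1e-12):
        """Bisection search - shares no failure mode with Newton's update."""
        low, high = 0.0, max(1.0, x)
        while high - low > tolerance:
            mid = (low + high) / 2
            if mid * mid < x:
                low = mid
            else:
                high = mid
        return (low + high) / 2

    def robust_sqrt(x):
        """Accept the answer only when the independent methods agree."""
        answers = (sqrt_newton(x), sqrt_bisection(x), math.sqrt(x))
        if max(answers) - min(answers) > 1e-6:
            raise RuntimeError("independent algorithms disagree")
        return sum(answers) / len(answers)

    print(robust_sqrt(2.0))

Scale that principle up and you get redundancy without duplication.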
> > Perhaps
> > there will be redundancy, but without duplication - multiple, separate
> > algorithms.
>
> Not unlikely. Isn't this why we want to have individuals?
But they wouldn't be individuals - suppose Anders Sandberg has stripped from
him every algorithm he has in common with any other member of the human race,
and is given immersive perception and control of a fleem grobbler, one of the
five major independent fleem-grobbling systems. Is he still Anders Sandberg?
Go ask John K Clark. (Eliezer Yudkowsky doesn't care.)
> > Given that the number of humans keeps changing, we are not likely to be
> > exactly at the ideal redundancy level right now.
>
> Ideal for what purpose?
Maximizing output, minimizing chance of failure. Running the PSB
(Post-Singularity Benchmark). Simulating the complete expansion and collapse
of the Universe star by star. Take your pick.
> > > But efficiency for what end?
> >
> > Ya got me. "We never do learn all the rules of anything important,
> > after all." (_One For The Morning Glory_, John Barnes).
>
> As I see it, there is a built-in bias in reality towards efficiency of
> survival ("the survivors survive" - and pass on their
> information). But if survival in a post-Singularity world becomes a
> question of software and pattern survival, we can expect to see
> strategies at least as diverse as the current memes develop.
I'm not at all sure that evolution and singletons are compatible. Evolution
relies on differential chances of survival, but with everything being
determined by intelligence, the human-understandable attributes like
"survival" might be determined. Even if one concedes internal differences,
the externally observed behavior of every Singularity might be exactly the
same - expand in all directions at lightspeed, dive into a black hole. So
there might not be room for differential inclusive reproductive success.
Internally, the evolution you propose has to occur in defiance of the
superintelligent way to do things, or act on properties the superintelligence
doesn't care about. I'm not too sure about either. Evolution is a very
powerful thing from our perspective - almost as powerful as human
intelligence, and far older and more cunning. But we have no reason to
believe that it can walk all over SIs.
> > "That had been one of his earliest humiliations about the Beyond. He had
> > looked at the design diagram - dissections really - of skrodes. On the
> > outside, the thing was a mechanical device, with moving parts even. And the
> > text claimed that the whole thing would be made with the simplest of
> > factories, scarcely more than what existed in some places in the Slow Zone.
> > And yet the electronics was a seemingly random mass of components without any
> > trace of hierarchical design or modularity. It worked, and far more
> > efficiently than something designed by human-equivalent minds, but repair and
> > debugging - of the cyber component - was out of the question."
> > - _A Fire Upon The Deep_, Vernor Vinge
> (Good Vinge quotation by the way, I think I'll use it in my Java
> course to explain why object oriented programming is a good idea)
For humans, it sure is. But...
"A programmer with a codic cortex - by analogy to our current visual cortex -
would be at a vast advantage in writing code. Imagine trying to learn
geometry or mentally rotate a 3D object without a visual cortex; that's what
we do, when we write code without a module giving us an intuitive
understanding. An AI would no more need a "programming language" than we need
a conscious knowledge of geometry or pixel manipulation to represent spatial
objects; the sentences of assembly code would be perceived directly - during
writing and during execution."
-- From "Singularity Analysis", http://pobox.com/~sentience/sing_analysis.html
Who needs a Power to get a skrode? The first programming AIs will likely be
that incomprehensible to us mere humans. You know how much trouble it is to
get an AI to walk across a room? Well, that's how hard it is for an AI to
teach a human to write code.
OO programming is there for a reason, and that reason is transforming the raw
environment of assembly language into "objects" and "behaviors" comprehensible
to our cognition. But OO may make about as much sense to an AI as repainting
the Earth in regular patterns and basic shapes that combine to form computer
programs would make to us. Different ontologies, different rules.
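To make the contrast concrete, a toy example (mine, illustrative only): the same
hundred-credit deposit written for the human ontology of objects and behaviors, and
as the flat memory update it ultimately reduces to.

    class Account:
        """The human-comprehensible ontology: an object with a behavior."""
        def __init__(self, balance):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

    # The machine-level view: no "account", no "deposit" - just an offset
    # into a byte buffer being overwritten.
    memory = bytearray(8)
    OFFSET, WIDTH = 0, 8
    balance = int.from_bytes(memory[OFFSET:OFFSET + WIDTH], "little")
    memory[OFFSET:OFFSET + WIDTH] = (balance + 100).to_bytes(WIDTH, "little")

    account = Account(0)
    account.deposit(100)
    assert account.balance == int.from_bytes(memory, "little") == 100

The second form is the one a codic cortex would perceive directly; the first is
scaffolding for visually oriented primates.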
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.