Re: SI: Singleton and programming

From: J. Maxwell Legg (income@ihug.co.nz)
Date: Sun Nov 22 1998 - 07:03:48 MST


"Eliezer S. Yudkowsky" wrote:
>
> Anders Sandberg wrote:
> >
> > "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
> >
> > > *I* cannot ignore legacy systems; non-modular design makes changes hard for us
> > > and for evolution. In both cases, the dependency trees have gone beyond where a
> > > human mind could keep track of them.
> >
> > Do you really think posthumans can ignore legacy systems and design
> > everything from scratch whenever they need it? That sounds very
> > uneconomical. And for any mind, there is a limit to how many
> > interdependencies can be managed, and since highly interconnected
> > systems tend to have a combinatorial explosion of dependencies, it is
> > not that unlikely that even posthumans will have trouble managing
> > unstructured code.
>
> I did address that later on in the post, with intelligence-efficient programming.
>
> > > Here's an interesting scenario: Envision
> > > a Singularity sphere expanding at lightspeed; the outermost fringes will
> > > always be less advanced, hardly just past Singularity, than the center.
> > >
> > > Now envision a lightspeed-unlimited Singularity, with the same problem, but in
> > > software. The outermost frontiers are always at the limits of design, and the
> > > PSE can only understand it well enough to design it, but not well enough to
> > > get rid of the legacy code. But old applications can be redesigned ab initio
> > > with the vast excess of computing power (advanced beyond what was necessary to
> > > create them in the first place).
> >
> > Where does the vast excess of computing power come from? Note that it
> > seems to be used up for all sorts of things, so unless the entities
> > involved in the Singularity are extremely uniform there will be a huge
> > demand for various uses; ab initio design will be just one of them, and
> > likely less economically interesting than (say) storing descendants or
> > useful programs (not to mention images of scantily clad jupiter
> > brains).
>
> I can't say that I believe in the scenario of a Singularity as a collection of
> individuals. Goals converge at sufficiently high intelligence levels, just
> like pictures of the truth.
>
> What I'm pointing out is that there won't be "bottleneck" legacy systems. If
> old code is central, it will be redesigned. Where does the vast excess of
> computing power come from? I'm assuming a constantly expanding pool. In our
> days, there really is a lot of inertia that derives from human nature, even
> after all optimization is accounted for. I do think things will be faster.
>
> One interesting notion is that even if there's a completely unlimited amount
> of computing power, aleph-null flops, the Singularity still might not be
> autopotent. The extent to which current computing power could be extended,
> controlled, consolidated would still be finite. There would be software
> inertia even in the absence of hardware inertia, and there would still be hard
> choices for optimization. One would no longer conserve resources, or even
> intelligence, but choices.
>
> It's not until you start talking about time travel (which I think is at least
> 80% probable post-Singularity) that you get real "inertialess" systems. I
> cannot even begin to imagine what this might look like from the inside. It is
> incomprehensibility squared.

been there, done that... (don't ask about Wavelink - just yet.)

>
> > Monolithic systems, where everything needs to be just in its right
> > place, with no redundancies and a central design, don't occur in
> > nature. Organisms are actually modular (even if they also contain a
> > tight web of evolved interconnections and tricks between the modules),
> > distributed and often highly redundant. Monoliths seem to be too
> > brittle and expensive to maintain to function well in a changing,
> > imperfect world where computing resources are in demand.
>
> I think this confuses the disadvantages of human design with the disadvantages
> of intelligent design in general. Remember, evolved systems fail too. Humans
> go insane. It's just a different kind of crash. And, even using your
> assumptions, I'd rather suffer a general protection fault and be rebooted from
> backups by an exoself using up 1% of resources, than go insane while using 75%
> of resources for redundant neural processors.
>
> > > Why can't all five billion copies fail in the same way, then?
> >
> > Because they are independent systems, not a single master program run
> > on the ISO locomotion server.
>
> Are PSEs more likely to suffer from asteroid strikes or Y2K? If every date in
> the world ran through a single module, we wouldn't have this problem. Yes, I
> know we would have other problems, but the operative word is "we".
>
> Five billion identical systems can fail in perfect synchronization. Just
> watch. One solves this problem with robust architectures, not duplication of
> weak architectures.
>
> > > Perhaps
> > > there will be redundancy, but without duplication - multiple, separate
> > > algorithms.
> >
> > Not unlikely. Isn't this why we want to have individuals?
>
> But they wouldn't be individuals - suppose Anders Sandberg had stripped from
> him every algorithm he has in common with any other member of the human race,
> and was given immersive perception and control of a fleem grobbler, one of the
> five major independent fleem-grobbling systems. Is he still Anders Sandberg?
> Go ask John K Clark. (Eliezer Yudkowsky doesn't care.)
>
> > > Given that the number of humans keeps changing, we are not likely to be
> > > exactly at the ideal redundancy level right now.
> >
> > Ideal for what purpose?
>
> Maximizing output, minimizing chance of failure. Running the PSB
> (Post-Singularity Benchmark). Simulating the complete expansion and collapse
> of the Universe star by star. Take your pick.

This is where PCP for motion and reversible computing come in. Personal
Construct Psychology and the repertory grid technique, as implemented in The
Ingrid Thought Processor, could handle everything from inverse kinematics to
anticipation, from muscles to love.
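
For anyone who hasn't met one: a repertory grid is just a ratings matrix of
elements against bipolar constructs, and an Ingrid-style analysis is
essentially a principal component analysis of that matrix. Here is a minimal
sketch in Python (numpy assumed); the elements, constructs and numbers are
invented for illustration and are not taken from the real Ingrid code:

# Minimal sketch of the repertory grid technique behind Personal Construct
# Psychology. Element and construct names, and the ratings, are made up.
import numpy as np

# Elements: the things being construed (here, hypothetical motion states).
elements = ["reach", "grasp", "lift", "throw", "rest"]

# Bipolar constructs: each element is rated 1..5 between the two poles.
constructs = ["tense - relaxed", "fast - slow", "anticipated - reactive"]

# Ratings grid: one row per construct, one column per element.
grid = np.array([[4, 5, 3, 5, 1],
                 [3, 2, 2, 5, 1],
                 [5, 4, 3, 2, 1]], dtype=float)

# An Ingrid-style analysis is essentially principal component analysis:
# centre each construct row, then take the singular value decomposition.
centred = grid - grid.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centred, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)
print("variance explained:", np.round(explained, 2))
print("element loadings on component 1:",
      dict(zip(elements, np.round(vt[0], 2))))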
>
> > > > But efficiency for what end?
> > >
> > > Ya got me. "We never do learn all the rules of anything important,
> > > after all." (_One For The Morning Glory_, John Barnes).
> >
> > As I see it, there is a built-in bias in reality towards efficiency of
> > survival ("the survivors survive" - and pass on their
> > information). But if survival in a post-Singularity world becomes a
> > question of software and pattern survival, we can expect strategies
> > at least as diverse as the current memes to develop.
>
Gee, I hope my website and my Ingrid freeware development survive me so
I can pass on my information.

> I'm not at all sure that evolution and singletons are compatible. Evolution
> relies on differential chances of survival, but with everything being
> determined by intelligence, the human-understandable attributes like
> "survival" might be determined. Even if one concedes internal differences,
> the externally observed behavior of every Singularity might be exactly the
> same - expand in all directions at lightspeed, dive into a black hole. So
> there might not be room for differential inclusive reproductive success.

I have long since devised a different reproductive strategy from all
others I know. My strategic timetable is in my resume.

>
> Internally, the evolution you propose has to occur in defiance of the
> superintelligent way to do things, or act on properties the superintelligence
> doesn't care about. I'm not too sure about either. Evolution is a very
> powerful thing from our perspective - almost as powerful as human
> intelligence, and far older and more cunning. But we have no reason to
> believe that it can walk all over SIs.

Your last sentence seems to contradict the one before it, since my
linguistic parser determines it to be a reason in itself. I don't, of
course, agree with you. Do you see the SI as not coming from a (ek)human?
I don't, because I'm chauvinistic. Please explain.

Nonetheless, I'd prefer to believe in Voodoo and the law of the jungle if
I wanted to outsmart evolution, which is probably another contradiction.
Am I wrong in thinking that a parasite cannot nurture and support the
future growth of a system while its presently decaying individual members
rot? My knowledge of cellular biology isn't that good, but I'm sure I've
seen in the jungle a tree that had grown down across a stream and up the
other side, with a parasitic vine supporting new growth on the far side
while the part of the tree that was in the stream had rotted away.

I hope so, because this would be a good uploading tactic. Now, with the
preceding analogy in mind:

I go into virtual reality (augmented with artificial consciousness -> SI),
and come back into the world (the other side of the stream)
after some disaster (the rotten tree part in the stream),
and repair it (new rooted growth of the original tree).

In the following open directory, look at page05.gif (7c); the bar-coded
sketch diagram of the parasite is at:

http://homepages.ihug.co.nz/~income/ekus/page99.gif

As an aside, the reason the present-day elite can't see The Ingrid
Thought Processor in their plans is, I believe, that they are destined
to rot away. I, on the other hand, can see a parallel version of Ingrid
starting as early as next year and extending the tree of life, while I
leave the rest to fate.

>
> > > "That had been one of his earliest humiliations about the Beyond. He had
> > > looked at the design diagram - dissections really - of skrodes. On the
> > > outside, the thing was a mechanical device, with moving parts even. And the
> > > text claimed that the whole thing would be made with the simplest of
> > > factories, scarcely more than what existed in some places in the Slow Zone.
> > > And yet the electronics was a seemingly random mass of components without any
> > > trace of hierarchical design or modularity. It worked, and far more
> > > efficiently than something designed by human-equivalent minds, but repair and
> > > debugging - of the cyber component - was out of the question."
> > > - _A Fire Upon The Deep_, Vernor Vinge

I'd design it so it had human-level communication abilities, so that it
could collaborate on repair. Again I come back to radical constructivism
and PCP for motion.

>
> > (Good Vinge quotation, by the way; I think I'll use it in my Java
> > course to explain why object-oriented programming is a good idea)
>
> For humans, it sure is. But...
>
> "A programmer with a codic cortex - by analogy to our current visual cortex -
> would be at a vast advantage in writing code. Imagine trying to learn
> geometry or mentally rotate a 3D object without a visual cortex; that's what
> we do, when we write code without a module giving us an intuitive
> understanding. An AI would no more need a "programming language" than we need
> a conscious knowledge of geometry or pixel manipulation to represent spatial
> objects; the sentences of assembly code would be perceived directly - during
> writing and during execution."

This is my planned hibernation level, but the sentences I use are
geometric manifolds in haptic space. If you can ever get hold of it,
read a book called The Blind Geometer.

> -- From "Singularity Analysis", http://pobox.com/~sentience/sing_analysis.html
>
> Who needs a Power to get a skrode? The first programming AIs will likely be
> that incomprehensible to us mere humans. You know how much trouble it is to
> get an AI to walk across a room? Well, that's how hard it is for an AI to
> teach a human to write code.

Not if the human who is being taught is the one birthing the AI.

>
> OO programming is there for a reason, and that reason is transforming the raw
> environment of assembly language into "objects" and "behaviors" comprehensible
> to our cognition. But OO may make about as much sense to an AI, as repainting
> the Earth in regular patterns and basic shapes that combine to form computer
> programs would make to us. Different ontologies, different rules.

Is the weather a program, then? Is this referring to the AI using a
superposition of CAs mapped in such a way as to cause a hallucination,
or a gestalt of an altogether different thought pattern than what the
individual CAs' function was at the time? Carrying that further, it
follows that a predictable but unlikely set of quatrains, as in
Nostradamus etc., helps in achieving the AI's human-level artificial
consciousness.
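
To make the superposition idea less hand-wavy, here is a toy sketch (Python,
numpy assumed, rule choices arbitrary) that runs two elementary cellular
automata over the same seed row and XORs their states; the combined pattern
is something neither rule produces on its own, which is roughly the kind of
gestalt I have in mind:

# Toy sketch of "superposing" two cellular automata: run two elementary
# CA rules on the same initial row and XOR their states each step. The
# combined pattern belongs to neither rule alone. Purely illustrative.
import numpy as np

def step(row, rule):
    """One step of an elementary CA with the given Wolfram rule number."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = left * 4 + row * 2 + right          # neighbourhood coded as 0..7
    table = (rule >> np.arange(8)) & 1        # rule as a lookup table
    return table[idx]

width, steps = 41, 20
row = np.zeros(width, dtype=int)
row[width // 2] = 1                           # single seed cell

a, b = row.copy(), row.copy()
for _ in range(steps):
    a, b = step(a, 30), step(b, 90)           # two different rules
    combined = a ^ b                          # the "superposed" pattern
    print("".join("#" if c else "." for c in combined))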

Are you suggesting that OO can be used at one level within the AI, but
that in a superposition of many OO layers the AI was/is conscious, and
that this consciousness or intelligence isn't artificial? Again I come
back to questioning the need for a global theoretic control-and-communication
function that allows the designer to talk to the awakened AI.

Sorry for the idiosyncratic language, and I don't blame you for not
understanding; I'm just tossing out the germ of a redundant idea.

http://come.to/ingrid


