>H: Upgrading

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Mar 26 1999 - 12:20:36 MST


Billy Brown wrote:
>
> Eliezer S. Yudkowsky wrote:
> > I guess the question everyone else has to ask is whether the possibility
> > that late-term Powers are sensitive to the initial conditions is
> > outweighed by the possibility of some first-stage transhuman running
> > amuck. It's the latter possibility that concerns me with den
> > Otter and Bryan Moss, or for that matter with the question of whether the
> > seed Power should be a human or an AI.
>
> So, remind me again, why exactly are we so worried about a human upload?

Well, we aren't. I think human uploading will take a transhuman - just
to straighten out the architecture change, if nothing else; William H.
Calvin, a theoretical neurophysiologist, thinks that the uploadee would run into all kinds
of problems. One imagines that simulating neurons down to the last
decimal to avoid the problems would take about a thousand times as much
computing power as was intrinsically necessary. And of course, I think
it will take exotic hardware. But the main argument is simply that
uploading is such advanced software and hardware ultratechnology as to
make it easily possible for one guy in a lab to either eat the world or
write a seed AI.

> The last time I looked, our best theory of the human brain had it being a
> huge mass of interconnected neural nets, with (possibly) some more
> procedural software running in an emulation layer. That being the case, a
> lone uploaded human isn't likely to be capable of making any vast
> improvements to it. By the time he finishes his first primitive neurohack
> he's going to have lots of uploaded company.

I think it's even worse than that; I don't think our neurons are doing
anything as inefficient as an emulation layer. I think the brain's
low-level algorithms will look like Vinge's description of a skrode -
unchunkable without a specialized domdule (domain module). Like trying to understand
the "spirit" of a piece of art by examining it pixel by pixel, only more so.

We would get there eventually, of course. I would concentrate on
decoding interfaces between brain modules rather than the modules
themselves, with the hope of being able to post-process information, add
a neural-understanding module to the "bus", or even replace modules
entirely. Presumably, although not necessarily, the interfaces will
have a simpler format. I would also try to reprogram the visual cortex
to model neurons instead of photons, since the visual cortex is
relatively well understood and chunkable. To enhance general smartness,
once armed with a neural domdule, I would look for modules based on
search trees and add power. And of course, I would make backups, and
operate on a duplicate rather than myself.
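
To make that concrete, here's a toy sketch of the decode-the-interfaces
strategy - pure illustration in Python, where every name (Bus, the
"V1->V2" interface, the tap) is invented for the example rather than
taken from any real neuroscience:

# Hypothetical sketch: treat decoded inter-module interfaces as a "bus"
# that can be tapped for post-processing or rerouted to a replacement
# module, without ever understanding the modules' internals.
from typing import Any, Callable, Dict, List

class Bus:
    """Carries messages between modules over decoded interfaces."""
    def __init__(self) -> None:
        self.routes: Dict[str, Callable[[Any], Any]] = {}     # interface -> module
        self.taps: Dict[str, List[Callable[[Any], Any]]] = {} # interface -> post-processors

    def register(self, interface: str, module: Callable[[Any], Any]) -> None:
        self.routes[interface] = module            # replace a module entirely

    def tap(self, interface: str, post: Callable[[Any], Any]) -> None:
        self.taps.setdefault(interface, []).append(post)   # add post-processing

    def send(self, interface: str, message: Any) -> Any:
        out = self.routes[interface](message)
        for post in self.taps.get(interface, []):
            out = post(out)                        # post-process decoded traffic
        return out

# Usage: only the interface format is decoded, not the module internals.
bus = Bus()
bus.register("V1->V2", lambda msg: {"edges": msg})           # stand-in visual module
bus.tap("V1->V2", lambda out: {**out, "annotated": True})    # neural-understanding tap
print(bus.send("V1->V2", [0.1, 0.9]))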

But your logic about having lots of company doesn't necessarily hold.
My computer has proportionally far more processing speed than RAM - and
the human brain, compared to computers, uses a *lot* more memory relative
to processing power. (Also, don't forget that uploading requires
incredibly sophisticated nanotechnology.) So any hardware capable of
holding the upload at all would have processing power to spare, and once
uploaded successfully, even before any functional improvements were made,
the first upload would be running at days of subjective time per real
second, or even years per second.
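
The arithmetic behind "days per second" is just a unit conversion; here's
a toy version, with the speedup factor left as an explicit assumption
rather than a prediction:

# Toy calculation of subjective speedup.  The speedup figure below is a
# placeholder assumption, not a number from the post or from anywhere else.
SECONDS_PER_DAY = 86_400

def subjective_rate(speedup: float) -> str:
    """Convert a raw speedup factor into subjective days per real second."""
    days = speedup / SECONDS_PER_DAY
    return f"{speedup:.0e}x real time = {days:,.1f} subjective days per second"

# e.g. an assumed million-fold speedup works out to ~11.6 days per second;
# a ~3e7-fold speedup would be roughly one subjective year per second.
print(subjective_rate(1e6))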

> I think a seed AI closely modeled on the human brain would face similar
> problems. What gives Ellison-type architectures the potential for growth is
> the presence of a coding domdule, coupled with the fact that the software
> has a rational architecture that can be understood with a reasonable amount
> of thought. Any system that doesn't have the same kind of internal
> simplicity is going to have a much flatter enhancement curve (albeit still
> exponential).

Very true. (But don't leave out *documentation*, particularly
documentation of the interface structure, so that new modules can be
designed even if the old ones, not being self-documented, can't be understood.)
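
For instance - and this is only a software analogy, with both planner
modules invented for the example - a documented interface lets you write
a brand-new module against the contract even when the old module's
internals stay a black box:

# Sketch of the documentation point: if the *interface* is documented, a
# new module can be designed against it without understanding the old one.
from abc import ABC, abstractmethod

class PlanningInterface(ABC):
    """Documented contract: takes a goal string, returns an ordered list of steps."""
    @abstractmethod
    def plan(self, goal: str) -> list[str]: ...

class LegacyPlanner(PlanningInterface):
    """Undocumented internals - a black box that happens to fit the contract."""
    def plan(self, goal: str) -> list[str]:
        return [f"do something about {goal}"]      # opaque behavior

class NewPlanner(PlanningInterface):
    """Written purely from the interface documentation, no knowledge of LegacyPlanner."""
    def plan(self, goal: str) -> list[str]:
        return [f"analyze {goal}", f"decompose {goal}", f"execute {goal}"]

def run(planner: PlanningInterface, goal: str) -> None:
    print(planner.plan(goal))

run(LegacyPlanner(), "tie shoelaces")
run(NewPlanner(), "tie shoelaces")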

So why am I worried about human uploads? I'm not. I'm just making the
point that, *given* the uploads Otter wants, he has no chance of being
the first in. It's a general point that applies to imperialist
nanocrats as well, to name a higher line of probability.

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

