Re: Paths to Uploading (was: RE: clones/'perfect'-mates/self-image)

From: Eugene Leitl (root@lrz.uni-muenchen.de)
Date: Sat Jan 02 1999 - 07:22:37 MST


On Fri, 1 Jan 1999, Billy Brown wrote:

> You guys have some interesting thoughts about uploading, but I think we're
> still talking past each other a bit when we get down to actual scenarios. I

I think this is only natural. At the end of the millennium we surely have
a lot more hard data at our disposal than Lem & Co. visionaries had in the
1960s, but this is nevertheless still highly speculative terrain. If
uploading becomes feasible at all, it is thought to be a few decades away,
where future histories are already badly warped by the curvature of the
nearby prediction horizon.

> [premises of some assumptions self-contradictory?]
>
> To do an upload requires advanced computers, advanced sensors, and knowledge
> about how the brain works. To get a reasonable upload scenario you have to
> project advances in all three of these fields at the same time, and see what
> you come up with.

Our ability to model the dynamics of (macro)molecular systems at low to
medium energies far outstrips our ability to create, and especially to
mass-produce, them. Because of the bootstrap bottleneck, and because
optimal hardware is kept intrinsically simple by the constraints of basic
physical law, the design space could already be very well sampled prior to
the advent of the very first assembler. If the analogy to a compiler
bootstrap is valid, the second-generation assembler (in the extreme case
only a few fabrication hours away) will already be very useful, and,
thanks to the very formidable computational resources produced by that
second generation, the third-generation systems following almost
immediately (months to years) should be truly optimal. Of course heavy
regulation (nanotechnology simulation software and assemblers declared
munitions, with a simultaneous implementation of an executive strong
enough to enforce that) could delay this, shifting the prognosis anywhere
from beneficial to catastrophic.

I doubt that new sensors are at all necessary for a feasible destructive
scan: recently available methods such as cryo-AFM of freeze-fractured,
vitrified tissue cryosections already allow imaging at near-molecular or
molecular resolution, and in principle the technique should be scalable to
imaging in the bulk by introducing abrasion, automation and massive
parallelism to attain adequate processivity. I think problems like the
creation and tight integration of sufficiently dense and fast memories for
interim voxel-set storage, and of algorithms for processing that data
within the scanning pipeline, are significantly harder.
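To make the storage problem concrete, here is a back-of-envelope sketch
(Python; every number in it is my own illustrative assumption, not a
measured figure) of the raw voxel data a whole-brain destructive scan at
near-molecular resolution would produce:

  # Rough estimate of raw voxel data for a whole-brain destructive scan.
  # All parameters are illustrative assumptions, not measured figures.
  brain_volume_cm3 = 1.4e3      # assumed brain volume, ~1400 cm^3
  voxel_edge_nm = 10.0          # assumed near-molecular scan resolution
  bytes_per_voxel = 1           # assumed 8-bit density value per voxel

  voxel_edge_cm = voxel_edge_nm * 1e-7
  voxels = brain_volume_cm3 / voxel_edge_cm**3
  raw_bytes = voxels * bytes_per_voxel
  print(f"voxels: {voxels:.1e}, raw data: {raw_bytes / 1e18:.0f} EB")

Even with a coarse single byte per voxel this lands somewhere in the
zettabyte range, which is why I suspect interim storage and in-pipeline
processing, not the sensor itself, will dominate the difficulty.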

The current state of the art in computational neurobiology is not exactly
negligible, and once sufficiently large computational performance arrives,
fully bottom-up automatic knowledge extraction should become feasible,
using 'ab initio' methods drawing on total genome sequence data, accurate
structures from protein folding prediction, and the above-mentioned
molecular-resolution maps of vitrified animal cells. The same applies to
top-down approaches with multi-million-channel microelectrode arrays for
in vivo recording and manipulation, and sufficient computational
performance for their analysis, as made possible by the advent of
molecular manufacturing of any flavour.
 
> Now, the traditional proof-of-principle for uploading is obviously never
> going to actually be used. It assumes no knowledge at all about how the

Oh, perhaps the Cyberworm gang will eventually produce a killer demo good
enough to warrant further funding in a really focused program. If it
weren't for the difficulty of patch-clamping the tiny critters, C. elegans
would be the prime candidate for a POP.

> brain works, which results in enormous computation requirements. Unless you
> think the Omega hardware will be built tomorrow, and everyone in the biotech

For the reasons I mentioned above, I do indeed think that the Omega
hardware will become available relatively early, i.e. in a few decades, if
things do indeed pan out as expected (but nobody ever expects the Spanish
Inquisition, of course).

> industry is about to jump off a cliff, that doesn't make sense.

The whole of humanity could jump off the cliff in a hard-edged Singularity
if somebody is foolish enough to create the boundary conditions for an SI
before we can do uploads on a broad scale. It takes far fewer ops and far
less knowledge to grow an alife Golem with excellent juggernaut potential.
(Yes, Dr. Scott, an accident has made it happen.)

> A simulation at the cellular level, relying purely on advanced knowledge of
> biochemistry, lets you reduce the computational burden by several orders of
> magnitude. It still isn't very likely, however, because it matches a modest

I agree with you that such a model is extremely valuable, especially for
bootstrap purposes, as the lowest or second-lowest tier in an automagically
progressive, learning, hierarchical simulation. (I wouldn't want to define
a framework for such a tour de force in software design, though; perhaps
no one who goes on two legs could.)

> increase in medical knowledge with a fantastic improvement in computers and
> sensor technology.

For about $10-20k, using off-the-shelf commodity components, you could now
build a conventional MD system capable of probing the dynamics of a
biological system roughly one million atoms in size over a time window a
few ns long. Even without a breakthrough to reduce the computational cost
of protein folding prediction (PFP), force fields will get a little faster
and a great deal more accurate over the coming decade or two, while
Moore's law should not run into saturation yet, particularly if the
development of molecular circuitry starts early enough to be smoothly
available when progress in semiconductor photolithography suddenly runs
out of steam.
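A crude workload estimate for such a run, with all parameters picked
purely for illustration (Python):

  # Rough MD workload estimate; every parameter is an assumption.
  atoms = 1.0e6                 # assumed system size
  window_ns = 10.0              # assumed simulated time window
  timestep_fs = 1.0             # typical all-atom MD timestep (assumed)
  flops_per_atom_step = 1.0e3   # assumed cost incl. cutoff nonbonded terms

  steps = window_ns * 1e6 / timestep_fs           # ns -> fs
  total_flops = steps * atoms * flops_per_atom_step
  print(f"timesteps: {steps:.1e}, total flop: {total_flops:.1e}")

That comes out near 1e16 flop, i.e. weeks on a small commodity cluster of
today, if my per-atom cost guess is in the right ballpark.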

The problem with imaging is less a sensor problem than a problem of
scaling existing technologies (nanorobotics/abrasive AFM, SNOM, vacuum
sublimation, excimer and plasma etching) into the micro (MEMS) and meso
(diamondoid systems) domains, using massive parallelism and automation.
In a sense, the task is much tougher than simply coming up with new
sensors.

> A much more probable scenario would project medical advances forward until
> there is hardware fast enough to run the sim, and sensors good enough to
> gather the data. That implies at least a moderately good understanding of

The impetus for new imaging comes from basic research, some of which is of
course medical. Medical funding could increase dramatically once the
potential of nanomedicine is fully understood by the mainstream. Hardware
good and fast enough will come from the mainstream, probably driven first
by multimedia demands and later perhaps by consumer and service robotics,
the next big thing after industrial automation.

> the brain - something better than just an understanding of biochemistry, but
> probably not good enough to just model the brain's data processing.

I don't quite follow you here. If you can model the neural tissue in
machina, all you need is to watch the movies and to abstract. The process
of abstraction can be made automatic. What is the problem, then?
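To illustrate what I mean by automatic abstraction, a minimal sketch
(Python; the 'detailed' trace is a stand-in toy, and all names and numbers
are my own assumptions): take a membrane-potential movie produced by the
fine-grained model and fit the parameters of a much coarser model to it by
brute-force search.

  # Minimal sketch of automatic abstraction: fit a coarse leaky-integrator
  # model to a trace standing in for the output of a detailed simulation.
  import numpy as np

  dt = 1e-4                                         # 0.1 ms timestep
  t = np.arange(0.0, 0.5, dt)
  i_in = 1e-9 * (np.sin(2 * np.pi * 5 * t) + 1.0)   # assumed input current

  def simulate(tau, r, i_in, dt):
      # Leaky integrator: dV/dt = (-V + R*I) / tau
      v = np.zeros_like(i_in)
      for k in range(1, len(i_in)):
          v[k] = v[k-1] + dt * (-v[k-1] + r * i_in[k-1]) / tau
      return v

  # Pretend this trace came from the detailed bottom-up model.
  rng = np.random.default_rng(0)
  v_detailed = simulate(0.02, 1e8, i_in, dt) + rng.normal(0, 1e-4, t.shape)

  # "Abstraction" = automated parameter search minimizing the mismatch.
  best = min(((tau, r) for tau in np.linspace(0.005, 0.05, 10)
                       for r in np.linspace(5e7, 2e8, 10)),
             key=lambda p: np.mean((simulate(*p, i_in, dt) - v_detailed)**2))
  print("recovered tau, R:", best)

Nothing here needs a human in the loop; the same scheme should, in
principle, scale from toy traces to abstracting whole simulated tissue
into progressively coarser models.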

ciao,
'gene


