From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Thu Jul 08 1999 - 00:57:57 MDT
Billy Brown writes:
> You've obviously never actually written a large-scale simulation program.
Guilty as charged. Ask me again in a few months, though.
(I've used a number of packages, though. They are not all that intimidating).
> Even the simplest (from a programming perspective) approaches to uploading
> will require at least hundreds of millions of lines of extremely complex
> code. That would be for a brute force molecular simulator, which would need
It certainly won't involve any such thing as "hundreds of millions of
lines of extremely complex code" -- what for? Artificial reality
renderers with VRML7? Navier-Stokes surf? Finite-element skyscrapers?
Alias|Wavefront Construction, Ltd.? Industrial Light & Magic
Landscaping? Get real.
Nobody will be programming as we know it 20..30 years downstream
anyway. I'm sure one need not submit fitness functions in a
formal language. May RSI go the way of silicosis. Good riddance.
> far more power than you estimate (~10^30 MIPS, if I remember right).
I've never said you'd need to run an upload at the molecular dynamics
level -- that would be utterly impractical.
Code complexity is not a function of system size. "Large-scale"
isn't counted in lines of code, obviously. With embarrassingly
parallel codes you just add nodes -- the code remains the same.
For instance, an MPI cellular automaton code distributed over
homogeneous nodes aligned on a 3d lattice scales _strictly
linearly_ to an arbitrarily large number of nodes (and comfortably
fits onto a single page of C -- see "How to Build a Beowulf", MIT Press).
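To make the point concrete, here is a minimal sketch of the idiom (my
own toy example, not the book's code, and 1-D rather than 3-D to keep
it short): a rule-110 automaton decomposed across MPI ranks with halo
exchange. The constants LOCAL_N and STEPS are arbitrary; the source
stays the same whether you run it on 2 nodes or 2000, only the
mpirun -np argument changes.

/* 1-D cellular automaton (rule 110) with MPI domain decomposition.
 * Each rank owns LOCAL_N cells plus two ghost cells for the halo. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define LOCAL_N 1024   /* cells per rank (illustrative) */
#define STEPS   100

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;   /* periodic boundaries */
    int right = (rank + 1) % size;

    /* cell[0] and cell[LOCAL_N+1] are ghost cells */
    unsigned char *cell = calloc(LOCAL_N + 2, 1);
    unsigned char *next = calloc(LOCAL_N + 2, 1);
    if (rank == 0) cell[1] = 1;             /* single seed cell */

    for (int t = 0; t < STEPS; t++) {
        /* exchange boundary cells with both neighbours */
        MPI_Sendrecv(&cell[LOCAL_N], 1, MPI_UNSIGNED_CHAR, right, 0,
                     &cell[0],       1, MPI_UNSIGNED_CHAR, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&cell[1],         1, MPI_UNSIGNED_CHAR, left,  1,
                     &cell[LOCAL_N+1], 1, MPI_UNSIGNED_CHAR, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* apply rule 110 to the interior cells */
        for (int i = 1; i <= LOCAL_N; i++) {
            int pattern = (cell[i-1] << 2) | (cell[i] << 1) | cell[i+1];
            next[i] = (110 >> pattern) & 1;
        }
        unsigned char *tmp = cell; cell = next; next = tmp;
    }

    /* trivial sanity check: count live cells across all ranks */
    long local = 0, total = 0;
    for (int i = 1; i <= LOCAL_N; i++) local += cell[i];
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("live cells after %d steps: %ld\n", STEPS, total);

    free(cell); free(next);
    MPI_Finalize();
    return 0;
}

The 3-D version differs only in the neighbour bookkeeping (six halo
exchanges instead of two); the line count barely moves while the
simulated volume grows with the node count.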
It is just that hardware reliability becomes an issue with more
than a few hundred or thousand nodes. With a spatial mapping of the
simulation volume to nodes, the failure of a single node would
be equivalent to a pinpoint hemorrhage. With cells made from
molecular components, radiation damage would do no worse than it
does to the real thing. The pattern restructures itself
dynamically to compensate for damage. Once the cumulative
damage renders further operation uneconomical, swap out the
state, install a new module and load the state back.
> Higher-level simulations can greatly reduce the computer power needed (as
> well as the volume of data), but they do so at the price of increasing
I can't agree fully, but can't disagree either. HashLife certainly runs
many orders of magnitude faster, and handles much larger universes, than
the same thing coded by zombies. But HashLife needs big lookups, which
makes the grains large -- which doesn't pay in a relativistic
context. Also, the Life rules are special; it's not an EoC automaton.
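The lookup/grain trade-off is easiest to see at the smallest grain.
Here's a toy sketch (mine, not HashLife proper): precompute a table
mapping every 4x4 block of cells to its 2x2 centre one generation
later, so the simulator's inner loop becomes one array lookup per
2x2 grain. Full HashLife does this recursively on hash-consed
quadtrees, so both the "lookup" and the "grain" get much bigger.

/* Lookup-table Life at the 4x4 -> 2x2 grain. Illustrative only. */
#include <stdio.h>
#include <stdint.h>

static uint8_t table[1 << 16];   /* 64 K entries: 4x4 block -> 2x2 centre */

/* extract cell (x,y) from a 4x4 block packed row-major into 16 bits */
static int cell(uint16_t b, int x, int y) { return (b >> (y * 4 + x)) & 1; }

/* one Life step for a single cell, given its neighbours inside the block */
static int life(uint16_t b, int x, int y)
{
    int n = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            if (dx || dy) n += cell(b, x + dx, y + dy);
    return n == 3 || (n == 2 && cell(b, x, y));
}

int main(void)
{
    /* build the table once; only the four centre cells have all their
     * neighbours inside the 4x4 block, so only they are advanced */
    for (uint32_t b = 0; b < (1u << 16); b++) {
        uint8_t r = 0;
        r |= life(b, 1, 1) << 0;
        r |= life(b, 2, 1) << 1;
        r |= life(b, 1, 2) << 2;
        r |= life(b, 2, 2) << 3;
        table[b] = r;
    }
    printf("lookup table built: %d entries\n", 1 << 16);
    return 0;
}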
> program complexity. At the opposite end of the spectrum you arrive at
> software that elegantly models the abstract processes of the mind, and would
> be so big that it could never be written by humans (maybe 10^13 LOC?).
What makes you think you can collapse what the brain does into "elegant
models"? The brain isn't elegant, it's messy. There are lots of grand
claims coming from Moravec/Minsky that the wetware is really
inefficient, and that you can collapse lots of circuitry into a few lines
of code. Well, neuroscience doesn't seem to think so. Let's upload a
nematode first, and then we'll see how much detail we need. A 1 k DSP cluster
is probably sufficient for C. elegans.
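Back of the envelope, with deliberately generous assumptions of mine
(the 302 neurons and ~7000 synapses are the known C. elegans figures;
compartments per neuron, update rate and cost per update are guesses):

#include <stdio.h>

int main(void)
{
    double neurons      = 302;    /* C. elegans neuron count */
    double synapses     = 7000;   /* approx. chemical synapses */
    double compartments = 100;    /* assumed spatial resolution per neuron */
    double rate_hz      = 1000;   /* assumed update rate */
    double ops_per_upd  = 100;    /* assumed ops per compartment/synapse update */

    double ops = (neurons * compartments + synapses) * ops_per_upd * rate_hz;
    printf("~%.2e op/s, i.e. roughly %.0f MFLOPS\n", ops, ops / 1e6);
    return 0;
}

That lands in the low-GFLOPS range, which a modest DSP cluster covers
with room to spare even if my guesses are off by an order of magnitude.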
> Of course, what all of this really means is simply that we software
> engineers need to find better ways of creating programs. Perhaps some
> combination of evolutionary programming, advanced languages and expert
> systems will allow us to write programs of the necessary scale without
> running into fatal reliability problems? At any rate, we need to get off of
> our collective butts and start looking for real solutions.
Agreed, but awareness of the problem is spotty at
best. Changes in the IT landscape take decades to propagate. People
are the bottleneck. Things will only start moving when you can factor
the monkeys out of the equation.