RE: design complexity of assemblers (was: Ramez Naam: redesigning children)

From: Eugen Leitl (eugen@leitl.org)
Date: Tue Nov 26 2002 - 12:41:51 MST


On Tue, 26 Nov 2002, Ramez Naam wrote:

> Molecular modeling is a field I work in, and I'm not aware of any

It's one of my hats, too.

> current modeling methods that are able to handle 10^9 atoms.

It's not at all current; the landmark billion-atom paper is quite a few
years old by now (I think the figure was actually several billion, but I
can't produce the ref offhand, as the paper is somewhere in a box in the
cellar). Sure, it's a metal, and it's a short-range potential, but there's
no reason why electrostatics can't be done in constant time on a parallel
box with appropriate algorithms.
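For what it's worth, the linear scaling behind those billion-atom runs is
mostly just spatial binning: with a short-range cutoff you only need to
look at atoms in neighbouring cells, so cost grows as O(N) instead of
O(N^2). A minimal, unoptimized sketch of the idea (parameters and names
are mine, not taken from any of the codes mentioned):

```python
import numpy as np

def lj_forces_bruteforce(pos, box, rc):
    """O(N^2) reference: truncated Lennard-Jones forces, minimum image."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)          # minimum-image convention
            r2 = d @ d
            if r2 < rc * rc:
                inv6 = (1.0 / r2) ** 3
                fij = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * d
                f[i] += fij
                f[j] -= fij
    return f

def lj_forces_celllist(pos, box, rc):
    """O(N): bin atoms into cells with side >= rc; only the neighbouring
    cells can contain atoms within the cutoff."""
    ncell = int(box // rc)                        # cells per box edge
    cells = {}
    for idx, c in enumerate((pos / box * ncell).astype(int) % ncell):
        cells.setdefault(tuple(c), []).append(idx)
    f = np.zeros_like(pos)
    for (cx, cy, cz), members in cells.items():
        # the 27 periodic neighbour cells, deduplicated for small ncell
        neigh = {((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
        for i in members:
            for nb in neigh:
                for j in cells.get(nb, ()):
                    if j <= i:                    # count each pair once
                        continue
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)
                    r2 = d @ d
                    if r2 < rc * rc:
                        inv6 = (1.0 / r2) ** 3
                        fij = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * d
                        f[i] += fij
                        f[j] -= fij
    return f
```

The two agree to rounding on a small test system; the point is that the
cell-list version touches only a bounded number of neighbours per atom,
which is why the per-atom cost stays flat as N grows, and why it
parallelizes by domain decomposition.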
 
> Consider that proteins are often have on the order of 10^3 atoms, and
> we cannot do atomic level modeling of proteins - we're forced to use

Of course we can, and we do. The problem with proteins is that they are
heavily solvated (the water is the culprit, not the protein itself; a
protein in vacuo is dead easy), that the time domain is ~µs to ~ms (some
processes occur on the order of 10 s), and that the forcefields are not
nearly accurate enough. That pretty much kills it, for the time being. At
least until a 10^6-CPU model of Blue Gene lands, which should happen by
2006, or so.

However, none of this is relevant if we're talking about dry machine-phase
systems. They're dry and stiff, with simple trajectories and short cycles
which are not at all sensitive to forcefield accuracy. (Mechanosynthesis
is highly localized and blackboxable, so you can use hybrid methods.)

> highly heuristic methods that are specifically written to handle only
> proteins, and that still have extremely high degrees of inaccuracy.

I don't see why one shouldn't be able to draft a sufficiently accurate
all-purpose force field which performs at least as well as current ones
(say, those in GROMACS); it's just that it's not relevant in practice. No
one is modelling hybrid systems. At least not yet.
 
> I refer you back to my earlier posts for a discussion of the scaling
> laws and capabilities of various atomic level modeling methods, and

I have a number of comments on that thread; however, I'm physically dead,
and it's not the weekend. But I'm going to follow up on this.

> projections of how large a system they'll be able to model in 2050
> (none of them reaching the level of an assembler).

Arguably, we can already model a large part of a typical assembler on a
large box. The crunch is not the issue; the methods are.

By 2050 we should comfortably model the native time domain of a ~10^10-atom
assembler in an interactive realtime model (ns of system time in s of user
time).
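As a sanity check on that claim, a back-of-envelope flop count (the
timestep and per-atom cost are my assumptions, not numbers from this
thread):

```python
atoms = 10**10                 # assembler size from the post
steps_per_ns = 10**6           # assuming a ~1 fs MD timestep
flops_per_atom_step = 500      # assumed cost of one short-range force pass
total = atoms * steps_per_ns * flops_per_atom_step
print(f"{total:.1e} flops per ns of system time")   # 5.0e+18
```

That is, a sustained few-exaflops machine if a ns of system time is to
cost seconds of wall-clock, which is aggressive but not absurd on a
~50-year horizon.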



This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:58:25 MST