Re: BIOLOGY: Mouse and Human Genome similarity

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Fri Dec 20 2002 - 12:14:51 MST


On Fri, 20 Dec 2002, Charles Hixson wrote, in a very interesting observation:

> Solution: Have a complex of cells that combines multiply-duplicated DNA
> strands with extra error checking. Instead of "I tell you twice!"
> (Paired nucleotides.), use an "I tell you 4 times!" (or whatever number
> is secure enough). This gives you a reference standard. You don't want
> to have too many of these cells, as they will be expensive. But they
> are your Read-Only library. If you have reason to suspect an error, you
> check it against the library.
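What Charles is describing amounts to majority voting over redundant
copies. A toy sketch in Python (the four copies and the per-base vote are
just my reading of his scheme, not anything biology actually does):

from collections import Counter

def repair_by_vote(copies):
    """Given several redundant copies of the same sequence, return a
    consensus where each position takes the majority base.
    Assumes all copies are the same length."""
    consensus = []
    for position in zip(*copies):   # one position across all copies
        base, _count = Counter(position).most_common(1)[0]
        consensus.append(base)
    return "".join(consensus)

# "I tell you 4 times!": four copies, one of which has picked up an error.
copies = ["ACGTACGT", "ACGTACGT", "ACGAACGT", "ACGTACGT"]
print(repair_by_vote(copies))       # -> "ACGTACGT"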

It isn't well known, but E. coli can divide every 20 minutes even though
DNA polymerase cannot copy a 4+ megabase genome in 20 minutes. So to
divide at its fastest rate, E. coli must start new rounds of replication
before the previous ones finish, and in its fastest growth mode it always
carries 2-4 genome copies per cell! It is suspected that one of the
reasons for the radiation resistance of Deinococcus radiodurans is its
very slow growth rate and/or the fact that it grows in tetrads (groups
of 4). So it may either be maintaining multiple genome copies in each
cell (perhaps more than E. coli), or it might even have a mechanism for
passing genome copies between cells. In any case it seems to have a very
good mechanism for the repair of double strand breaks and/or for
"homologous recombination" (where you do the repair based on similar
sequences -- something much easier if you happen to have multiple copies
of your genome on hand).
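
For those who want the arithmetic, a back-of-the-envelope sketch (the
~1000 bases/sec fork speed and the 4.6 Mb genome size are ballpark
textbook figures, not exact values):

# Why fast-growing E. coli must carry multiple genome copies.
genome_size_bp   = 4.6e6    # E. coli genome, ~4.6 megabases (ballpark)
polymerase_rate  = 1000     # bases/sec per replication fork (ballpark)
forks_per_origin = 2        # replication runs bidirectionally from one origin

replication_time_min = genome_size_bp / (polymerase_rate * forks_per_origin) / 60
doubling_time_min    = 20   # fastest E. coli doubling time

print(f"One full round of replication: ~{replication_time_min:.0f} min")
print(f"Doubling time: {doubling_time_min} min")
# ~38 min > 20 min, so new rounds of replication must begin before the
# previous round finishes; daughter cells inherit chromosomes that are
# already partly re-copied, i.e. 2-4 genome equivalents per cell.
print(f"Overlapping rounds needed: ~{replication_time_min / doubling_time_min:.1f}")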

> Also you make sure that messages are
> signed on the way to and from the library cells, so that both the
> sender/recipient and the reference target can be verified.

There isn't an example of this in biology that I'm aware of.
Certainly if we find one it will be a significant discovery!
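
For comparison, here is what "signing a message to the library" looks
like on the computer science side -- a minimal sketch using a shared
secret and an HMAC; the key and the message are placeholders, not
anything with a known biological counterpart:

import hmac, hashlib

SHARED_KEY = b"library-cell-secret"  # placeholder key shared by sender and library

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)

request = b"check sequence at locus 12345"
tag = sign(request)
print(verify(request, tag))              # True: message came from a key holder
print(verify(b"tampered request", tag))  # False: modification is detected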

> How to
> correct the problem is a bit more difficult, but fortunately evolution
> has long been working on that. You might also need to give each cell a
> URI (how many bits would be needed?)

I'm too lazy right now to look it up -- but the cell numbers are
documented in Nanomedicine VI. The number that I think I use for the
count of bacteria on or in the human body is 40 trillion -- the number
of human cells in the body is smaller (and most of those are red blood
cells).
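
For Charles's "how many bits?" question the arithmetic is easy (the
40 trillion bacteria figure is the one above; the ~30 trillion human
cells is my own ballpark, not a number I've checked):

import math

bacterial_cells = 40e12   # ~40 trillion bacteria on/in the body (figure above)
human_cells     = 30e12   # human cells are fewer; rough ballpark assumption
total_cells     = bacterial_cells + human_cells

bits_needed = math.ceil(math.log2(total_cells))
print(f"{bits_needed} bits")   # ~46 bits, so a 64-bit identifier is ample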

> You will notice that 1) many of these techniques are in use in the
> internet, and 2) they are much easier to design than to evolve. I could
> easily design a technique for assigning each cell a URI. Evolving it is
> quite a different matter.

Well, that is because design is "intelligent" while evolution is not.
That was my point in the panspermia discussion: preserving pre-evolved
information across interstellar distances may be much more valuable than
having to create it all over again.

> [Remember: the URIs need to be signed by a validating
> third party, so each cell division would require a midwife]).

Now you are really going over the top from the perspective of how
biology currently works (not to say that there couldn't be biology
out there someplace that works this way -- we just haven't discovered
it yet).

> The fact that this is a simple design problem doesn't make it as simple
> thing to evolve. E.g., URIs don't become useful until after utilities
> are built that assume their existence.

Quite true -- currently (though this is speculative) it seems likely that
protein turnover times are not tuned to the optimal rate. I.e., proteins
degrade over time, particularly due to deamidation, and there appear to be
"molecular clocks" that "label" proteins and signal that some of them are
likely to have become defective and should therefore be recycled. But it
seems *highly* unlikely that this has been optimized for all ~30,000 genes
in most higher organisms' genomes. The usefulness of URIs comes long
after you manage to optimize the basic maintenance of the code (which is
why there are 130+ proteins involved in DNA repair -- and why protein
maintenance probably involves several hundred more proteins). So the
scenario that Charles is proposing is *very* far along in the evolutionary
tree, unless we impose designs (derived from computer science) on biology.

Robert


