Computronium Limits [was ASTRO: Dark Matter problem gets worse!]

From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Mon Jan 31 2000 - 06:30:11 MST


On Mon, 31 Jan 2000, Charlie Stross wrote:

> I'm currently participating (in a desultory sort of way) in a thread
> on rec.arts.sf.written, about Moore's law and ways round it.

The easiest way to move "through" Moore's law is to move from 2D
to 3D construction. That extends things quite a bit in terms of
computational throughput per gram or per unit volume of matter. Of
course you then have to go to reversible computing and start dealing
more "seriously" with heat dissipation (which is the big problem
for MBrains).
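To see why, a quick cube-square sketch; the per-layer power density
and layer pitch here are just illustrative numbers, not real chip
specs:

# Why 3D stacking forces you to take heat removal seriously:
# volume (and dissipation) grows much faster than the surface
# area you have to push the heat through.

side_cm       = 1.0     # a 1 cm cube of stacked logic (assumed)
layer_pitch   = 10e-4   # 10 micron layer spacing, in cm (assumed)
power_per_cm2 = 10.0    # W/cm^2 dissipated by each 2D layer (assumed)

layers      = side_cm / layer_pitch             # ~1000 layers
total_power = layers * side_cm**2 * power_per_cm2   # ~10 kW
surface_cm2 = 6 * side_cm**2                    # only 6 cm^2 of surface

print("layers: %.0f" % layers)
print("total dissipation: %.0f W" % total_power)
print("heat flux through the surface: %.0f W/cm^2" % (total_power / surface_cm2))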

> [ First, the kick-off -- how to make smaller atoms ]
>
> ><nigel.arnot@kcl.ac.uk> wrote:
> >
> >>Therefore, the scaling law is reaching its limits. Extrapolating the
> >>current rate of progress, we hit the wall circa 2015. Economic factors
> >>may get in the way before then, but even if they don't you can't engineer
> >>smaller atoms -- ever!

Well, according to my charts we don't "really" hit the limits until
2030-2040 (when we are dealing with single atom layers/wires). But
if we get nanoassembly around 2010, 2015 might be right.
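The arithmetic behind that sort of chart is roughly the following;
the starting feature size, the halving period and the atomic "floor"
are just assumed round numbers:

import math

start_year = 2000
feature_nm = 180.0   # roughly state of the art circa 2000 (assumed)
atom_nm    = 0.25    # ~ one lattice spacing; the hard floor (assumed)

halvings = math.log(feature_nm / atom_nm, 2)   # ~9.5 halvings left
for years_per_halving in (2.0, 3.0, 4.0):
    # the ~2015 wall corresponds to the faster pace,
    # 2030-2040 to the slower ones
    print("halve every %.0f yr -> wall around %d" %
          (years_per_halving, start_year + round(halvings * years_per_halving)))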

> >Start by replacing silicon doped with impurities with carbon doped with
> >impurities. Smaller atoms!

There is work being done on using diamond as a semiconductor material.
The problem is "building" the diamond without nanoassembly.

> >
> >If that isn't enough, you need to replace the electrons in your carbon
> >atoms' outer orbitals with muons. Buggered if I know how you'd stabilize
> >them for long enough, but their relatively high mass (several hundred times
> >that of an electron) would shrink the orbitals right down so that the
> >density of your muonized-carbon crystals would rival that of degenerate
> >matter -- but it would be structurally and electronically similar to
> >carbon.

Hmmmm.... One of our physics experts is going to have to explain this
to me before I agree. In "classical" (gravitational) orbital mechanics,
the size of an orbit depends largely on the mass being orbited
(think of the nucleus as the "sun" and the electrons as the "planets"),
not on the mass of the orbiting body. So while I can accept that muons
somehow end up "orbiting" closer than electrons, to my knowledge
electron/muon "orbits" aren't governed by gravity in the first place.
Any comments???
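For reference while someone checks me: the textbook (Bohr-model)
scaling has the orbital radius going inversely with the orbiting
particle's mass. It's Coulomb attraction plus quantized angular
momentum, no gravity involved. A rough sketch, ignoring reduced-mass
and finite-nucleus corrections:

# Bohr-model radius: a = hbar^2 / (m * k * Z * e^2), i.e. inversely
# proportional to the orbiting particle's mass m.

hbar = 1.0546e-34    # J*s
k_e  = 8.988e9       # Coulomb constant, N*m^2/C^2
e    = 1.602e-19     # C
m_e  = 9.109e-31     # kg
m_mu = 206.77 * m_e  # muon mass

def bohr_radius(m, Z=1):
    return hbar**2 / (m * k_e * Z * e**2)

print("electron around carbon (Z=6): %.2e m" % bohr_radius(m_e, 6))
print("muon around carbon (Z=6):     %.2e m" % bohr_radius(m_mu, 6))
# the muonic "atom" comes out ~200x smaller in radius, hence the
# claimed near-degenerate densities (~200^3 in volume)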

> >
> >Note that this entails waving only one magic wand -- long-lived muons --
> [snip]
 
> >From: shocklee@princeton.edu (Paul D. Shocklee)
> >
> >You can stabilize muons by Fermi blocking. (This is how neutrons in
> >neutron stars are stabilized.) All this requires is that the muons be
> >confined inside a degenerate Fermi gas of electrons, with a Fermi level
> >higher than the kinetic energy of an electron produced in muon decay.
> >[snip2]
> >So, short of a neutron star (or stable strange matter), you're never going
> >to stabilize muons in a lab.
Well, I just happen to have Chapter 18 from Xenology sitting on my
desk [Alien Weapons...] and according to *it*, you can stabilize
muons and pions by increasing their energy. Bump a 1 MeV pion
(which only travels about 1 m before decaying) up to 10 TeV and its
range before decay becomes 540 km. My suspicion (since the text
isn't quite clear) is that this comes from relativistic time
dilation (decay times, in our frame, slowing down at near-c
velocities). So we come down to making the muons/pions orbit the
nucleus *really* fast (which I guess means you want to make the
nucleus really heavy). I would guess that gets into an entirely
new set of problems.
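A quick sanity check on the Xenology numbers, treating it as plain
special-relativistic time dilation (pion lifetime and rest energy
taken from the standard tables):

c        = 2.998e8     # m/s
tau_pi   = 2.603e-8    # s, charged pion rest lifetime
m_pi_MeV = 139.6       # pion rest energy in MeV

def decay_length_m(kinetic_MeV):
    gamma = 1.0 + kinetic_MeV / m_pi_MeV     # total energy / rest energy
    beta  = (1.0 - 1.0 / gamma**2) ** 0.5
    return beta * gamma * c * tau_pi         # mean lab-frame path length

print("1 MeV pion:  %.2f m" % decay_length_m(1.0))            # ~1 m
print("10 TeV pion: %.0f km" % (decay_length_m(1.0e7) / 1e3)) # ~560 km,
# the same ballpark as the book's 540 km figure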

Alternatively you could use electrons and positrons orbiting each
other. I believe this has been made in the lab (positronium), though
I'm unsure of the dimensions. (If I have the reduced-mass argument
right, positronium actually comes out about twice the size of a
hydrogen atom, so it probably doesn't help with density.)

Ultimately, though, I think you want to get rid of the nucleus entirely
and do your computing with things that are massless (photons) or
nearly so (neutrinos). That's the only way you are going to get
the density *really* high and the communications delays very low.

Unfortunately posts by Eugene seem to suggest to me that controlling
this stuff is going to be difficult.

> Now, back to Charlie:
>
> Which leads me to daydream aloud on the Extropians list:
> ...
> Lightspeed signal propagation delays should be a lot less significant,
> too, everything being so much closer together. (As far as circuit design
> goes, it's a bit like the consequences of raising the speed of light.)

True. But the ultimate minimization of the propagation delays would
utilize light as the information carrier.
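For scale, some one-way light times (nothing assumed here beyond the
distances themselves):

c  = 2.998e8    # m/s
AU = 1.496e11   # m

for label, d_m in (("1 AU (solar-orbit MBrain scale)", AU),
                   ("Jupiter radius",                  7.15e7),
                   ("Earth radius",                    6.37e6),
                   ("1 meter of computronium",         1.0)):
    print("%-32s %.3g s one-way" % (label, d_m / c))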

>
> I don't have adequate references to hand, but would a red dwarf or black
> dwarf also suffice?

A red dwarf is a very small, long-lived star made of normal matter, so it
clearly doesn't qualify. The literature is fuzzy re: "black dwarfs",
which may mean either "brown dwarfs" (unignited stars) or burned-out
"white dwarfs" (remnants of stellar evolution). The brown dwarfs
are largely normal H/He, and the post-white dwarfs are layered stellar
cores, probably with a lot of ordinary C/O and maybe some N/Ne/He under
an H atmosphere. The white dwarf interior is electron-degenerate, but in
both cases the material is still ordinary nuclei, nowhere near nuclear
density.

So, I would say, no, they do not suffice.

> If so, that environment might be accessible to assembler-built
> tools controlled by an MB. Instead of needing to employ starlifting to
> get extra material for use in an MB, the MB could directly colonise the
> stellar remnant.

There is a good argument that white dwarfs are the "gold" in the hills
of the universe (due to their high C concentrations). The problem is that
you have to import H to provide energy. The same is true of
neutron stars... Gee, the matter density is nice, but where do
you get the energy from?

I'm not sure whether I ever asked this, or whether anyone responded,
but what a photon does when it strikes a neutron star remains an
interesting question to me.... As does the problem
of the "lifetime" of neutron stars. While the neutrons in the
center may be constrained from decaying, I doubt the same is true
for neutrons on the surface. That would suggest that neutron
stars should evaporate.

> The point here is that an MB that extends all the way down _into_ the
> surface of a black dwarf has access to vastly more structured matter
> for computation purposes -- 10 to 1000 Jovian masses doesn't seem
> unreasonable -- and also doesn't suffer from the same sort of lightspeed
> propagation lag as an MB with dimensions measured in AU's.

You *always* have the lightspeed limit, but I agree that removing the
star at the center of an MBrain and replacing it with computronium
produces the highest-density computing capacity. The problem then
becomes: how do you pump energy in and get the heat out?
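To put a number on the heat side, a simple blackbody (Stefan-Boltzmann)
estimate of what a planet-sized outer surface can reject; the radius
and radiator temperatures are just illustrative choices:

sigma = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
R     = 7.15e7    # Jupiter radius in m (illustrative size)
area  = 4 * 3.14159 * R**2

for T in (100.0, 300.0, 1000.0):
    print("radiator at %4.0f K rejects about %.1e W" % (T, sigma * T**4 * area))

# For comparison the Sun puts out ~3.8e26 W, so even a hot planet-sized
# surface falls orders of magnitude short of star-scale power handling.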

As noted in Anders' recent Jnl of TH paper, in the Appendix, under
Jupiter Brains...
  "The main limiting factors in this design is the availability of
   carbon, the material strength of diamond and the need of keeping
   the system cool."

The carbon can be harvested from white dwarfs, and the material-strength
limit of diamond can be dealt with using momentum support (from "The Cuckoo"),
but the problem of removing the heat and keeping the system cool
*doesn't* go away. (And Anders carefully avoids the cooling costs...)
Also, if you remove sources of internal power generation, you have the
problem of beaming power "into" the computronium.

> So it can think _faster_ than a normal MB, and probably has more
> resources at its disposal -- in return for being stuck at the bottom
> of a very deep gravity well.
>
> Is this feasible in principle, or am I missing something?
>
Nope, not missing anything. Being at the bottom of a gravity well
doesn't matter much to a being that presumably navigates itself
"en masse" around the galaxy and communicates with massless photons.

Anders does discuss the computational rates of nuclear matter
in his Neutronium brain and points out the problems of increasing
the matter density too much.

For reference purposes, a Matrioshka Brain of my design probably falls
between Anders' Dyson brain "Uranos" and Neutronium Brain "Chronos"
with an emphasis on power efficiency and heat removal (vis-a-vis
some of the optimizations Anders makes).

Charlie's suggested sun-less MBrain with comp-muon-onium is an enhancement
in the evolution of MBrains towards NBrains. We shall henceforth
call this a muonium-based MBrain (mMBrain)...

Finally, it's worth noting that Xenology gives the decay time of a muon
as 2.2 microseconds (at rest, presumably), and you can do a lot
of computation in 2.2 mu-sec. The problem is to develop a system that
regenerates the mu-matter as fast as it decays.
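To put "a lot of computation" in perspective, a rough count; the clock
rate and device size are assumptions purely for illustration:

tau_mu   = 2.2e-6    # s, muon rest lifetime
c        = 2.998e8   # m/s
clock_hz = 1e9       # assume a 1 GHz clock
gate_m   = 1e-7      # assume a 100 nm device; muonic matter could be far smaller

print("clock cycles per muon lifetime: %.0f" % (tau_mu * clock_hz))
print("light-crossings of one device:  %.1e" % (tau_mu * c / gate_m))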

Robert


