From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Thu Dec 02 1999 - 03:54:29 MST
On Wed, 1 Dec 1999, Jeff Davis wrote:
> For Robert Bradbury and others,
>
> I've been following the discussion about ETCs--enjoying it thoroughly--and
> just finished reading your (RB's) paper on Matrioshka
> brains--delightful--and have a coupla questions.
Well, I'm flattered, but bear in mind that the paper is more a
collection of notes than an organized formal presentation.
> You (RB) have often mentioned the light speed limit on internode
> communication. The larger the MB gets the slower the entire brain thinks.
Yep, getting larger has diminishing returns on investment.
> Wouldn't that suggest a range of acceptable values centered around that
> point with the best (as judged by the SI community) trade-off between speed
> of thought--which I equate with rate of experience, how much living you
> accomplish per fixed unit of time--and complexity of thought-- which I
> think of as "depth" of experience (intelligence, sophistication?)
You probably have two interesting reference standards -- the thought
rate for "survival" in your environment, and the thought rate for
"survival" among your peers. I suspect the depth/complexity required
for bare survival is lower than that required for inter-social interactions.
However, there is little reason to think inter-social interactions
are *required* for survival at all (if solitary SIs can resolve the problem
of what to think about and/or how to keep themselves entertained).
>
> Even granting that different SI communities could have different "best"
> trade-off points, wouldn't any such point suggest a stabilization of
> consumption of local cosmic resources? (Of course, expansion and increased
> consumption would continue to the extent that SI communities "spawned" new
> SIs.)
Difficult to predict. The universe evolves at a very slow rate relative
to SI thought capacity. SIs also have the ability, in contrast to
bio-creatures, to put themselves into extended "suspend" mode
(where energy is required only to repair damage from cosmic rays
and similar hazards). One has to recontextualize "reproduction/spawning"
in light of the fact that reproduction is simply nature's way of creating
variants on which natural selection can act. Once you have internal, virtual
simulations, the external reproduction activities may be irrelevant.
>
> Second, (and here I think I'm probably gonna put my foot in it) isn't there
> going to be a type of "computronium" which will operate in a
> superconductive regime?
Yep, absolutely, though the regime below the cosmic microwave background
temperature (as was discussed in the Drexler/Merkle paper on single-electron
computation) may be highly undesirable from an energy-efficiency
standpoint, because you then have to pay to refrigerate against the
~2.7 K background.
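A quick back-of-envelope on that refrigeration penalty, in Python (the
target temperatures are illustrative picks of mine). Carnot gives the
minimum work to pump heat from T_cold up to the ~2.7 K background:

  # Minimum (Carnot) work to lift 1 J of heat from T_cold to T_hot:
  #   W = Q * (T_hot / T_cold - 1)
  T_CMB = 2.7  # K, cosmic microwave background temperature

  def carnot_work_per_joule(t_cold, t_hot=T_CMB):
      """Ideal refrigeration work per joule of heat removed at t_cold."""
      return t_hot / t_cold - 1.0

  for t_cold in (1.0, 0.1, 0.01):
      print(f"T_cold = {t_cold:5.2f} K: >= "
            f"{carnot_work_per_joule(t_cold):6.1f} J per J of heat")

The Landauer savings from running colder scale only linearly in T, while
the refrigeration bill grows as 1/T, so at best you break even at the
Carnot limit and any real refrigerator loses outright.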
> Won't that "resistanceless" condition make it
> possible to function virtually without generation of entropy/waste heat (I
> have heard of a fundamental principle of computing/information theory that
> assigns a minimum entropy per op/(state change?), but I have also heard of
> some other theory of reversible or quantum computation which suggests a
> means to circumvent or drastically reduce this minimum entropic cost;
> though such theories are waaay over my head.)
The minimum cost per erased bit is Landauer's limit; the related ceilings
on computation rate and information density are the Bremermann & Bekenstein
bounds. (If you want access to the relevant papers, send me an off-list
request.) The problem has relatively little to do with "resistanceless"
computation and much more to do with the cost of erasing a bit.
Interestingly enough, you can do "ultra-cheap" computing so long as you
don't erase bits. This is what gives rise to the reversible computing
methods (that multiple groups are working on).
Bottom line is that you pay the price somewhere. If you have fully reversible
computing, you need more circuits (and pay the price in state storage
and in propagation delays as you unwind the calculations). If you have
irreversible computing, you pay the price up front in bit erasure (where
the heat dissipation limits your throughput).
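For scale, here is a minimal Landauer-limit calculation in Python (the
10^30 erasures/second rate is an arbitrary assumption of mine):

  import math

  K_B = 1.380649e-23       # Boltzmann constant, J/K
  ERASURES_PER_SEC = 1e30  # assumed bit-erasure rate, for illustration

  def landauer_joules_per_bit(temp_kelvin):
      """Minimum heat dissipated per irreversibly erased bit: kT ln 2."""
      return K_B * temp_kelvin * math.log(2)

  for temp in (300.0, 77.0, 3.0):
      e_bit = landauer_joules_per_bit(temp)
      print(f"T = {temp:6.1f} K: {e_bit:.2e} J/bit, "
            f"{e_bit * ERASURES_PER_SEC:.2e} W at 1e30 erasures/s")

Running colder cuts the erasure bill linearly in T, which is one reason
the cold outer layers of an M-Brain are attractive.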
>
> I suspect that you and Eric D. have already factored this into your
> thinking--I mean it IS obvious,...isn't it?
Yep, Eric's computers are "reversible", meaning you don't pay the price
of erasing bits, but you do pay the very low price of "friction".
> With such a superconducting
> design I envision a spherically symmetric (for optimal density) array (I
> would call it an MB except that it's more like a solid planet than a
> rotating, concentric, orbiting array.
In a normal M-Brain, the SC levels are some of the very outermost levels,
with temps below the LN2 coolant range (unless very-high-temp SCs are
discovered). If you put "solar power" levels into such a computer network,
the radiators have to be out beyond the orbits of the outer planets (so you
are much, much bigger than a "planet"). Of course you can always reduce the
input power to some much smaller level and have collections of SC planetoids
orbiting a star, irradiating each other with their waste heat. This is
not a sub-optimal structure if the nature of your problem is one that
requires maximizing communication bandwidth (while accepting
a substantial reduction in aggregate processing power). Since the
architecture of the human brain suggests that communication may have
greater importance than actual computation, this model is not completely
unrealistic.
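To see why the radiators end up beyond the outer planets, here is a
black-body sizing sketch in Python (the shell temperatures are
illustrative picks of mine):

  import math

  SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
  L_SUN = 3.83e26      # solar luminosity, W
  AU = 1.496e11        # astronomical unit, m

  def shell_radius_for_temp(power_watts, temp_kelvin):
      """Radius at which a spherical shell radiates `power_watts` at T."""
      area = power_watts / (SIGMA * temp_kelvin ** 4)  # required area, m^2
      return math.sqrt(area / (4 * math.pi))

  for temp in (77.0, 30.0, 10.0):  # LN2 range down to colder SC regimes
      print(f"T = {temp:5.1f} K -> shell radius ~ "
            f"{shell_radius_for_temp(L_SUN, temp) / AU:7.1f} AU")

At 77 K the full solar output already pushes the radiating shell out to
roughly Neptune's orbit; colder superconducting regimes push it far beyond.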
> Hmmm. I guess you could have the
> full-star-surrounding collection of these, but the central brain must be
> kept superconducting cool.)
You have to keep all of them SC-cool, which will require fairly large
orbital distances from the star (depending on individual power consumption).
> To whatever degree there was heat generation, I
> would see the array as porous and immersed in liquid hydrogen or
> helium--the former is more abundant, the latter the natural by-product of
> the energy source that runs the system. The coolant would naturally take
> advantage of superfluidity to carry away the waste heat frictionlessly. (Is
> hydrogen capable of superfluidity?)
H doesn't do superfluidity; He does. I've considered this architecture.
It only works (for planetoid-sized structures) when the power inputs are
much less than Sol-level power outputs.
If you go through Nanosystems (and the fluid dynamics) very carefully, you
discover that circulating cooling fluid gets expensive. My indirect
conversations with E.D. on this subject seem to indicate that even
in a 1 cm^3 nanocomputer consuming 10^5 watts, some large fraction
(10's of %) of the power is going into circulating cooling fluid.
In a planet-sized computer, you would be putting 99.999+% of your
power into cooling circulation. You would have to have a *big*
benefit in reduction of communications costs to justify putting that
much power into coolant circulation.
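A toy scaling model in Python makes the point (this is my own calibration
against the tens-of-% figure above, not E.D.'s analysis; it just assumes
the pressure drop, and hence the pumping power at a fixed heat load, grows
linearly with the coolant path length):

  F0 = 0.30  # assumed pumping-power/heat-load ratio at the 1 cm reference
  L0 = 0.01  # reference coolant path length, m (the 1 cm^3 nanocomputer)

  def pumping_fraction(path_length_m):
      """Share of total dissipation spent circulating coolant (toy model)."""
      ratio = F0 * (path_length_m / L0)  # pump power relative to heat load
      return ratio / (1.0 + ratio)

  for name, length in [("1 cm nanocomputer", 0.01),
                       ("10 m block", 10.0),
                       ("planetoid, 1000 km deep", 1e6)]:
      print(f"{name:24s}: {pumping_fraction(length):.6%} to pumping")

With any model in this family, the pumping fraction saturates toward 100%
long before you reach planetary depths.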
There is a small caveat here -- E.D. is not assuming superfluid coolants.
He is, however, assuming *very* efficient phase-change coolants. I know
of no analysis that combines the interesting properties of superfluid
He with solid->liquid H phase change. However, since H melts (~14 K) well
above the boiling point of liquid He (~4.2 K), it is going to take a strange
pressure environment for this, if it is even possible.
>
> As I was conjuring up this coldest of "dark matter" hydrogen(or helium)
> super-giant planets... [snip]
> then the pressure of the coolant at each layer could be isolated and
> prevented from building up to critical at the center. This might permit a
> larger upper bound on the size.
This is an interesting idea and one which I will need to think more about
(most of my M-Brain thinking has been confined to the transition stage,
when civilizations start out trying to figure out how to fully utilize
the energy their pre-existing star is generating).
My suspicion however is that you are still going to have a problem with
the power costs of circulating cooling fluid. The densest computer nodes
require external cooling; external cooling requires large radiators,
which dictate large inter-node distances (increasing propagation delays).
You can compactify this somewhat, but you pay a price in the cost
of circulating the coolant from the internal nodes to the external
radiators.
>
> I would not mention this at all, except that all previous discussions of J
> and M brains have settled upon heat dissipation as one of the prime
> controlling factors in architecture. All the designs I've heard of have
> envisioned operation of the brain at blistering heats, mostly in close
> proximity to the power source/star.
Not strictly true. Anders' original paper distinguished between cold-slow
(error-free) and hot-fast (but error-prone) architectures. The M-Brain
architecture leaves the star intact but adds layers that range from
very hot to very cold (limited by the matter available for radiator mass).
> This, with the single exception of your
> discussion of MBs in globular clusters, which envisions remote siting, and
> which would also be ideal for a superconducting-regime array (which would
> require its power absorption and waste heat emission be conducted entirely
> at its surface, whether in a GC or elsewhere.)
Yes, externally powered MBs have much more control over how much power
to absorb and radiate. They can form denser communications nodes at their
center (because there is no star there). The part I haven't looked at
is the cost of absorbing external power and "pumping" it to the innermost
nodes. There must be some cost to pumping power in and heat out, but
these may be offset by reductions in internal communications delays.
Ultimately I think the best architecture depends a lot on the type of problem
you are "thinking" about.
>
> Perhaps I'm missing something, but isn't the superconducting regime the way
> to go?
It is part of the equation. But since the real cost is the tradeoff between
communications delays in reversing computations and the heat removal from
irreversibly erasing bits, superconducting computers don't buy you very much.
>
> How would this affect the crucial and fascinating question of the
> "visibility" of SI K1, 2, & 3 ETCs?
>
K1s are visible. The K1-to-K2 transition happens very quickly, and K2s
get "colder" and less visible. How cold depends on how much matter is
available and how old they are (older gets colder, because cold outer
layers get dedicated to long-term non-computational storage). Complete K3s
that have converted galaxy masses into computers are relatively
small (compared to galaxies), but are still relatively cold.
Depending on their distance, they should look like very cold
point sources.
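Wien's displacement law gives a feel for where such sources would peak
(the shell temperatures here are illustrative guesses of mine):

  WIEN_B = 2.898e-3  # Wien displacement constant, m*K

  for temp in (300.0, 30.0, 10.0, 4.0):
      # Peak wavelength of a black body at this temperature
      peak_um = WIEN_B / temp * 1e6
      print(f"T = {temp:6.1f} K -> emission peaks near {peak_um:7.1f} um")

A 10 K shell peaks around 300 micrometres, deep in the far-infrared/
submillimetre, which is why cold K2s and K3s would be easy to overlook.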
Robert