From: John K Clark (johnkc@well.com)
Date: Sat Jun 27 1998 - 08:07:58 MDT
On Thu, 25 Jun 1998 Michael Nielsen <mnielsen@tangelo.phys.unm.edu> wrote:
>The reversible architectures I am aware of have higher fundamental
>error rates than existing irreversible architectures, and will
>likely require considerable error correction. Error correction, in
>turn, involves the dissipation of heat, by elementary thermodynamic
>arguments. Essentially, error correction is a procedure for
>lowering the entropy of a physical system (the computer). The
>entropic cost is paid in heat dissipated into the environment.
There may be engineering reasons why, with current technology, reversible
computers would make more errors than the irreversible type, but I don't see
any fundamental physical reason why that should be so. Consider a computer
with the smallest possible memory, just one bit. The computer can be in two
states, zero and one. Now record something into the computer's memory, say a
one. You have reduced the number of states the machine can be in, from 2 to 1
in this case, and because the entropy of an object is proportional to the
logarithm of the number of ways the parts of the object can be rearranged
without changing its macroscopic attributes, that means you have reduced the
entropy of the machine as well.
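A quick back-of-the-envelope sketch in Python puts a number on that, using
Boltzmann's formula S = k*ln(W), where W is the number of states the machine
can be in:

    import math

    k = 1.381e-23                # Boltzmann's constant, J/K

    # Before recording: the bit could be 0 or 1, so W = 2.
    # After recording a one: the bit can only be 1, so W = 1.
    S_before = k * math.log(2)
    S_after  = k * math.log(1)   # ln(1) = 0

    print(S_before - S_after)    # ~9.57e-24 J/K of entropy removed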
According to the second law of thermodynamics you can locally reduce the
entropy of something, but it takes energy to do it. The absolute minimum
energy it takes to erase one bit of information and record something different
in its place is kT*ln(2), where k is Boltzmann's constant, 1.381 x 10^-23 J/K,
and T is the temperature of the computer in kelvins. This is not a lot of
energy by everyday standards, but it is free energy that must be dissipated
as heat if you want to erase one bit of information. With reversible
computing, that is, where the output uniquely determines the input, nothing is
erased in a computation, so you don't have this energy loss, and a logical
operation can be performed with an amount of energy that is arbitrarily close
to zero.
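For a feel of the scale, here is the same arithmetic in Python at an assumed
room temperature of 300 K (my choice of T, nothing in the argument depends
on it):

    import math

    k = 1.381e-23                # Boltzmann's constant, J/K
    T = 300.0                    # assumed room temperature, kelvins

    E_min = k * T * math.log(2)  # Landauer limit per erased bit
    print(E_min)                 # ~2.87e-21 J, about 0.018 eV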
Landauer, Bennett and Merkle have shown that with reversible computing the
amount of energy needed to make a calculation can be made arbitrarily small
by slowing down the calculation a little. Even a small reduction in speed
helps a lot with energy saving: the energy dissipated per operation falls
linearly with the speed, so the power dissipation falls as the SQUARE of the
speed. More recently, in the June 28 1996 issue of Science, Landauer showed
(in an interesting but impractical way) how you can also communicate, and not
just compute, information with an arbitrarily small amount of energy if you
are willing to slow things down.
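To see how strong that square law is, here is a toy Python sketch; the
proportionality constant c is made up purely for illustration, only the
scaling matters:

    # Assumption: in reversible (adiabatic) logic the energy dissipated per
    # operation grows linearly with clock speed f, so E = c*f and the power
    # P = E*f = c*f^2 falls as the square of the speed as you slow down.
    c = 1e-30                        # made-up constant, J*s, illustration only
    for f in (1e9, 1e8, 1e7):        # clock speeds in Hz
        energy_per_op = c * f
        power = energy_per_op * f
        print(f"{f:9.0e} Hz  {energy_per_op:.1e} J/op  {power:.1e} W")

Slowing down by a factor of 10 cuts the energy per operation by 10 but the
power dissipated by 100.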
None of this is an important consideration for chip designers; today's logic
circuits are so huge, and they deal in such a tiny amount of information,
that the effect is utterly trivial. Chip designers can safely ignore this;
nanotechnology engineers cannot. In the early days there were some
embarrassing incidents where people thought they were designing a
nanocomputer, but when you looked closely at the design and the amount of
heat the thing would give off if operated at the design speed, you realized
that what they were really designing was a first-rate high explosive. They do
a lot better now.
John K Clark johnkc@well.com