From: John K Clark (johnkc@well.com)
Date: Sat Sep 07 1996 - 22:50:17 MDT
On Fri, 6 Sep 1996 Eugene Leitl <Eugene.Leitl@lrz.uni-muenchen.de> wrote:
>you refer to the estimations in the retina chapter of "Mind
>Children"
Yes.
>He [Moravec] made lots of arbitrary assumptions about neural
>circuitry collapsibility
I don't think they were arbitrary, but I grant you some things were just
educated guesses. It turns out that he almost certainly overestimated the
storage capacity of the human brain. As for its information processing
capacity, he could be off by quite a lot and it would change the time the
first AI is developed by very little. The speed of computation has increased
by a factor of 1000 every 20 years. It might not continue at that frantic
pace but ...
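To put that rate in perspective, a quick back of the envelope calculation
(a sketch in Python, assuming the 1000x-per-20-years figure holds exactly):

    import math

    # A 1000x speedup every 20 years implies computing power doubles
    # roughly every two years.
    years_per_1000x = 20
    doubling_time = years_per_1000x * math.log(2) / math.log(1000)
    print(f"doubling time: {doubling_time:.2f} years")  # ~2.01 years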
>Considering about 1 MBit equivalent for storage for a single
>neuron
I think that's thousands of times too high.
>8 bits/synapse,
A synapse may well be able to distinguish between 256 strength levels,
perhaps more. Moravec said 10 bits/synapse, and in 1988 when he wrote his
book it was reasonable to think that if you multiplied 10 bits by the number
of synapses in the brain you would get a good estimate of the storage
capacity of the brain. It is no longer a reasonable assumption.
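For the record, here is the arithmetic behind a Moravec-style estimate
(a sketch; it assumes every synapse stores an independent value, which is
exactly the assumption the LTP result below undermines):

    # Bits per synapse times the number of synapses in the brain.
    bits_per_synapse = 10
    synapses = 10**14
    total_bits = bits_per_synapse * synapses    # 10^15 bits
    total_bytes = total_bits // 8               # 1.25 x 10^14 bytes
    print(f"about {total_bytes / 1e12:.0f} terabytes")  # ~125 TB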
The most important storage mechanism of memory is thought to be Long Term
Potentiation (LTP). The theory is that memory is encoded by varying the
strength of the 10^14 synapses that connect the 10^11 neurons in the human
brain. It had been thought that LTP could be confined to a single synapse,
so that each synapse contained a unique and independent piece of memory; we
now know that is not true. In the January 28, 1994 issue of Science, Dan
Madison and Erin Schuman report that LTP spreads out to a large number of
synapses on many different neurons.
>a 10 k average connectivity
A consensus number, although I have seen estimates as high as 100 k.
>and about 1 kEvents/synapse,
Brain neurons seldom fire more than 100 times a second, and at any one time
only 1% to 10% of the brain's neurons are firing.
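Putting those figures together gives a rough ceiling on synaptic events per
second (a sketch using the numbers quoted above, taking the generous end of
the 1% to 10% activity range):

    # Rough upper bound on the brain's synaptic-event throughput.
    neurons = 10**11        # about 100 billion neurons
    connectivity = 10**4    # about 10 k synapses per neuron
    max_firing_rate = 100   # firings per second, rarely exceeded
    active_fraction = 0.10  # at most ~10% of neurons firing at once
    events = neurons * connectivity * max_firing_rate * active_fraction
    print(f"{events:.0e} synaptic events per second")  # 1e+16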
>assuming about 100*10^9 neurons,
Sounds about right.
>Moreover, this is _nonlocal_ MOPS
I assume you're talking about long range chemical messages sent by neurons
to other neurons; that would be one of the easier things to duplicate.
The information content in each molecular message must be tiny, just a few
bits. About 60 neurotransmitters are known but only a few of those, such as
acetylcholine, are involved in long range signaling; even if the true number
is 100 times greater (or a million times for that matter) the information
content of each signal must be minute. Also, exactly which neuron receives
the signal is not critical (it relies on diffusion, a random process) and
it's as slow as molasses in February.
If your job is delivering packages and all the packages are very small and
your boss doesn't care who you give them to as long as it's on the correct
continent and you have until the next ice age to get the work done, then you
don't have a very difficult profession. I see no reason why simulating that
anachronism would present the slightest difficulty.
>I'd rather run at superrealtime, possibly significantly so.
>100* seems realistic, 1000* is stretching it. Nanoists claim
>10^6*, which is bogus.
I think 10^9 would be closer to the mark. The signals in the brain move at
10^2 meters per second or less, light moves at 3 X 10^8, and nano-machines
would be much smaller than neurons so the signal wouldn't have to travel
nearly as far. Eventually the algorithms and procedures the brain uses could
be optimized, speeding things up even more, but this would be more difficult.
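The ratio is easy to check (a sketch; the 100-fold path shortening for
nano-machines is my own illustrative guess, not a measured figure):

    # Speedup from faster signals traveling shorter distances.
    brain_signal_speed = 1e2   # meters/second, an upper bound for neurons
    light_speed = 3e8          # meters/second
    path_shrink = 100          # assumed: nano-machine paths 100x shorter
    speedup = (light_speed / brain_signal_speed) * path_shrink
    print(f"~{speedup:.0e}x")  # 3e+06 from speed alone, 3e+08 with both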
>no groundbreaking demonstration of such reversible logics
>has been done yet.
Not so, reversible logic circuits have been built; see the April 16, 1993
issue of Science. They're not much use yet because of their increased
complexity, and with components as big as they are now the energy you save
by failing to destroy information is tiny compared to more conventional
losses. It will be different when things get smaller. Ralph Merkle, the
leader in the field, is quoted as saying "reversible logic will dominate in
the 21st century".
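To illustrate what "failing to destroy information" means, here is a minimal
sketch of one reversible gate, the Toffoli gate (my own illustration, not a
circuit from the Science article):

    # Toffoli (controlled-controlled-NOT): flips c only when a and b are 1.
    # The mapping is a bijection, so no input information is ever destroyed.
    def toffoli(a, b, c):
        return a, b, c ^ (a & b)

    # The gate is its own inverse: applying it twice recovers every input.
    for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
        assert toffoli(*toffoli(*bits)) == bits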
>It holds about as much promise as quantum cryptography and
>quantum computers, imo. (Very little, if any).
There is not the slightest doubt that quantum cryptography works, and not
just in the lab. Recently two banks in Switzerland exchanged financial
information over a fiber optic cable across Lake Geneva using quantum
cryptography. Whether it's successful in the marketplace depends on how
well it competes against public key cryptography, which is easier to use and
probably almost as safe if you have a big enough key. I don't want to talk
about quantum computers quite yet; a lot has been happening in the last few
weeks and I haven't finished my reading.
>One tends always to forget that atoms ain't that little, at
>least in relation to most cellular structures.
It's a good thing one tends to forget that, because it's not true. An average
cell has a volume of about 3 X 10^12 cubic nanometers, that's 3 thousand
billion. Just one cubic nanometer of diamond contains about 176 carbon
atoms.
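The multiplication is worth doing (a sketch using the figures just given):

    # Carbon atoms that would fill an average cell's volume with diamond.
    cell_volume_nm3 = 3e12  # cubic nanometers in an average cell
    atoms_per_nm3 = 176     # carbon atoms per cubic nanometer of diamond
    atoms = cell_volume_nm3 * atoms_per_nm3
    print(f"{atoms:.1e} atoms")  # ~5.3e+14, hardly "not that little"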
>>No known physical reason that would make it
>>[Nanotechnology] impossible.
>No known physical reasons, indeed.
I don't think there are any physical reasons why strong Nanotechnology is
impossible; that's why I don't put it in the same category as faster than
light flight, anti-gravity, picotechnology or time travel. If you disagree
with my statement then I want to know exactly what law of physics it would
violate, not all the reasons that would make it difficult. I already know
it's difficult; that's why we haven't done it yet.
>Just look at a diamondoid lattice from above, and look at
>the period constant. When zigzagged, C-C bonds are a lot
>shorter. Steric things.
If something is pretty rigid, like diamond, its steric properties are the
same as its shape properties, at least to a first approximation. Often
steric difficulties can be overcome just by applying a little force and
compressing things a little. Naturally it is vital for a Nanotechnology
engineer to remember that no molecule is ever perfectly rigid and at very
short distances anything will look soft and flabby. Drexler is not ignoring
this; he spends a lot of time in Nanosystems talking about it.
>We want to know, whether a) a given structure can exist
It is possible that some of the intermediate states of the object you want to
construct would not be stable. I can see two ways to get around this problem:
1) Always use a jig, even if you don't need it.
2) Make a test. If you know you put an atom at a certain place and now it's
mysteriously gone, put another one there and this time use scaffolding.
Neither method would require a lot of intelligence or skill on the part of
the Nanotech machines.
It's also theoretically possible that some exotic structures could not be
built: something that had to be complete before it's stable, like an arch,
but unlike an arch had no room for temporary scaffolding around it to keep
things in place during construction. It's unlikely this is a serious
limitation; nature can't build things like that either.
>b) whether we can build the first instance of this structure
Assuming it can exist (see above), the question of whether you can make it or
not depends entirely on your skill at engineering and has nothing to do
with science.
>c) this structure is sufficiently powerful to at least
>make a sufficiently accurate copy of itself.
Depends entirely on the particular structure you're talking about and on the
particular environment it is expected to be working in. Again, this is pure
engineering.
>That's a lot of constraints, and all of them physical.
No, none of them are physical, all of them are engineering.
>Claiming the problems to be merely engineering, is not good
>marketing, imo.
I wouldn't know, I'm no expert on marketing.
>There are excellent reasons to suspect this connectivity
> [of neurons] to be crucial
That's true, I don't think there is the slightest doubt. This vast
connectivity is the very reason why biological brains are still much better
than today's electronic computers at most tasks, in spite of their
appallingly slow signal propagation.
> so you have to simulate this connectivity.
Obviously, and I see absolutely nothing in the laws of Physics that would
forbid Nano Machines from equaling or exceeding this connectivity.
>you can't do it directly in hardware
Why not? The brain has a lot of connectivity, but a random neuron can't
connect with any other random neuron, only the closest 10 thousand or so.
The brain grows new connections, and I don't see why a machine couldn't do
that too if needed, but another way is to pre-wire 10 thousand connections
and then change their strength from zero to a maximum value.
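A minimal sketch of that second scheme (the class and method names are my
own, purely illustrative; 10 stands in for the 10 thousand pre-wired
connections):

    # Pre-wire each unit to a fixed set of neighbors, then "grow" or prune
    # connections by adjusting their strength between zero and a maximum.
    class Unit:
        def __init__(self, neighbors, max_strength=1.0):
            # Strength 0.0 means the connection is effectively absent.
            self.weights = {n: 0.0 for n in neighbors}
            self.max_strength = max_strength

        def set_strength(self, neighbor, value):
            if neighbor not in self.weights:
                raise ValueError("not among the pre-wired connections")
            self.weights[neighbor] = min(max(value, 0.0), self.max_strength)

    unit = Unit(neighbors=range(10))  # e.g. wired to the nearest units
    unit.set_strength(3, 0.7)         # strengthen one pre-wired connection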
>you must start sending bits, instead of pushing rods
Eugene! I know you can't mean that. Using the same logic you could say that a
computer doesn't send bits, it just pushes electrons around; and the brain
doesn't deal in information, it just pushes sodium and potassium ions around;
and Shakespeare didn't write plays, he just pushed ASCII characters around
until they formed a particular sequence.
John K Clark johnkc@well.com