Re: UPLOAD: advocatus diaboli

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Jan 08 1997 - 11:49:50 MST


On Tue, 7 Jan 1997, John K Clark wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
>
> On Mon, 6 Jan 1997 Eugene Leitl <Eugene.Leitl@lrz.uni-muenchen.de> Wrote:
>
> >While my estimate may be too high yours is certainly
> >drastically too low.
>
> Unless Long Term Potentiation turns out NOT to be an important part of
> Long Term memory, and that doesn't seem very likely to me, my estimate of
> .001 bit/synapse must be closer to the truth than yours of 50 bit/synapse.

Most of those 50 bits are the connected-to-neuron ID; one needs a lot of bits
to absolute-address 10^11 neurons. The delay might not be necessary for an
AI, yet it is certainly relevant for a realistic upload, so a few bits
should be reserved for the delay representation as well. Add the type of the
synapse, the strength, etc. Did I say 50 bits? Make that 64.
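
A quick back-of-the-envelope of where those bits go (a sketch in Python;
only the neuron-ID part follows from 10^11 neurons, the field widths for
delay, type and strength are my own assumptions):

import math

# Hypothetical per-synapse bit budget, assuming absolute neuron addressing.
NEURONS = 1e11                                   # rough neuron count in a human brain
neuron_id_bits = math.ceil(math.log2(NEURONS))   # ~37 bits to name the target neuron

# Illustrative field widths (assumptions, not measured values):
delay_bits    = 8    # conduction/synaptic delay, quantized
type_bits     = 8    # synapse/transmitter type
strength_bits = 11   # synaptic weight: sign plus dynamic range

total = neuron_id_bits + delay_bits + type_bits + strength_bits
print(neuron_id_bits, total)   # -> 37 64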
 
> The Article I referred to in the January 28 1994 Science is by Dan Madison and
> Erin Schuman, I would not be surprised if it earns them a Nobel Prize someday.

Thanks for the valuable pointer. I have been neglecting Science far
too much lately.

> They found that when a synapse strengthens its functional link to another
> neuron that synapse releases a chemical (nitric oxide) that diffuses to many
> other synapses and causes those synapses to be strengthened also. If, as most
> think, long term memory is encoded by varying the strength of the 10^14
> synapses that connect the 10^11 neurons then the conclusion is obvious.

But NO is not the only neurotransmitter/neuromodulator: there are
several (some say several hundred) neuron classes, many kinds
of synapse types, and oodles of different neurotransmitters. NO's diffusion
range is also certainly limited. Does it really affect hundreds of
synapses?
 
> Apparently I'm not the only one who thinks so. Terrence Sejnowski of the Salk
> Institute, one of the best neural modelers in the world, certainly believes

Sejnowski is certainly very well known. But that doesn't mean NO
diffusion-induced potentiation is the silver bullet that explains away
every single mystery.

> that this reduces the storage capacity of the brain. In an editorial in the
> same issue of Science that Madison and Schuman announced their results he
> says "The individual synapse cannot be the computer bit of the brain. Instead
> of thinking of a synapse as representing a piece of information, you can now
> begin thinking of a population of potentiated synapses acting together".

That would be very good news, if true. OTOH, it would require translating
the scanned biological circuitry into some other coding, with a more
suitable mapping to the underlying hardware. It _is_ a straightforward
thought, yet it makes matters much more complicated: yet another
nontrivial pass applied to the data set...
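
To make the point concrete, a minimal sketch (my own illustration, nothing
from the paper) of what such a translation pass could look like, assuming
for the sake of argument that co-potentiated synapses within NO diffusion
range get collapsed into populations sharing one effective weight:

import numpy as np

def population_recode(weights, neighbors, threshold=0.5):
    # Toy translation pass: collapse clusters of co-potentiated synapses
    # into shared population weights.  weights is a per-synapse array,
    # neighbors[i] lists the synapse indices NO diffusion reaches from i.
    # Purely illustrative; no claim that this matches the biology.
    populations, assigned = [], set()
    for i, w in enumerate(weights):
        if i in assigned or w < threshold:
            continue
        group = [i] + [j for j in neighbors[i]
                       if weights[j] >= threshold and j not in assigned]
        populations.append((group, float(np.mean([weights[j] for j in group]))))
        assigned.update(group)
    return populations   # far fewer parameters than one weight per synapse

w = np.array([0.9, 0.8, 0.7, 0.6, 0.9, 0.1])
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3], 5: []}
print(population_recode(w, nbrs))   # two populations instead of five individual weights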
 
> >Evolution doesn't produce complex circuitry for no good
> >reason, if a simpler one would have been sufficient
>
> I profoundly disagree. The chance that Evolution would just stumble across
> the simplest solution is astronomically small. The winner in the battle of

Yes, but generating _many_ subsystems when a single one would have been
sufficient... The evolutionary distance should be, at least intuitively,
larger.

> Evolution is not the one who has the perfect solution, just the one that
> finds a solution that is better than the competition. Evolution is slow and
> stupid, but I seem to remember you and I going down this road before.

Yeah, right ;) However, if evolution does indeed occur on such a grand
scale as biology, cosmogony, and cognitive science (Edelman's Darwinian
mind), and even now GAs can produce superhuman-quality digital designs,
it might not be that slow (in its digital incarnation) nor that stupid?
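
(For the record, the GA loop itself is trivial; this toy sketch just
maximizes the number of set bits in a genome. Real evolved digital designs
of the kind I mean use far richer genomes and fitness functions; everything
here is made up for illustration.)

import random

def evolve(genome_len=32, pop_size=50, generations=100, mutation=0.02):
    # Minimal genetic algorithm: fitness = number of 1-bits in the genome.
    fitness = lambda g: sum(g)
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)    # one-point crossover
            child = [bit ^ (random.random() < mutation) for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(sum(evolve()), "of 32 bits set in the best genome found")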
                       
> >What is the purpose of a discrete, modular, pretty precise
> >structure if all it is good for is storing just .001 bits.
>
> There is no purpose, Evolution just screwed up. You're not the first to be

Evolution has no defined purpose, yet it produces as artefacts a consistent
trend toward higher complexity, longer taxon longevity, a higher
cerebralization coefficient, etc. After all, the step from archaebacteria to
us meek mehums is somewhat large, isn't it? The random-walk-back-to-the-wall
argument may seem plausible at first, however I don't think that's all.
There is surely some fiendish subtlety we haven't seen yet.

> disappointed by Nature's ineptitude as seen in these new findings. Roger Nicoll,
> a neuroscientist at the University of California, was yet another scientist
> quoted in the same issue, he was rather blunt: "Very Provocative. Nature has
> gone to elaborate lengths to create a structural edifice that can give you
> synapse specificity. To then just degrade the process and let it spread
> around a bit, makes it seem like Nature blew it somehow".

It may be that diffusion defines a neighbourhood relation for conformal maps
(the sombrero function), or maybe it's for something utterly different. I know
that additional synapses sprout when the dynamic range of a single one is
exceeded, and that they also mature, but diluting this complex machinery? I
dunno.
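
(By the sombrero function I mean a difference-of-Gaussians neighbourhood
kernel: short-range excitation with a longer-range inhibitory surround. A
sketch with made-up width parameters:)

import numpy as np

def sombrero(d, sigma_exc=1.0, sigma_inh=3.0, a_exc=1.0, a_inh=0.5):
    # Difference-of-Gaussians ("Mexican hat") as a function of distance d.
    # Parameter values are purely illustrative.
    return (a_exc * np.exp(-d**2 / (2 * sigma_exc**2))
            - a_inh * np.exp(-d**2 / (2 * sigma_inh**2)))

d = np.linspace(0, 8, 9)
print(np.round(sombrero(d), 3))   # positive near zero, negative at mid range, ~0 far out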
                           
> >Ask Joe Strout, he burns any number of MFlops for hours to
> >simulate just _one_ biologically realistic neuron, and far
> >from running in realtime.
>
> The complexity of an individual neuron is irrelevant, the Madison and Schuman
> findings are about redundancy. The point is, it may not take any more computer
> power to simulate many neurons than to simulate one.

Well, he doesn't just do single units; he does (small) nets as well.
                       
> >We don't know whether it [Nanotechnology] violates the laws
> >of physics. It may, it may not.
>
> Nanotechnology is just the ability to move atoms with tolerances that are

"Move" is not move. Hauling heavy noble gas atoms on a cold Ni surface/reversibly
poking holes in molybdenum disulfide surface is one thing, writing
diamondoid billion-atom-carbon-structures from active species is another
matter entirely.

> very small by everyday standards but still larger than the minimum tolerance
> allowed by Heisenberg. We've already moved atoms around with a STM, so what

Oh jeez, I was referring to the advertised positional mechanosynthesis tip
accuracy of 100 pm, which is an intrinsic property of the assembler arm
stiffness. I'd rather have 10 pm, since I regard 100 pm precision as much
too coarse to generate an almost-perfect diamondoid lattice. Apart from that,
there's reversibility, a minimum reaction set (which set is minimal?), etc.
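
(To put numbers on the stiffness point: by equipartition the RMS thermal
positional error of a tip held by an effective spring k_s is about
sqrt(kT/k_s), so 10 pm instead of 100 pm means roughly a hundredfold stiffer
arm. Back-of-the-envelope only:)

import math

kB, T = 1.380649e-23, 300.0        # Boltzmann constant (J/K), room temperature (K)

def stiffness_for_sigma(sigma_m):
    # Effective spring stiffness (N/m) needed for an RMS thermal
    # displacement of sigma_m metres: sigma = sqrt(kB*T/k_s).
    return kB * T / sigma_m**2

for sigma in (100e-12, 10e-12):
    print("%.0f pm -> ~%.1f N/m arm stiffness" % (sigma * 1e12, stiffness_for_sigma(sigma)))
# -> 100 pm needs ~0.4 N/m, 10 pm needs ~41.4 N/m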

> Law of Physics could Drexler's Nanotechnology be violating? I think it must
> be the same one a 1000 ton airplane would be violating.

The physics sure scales differently down at those sizes.
  
> >>John:
> >>It's your responsibility to prove it's impossible
> >>not mine to prove it's possible.
>
> >Eugene:
> >Wrong. Science works differently. _You_ have to prove it,
> >not vice versa.
>
> Yes, if you say a perpetual motion machine or The Lorrey drive is impossible
> it's your responsibility to prove they are impossible, and you can quite

No, basically: if they violate a very basic tenet of science you can just
lean back and let the proponents produce the evidence. The more
outrageous the claim, the more solid the evidence required.

> easily do so by pointing out that one violates the law of conservation of
> energy and the other violates the law of conservation of momentum. If you say
> that a 1000 ton airplane is impossible you have to prove that there is a new
> law of Physics that places an absolute limit on the weight of flying machines.
> I don't think you can do that.

That's a wrong comparison. Mechanosynthesis hasn't had its Wright brothers
yet.
  
> >No man-made mechanosynthesis works. A STM demonstration of
> >basic set of mechanosynthesis reactions, validating computer
> >runs is sufficient for _me_. We don't have such evidence
>
> It's true we can't make molecules to order with a Scanning Tunneling
> Microscope (STM), or if we can we can't find them yet, detecting the product
> of such a reaction is probably more difficult than causing it. However there

I doubt it, since STM has atomic resolution, and we already know where we
caused the reaction (e.g. a hole in the surface). Finding the point of
interest should not be a problem. Causing a defined process on an atomic
scale, merely by bobbing the tip and applying a voltage pulse, is much
harder. I've watched an STM experiment which plucked Au clusters from the
gold tip; it wasn't trivial, and the clusters were not well defined.

> is no reason to think a STM can only do Physics and not Chemistry, if it

The distinction between chemistry and physics is highly arbitrary: quantum
chemistry is pure physics, for example. Mechanosynthesis is basically
physics/computational chemistry, pretty obscure disciplines, but that is
not the reason for my objections. Lack of evidence/simulation validation
is.

> turned out to be true that would prove the existence of some mysterious new
> physics we know nothing about. It would have to be mysterious indeed because
> we already know a STM can break a chemical bond.

Yes, but can it make/break bonds specifically, giving us a minimal yet
all-purpose mechanosynthesis reaction set?
 
> In the June 16 1995 issue of Science it's shown that if electrons of the
> correct energy are shot at an atom from the tip of a STM the atom will
> resonate and the resulting vibration will break the chemical bond. According

So far, nothing outrageous.

> to the researchers the procedure is somewhat faster than they expected and
> it does not require any exotic conditions such as very low temperature.
> J.W.Lyding, one of the authors of this report, is quoted as saying " We'd
> like to make small, electronic devices on the nanometer scale".

I doubt this means mass production. The whole idea of nanotechnology is
that it can autoreplicate. Limited manipulation capability is almost
worthless if we have to do it with macroscopic, nonautonomous devices.
Perhaps I should elaborate: doubtless an STM can do atomic manipulation.
The question is: is that capability sufficient for autoreplication/all-purpose
mechanosynthesis? Nobody knows.
           
> Also, as you know Carbon tubes of nanometer diameter are pretty easy to make,
> recently it's been found that molten vanadium oxide can form a coating on
> these carbon tubes, the carbon can then be dissolved away using conventional
> chemical techniques leaving pure vanadium oxide tubes of nanometer diameter.
> Vanadium oxide is a powerful catalyst for many chemical reactions, so it
> should be possible to use them as tiny test tubes, Chemistry done very small.

Yes. The Heckl group is now considering mounting nanotubes (not buckyballs)
on the tip of the STM needle, possibly using photochemistry to activate
precursors, etc. But it's still not simple.

> You could also use them as molds for all sorts of different materials.

Interesting tech, certainly.

ciao,
'gene
 
>
> John K Clark johnkc@well.com


