Re: Weird mysterious ineffable stuff

From: John Clark (jonkc@worldnet.att.net)
Date: Tue Sep 21 1999 - 22:35:55 MDT


Eliezer S. Yudkowsky <sentience@pobox.com> wrote:

>Penrose & Hameroff claim that microtubules have several precisely
>tuned characteristics needed for quantum coherence.

Yep, that's what they claim.

> *I* don't think anything ineffable will show up until we start recording
>individual neurons

We've been doing that for years.

> *and* we know what to look for.

And that's the problem. If a theory doesn't have experimental confirmation
then it at least needs to suggest an experiment; it needs to tell us which questions
are important. Penrose & Hameroff haven't done that, and until they do they're
just producing hot air, not science.

>Heat and noise is only a problem for macroscopic, crystalline
>(non-stochastic) quantum coherence. Remember that NMR-based QC (*not*
>the one we've been discussing lately) that would operate on a cup of coffee?

The idea behind the NMR approach is that instead of making one Quantum
Computer you make trillions of them. Nearly all will be unable to maintain quantum
coherence and so produce a random output, but a tiny minority will survive
and produce the correct answer. The trouble is that as the number of qubits
increases the number of surviving computers drops exponentially, and so
does the signal-to-noise ratio. Even under the best conditions, after about
10 qubits the signal is going to be drowned out by noise, and a neuron is
far from the best conditions.
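The exponential die-off can be sketched numerically. The n/2**n scaling below is my own assumption, taken from the standard pseudo-pure-state analysis of bulk-ensemble NMR, not anything stated in this post:

```python
# Toy model of bulk-ensemble (NMR) quantum computing signal decay.
# Assumption (mine, not from the post): the usable coherent signal
# from a thermal ensemble falls off roughly as n / 2**n with qubit
# count n, while the noise floor from the decohered majority of
# molecules stays roughly constant.

def signal_fraction(n_qubits: int) -> float:
    """Fraction of the ensemble contributing coherent signal (toy model)."""
    return n_qubits / 2.0 ** n_qubits

for n in (2, 10, 40):
    print(f"{n:3d} qubits: signal fraction ~ {signal_fraction(n):.3e}")
```

By 10 qubits the coherent fraction is already under one percent, and by 40 it is eleven orders of magnitude down, which is the drowning-in-noise problem described above.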

You'd need 40 qubits to equal a supercomputer, 100 to rule the world and
1000 to rival God. A 10-qubit Quantum Computer would be a fun toy but
it's too small to be useful, certainly too small to do all the wonderful things
Penrose wants it to.
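The 40-qubit figure can be motivated with a back-of-envelope calculation; the memory model below is my assumption (a full state vector of 2**n complex amplitudes), not John's arithmetic:

```python
# Rough back-of-envelope (my assumption): a classical machine that
# simulates n qubits exactly must store 2**n complex amplitudes, so
# the memory cost doubles with every added qubit -- which is roughly
# why ~40 qubits is where classical supercomputers give out.

BYTES_PER_AMPLITUDE = 16  # one double-precision complex value

def sim_memory_bytes(n_qubits: int) -> int:
    """Memory needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 40):
    print(f"{n:3d} qubits: {sim_memory_bytes(n):,} bytes of amplitudes")
```

Ten qubits fit in a few kilobytes; forty need about sixteen terabytes, hence the jump from "fun toy" to supercomputer territory.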

>> Me:
>> If it were true you'd think people would be good at solving some of the
>> known non-computational problems,

>Oh, nonsense. That's like saying people should be good at calculating
>the output of neural nets.

But people are good at calculating the output of neural nets, at least sometimes,
if they observe their operation for a while. I'm a neural net and I'm pretty good at
predicting what another neural net that inputs handwriting and outputs ASCII will do.
For that matter, I'm much better than random chance at predicting what a neural net
such as yourself will output. On the other hand, people are not good at solving any
non-computational problem, not a single one, not ever. Why?

>If you had to make an AI with humanoid intelligence, using the same
>number of computational elements as are in the human brain, with the
>same degree of programming intelligence as the neural-wiring algorithm,
>it would be necessary to use ineffable computing elements.

Because otherwise you'd have to vastly increase the computational
horsepower of the computer to do the same thing, and that wouldn't be
easy or cheap (or even possible, I strongly suspect). Intelligence needs qualia.

>as far as I know, I was the first one to make the evolutionary
>argument for emotions being easier than intelligence.

If you publicly did it before Feb 8 1995 then you beat me to it.

>What do emotions have to do with qualia?

Quite a lot actually, in fact it's the entire ballgame. Qualia without emotion
makes as much sense as emotion without qualia.

         John K Clark jonkc@att.net



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:05:14 MST