Re: Weird mysterious ineffable stuff

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 22 1999 - 10:36:45 MDT


John Clark wrote:
>
> Eliezer S. Yudkowsky <sentience@pobox.com> Wrote:
>
> >Penrose & Hameroff claim that microtubules have several precisely
> >tuned characteristics needed for quantum coherence.
>
> Yep, that's what they claim.

Yep, that's why I used the word. The only reason I believe them at all
is that quantum coherence is a probable prerequisite for any number of
ineffabilities, not just the ones they propose.

> > *I* don't think anything ineffable will show up until we start recording
> >individual neurons
>
> We've been doing that for years.

In such a way as to notice subtle, rather than gross, stochastic
effects? In such a way as to be able to fully understand the algorithms
producing their output? In such a way as to preserve quantum coherence?
This is what I mean by knowing what to look for.

Let me rephrase - when we know enough about neurons to fully understand
the actual computations they're performing, or at least all the output
factors that could conceivably be involved in such a computation - in
short, when we can record the *subtle* effects - *then* we'll start
noticing the inexplicable stuff.

I'm not sure I credit the stochastic dogma of neural nets. It's such a
tremendous amount of wasted computational power. It seems to me, on
purely evolutionary grounds, that every little quaver will be exploited.
Neurons are not remotely the simplified little blobs of "neural nets",
or the flashes of excitation we record.
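
(To be concrete about the abstraction I'm dismissing - a toy sketch in
Python, mine, not anyone's model of a real neuron - the standard "neural
net" unit is just a weighted sum pushed through a squashing function;
dendritic geometry, spike timing, and chemistry all get thrown away:)

    import math

    def toy_unit(inputs, weights, bias):
        """The 'simplified little blob': a neuron reduced to a weighted
        sum passed through a logistic squashing function."""
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    print(toy_unit([0.5, 1.0], [0.8, -0.3], 0.1))  # ~0.55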

> > *and* we know what to look for.
>
> And that's the problem. If a theory doesn't have experimental confirmation
> then it at least needs to suggest an experiment, it needs to tell us what questions
> are important. Penrose & Hameroff haven't done that and until they do they're
> just producing hot air, not science.

I don't think so - they're pointing at neurons solving NP problems, and
at the conditions for quantum coherence. I'm not usually interested in
defending those two, since I think so much of their hypothesis is wrong,
but they've done a decent job of moving an incredibly abstract argument
- one often treated as being entirely philosophical - into the realm of
things that can, at least in concept, be settled experimentally. That
deserves respect.

> You'd need 40 qubits to equal a supercomputer, 100 to rule the world and
> 1000 to rival God. A 10-qubit quantum computer would be a fun toy but
> it's too small to be useful, certainly too small to do all the wonderful things
> Penrose wants it to.

Irrelevant. If you can establish that a subtle effect would enable a
lot of 5-qubit computations all over the brain, via a tweak that comes
with no major evolutionary cost, that'd be enough to win the day for
ineffability. The evolutionary argument doesn't need intrinsic
impressiveness, just a favorable cost-benefit ratio.
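
To put rough numbers on the cost-benefit point (my back-of-envelope
arithmetic in Python, not anything from Penrose or Hameroff): an n-qubit
state carries 2**n complex amplitudes, which is why 40 qubits is
supercomputer territory to simulate classically, while a 5-qubit effect
is nearly free - exactly what a cheap evolutionary tweak would look like.

    # Classical cost of tracking an n-qubit state: 2**n complex amplitudes,
    # figured at one 16-byte complex number per amplitude.
    for n in (5, 10, 40):
        amplitudes = 2 ** n
        mem = amplitudes * 16  # bytes
        print(f"{n:2d} qubits: 2**{n} = {amplitudes:,} amplitudes, ~{mem:.2e} bytes")

Thirty-two amplitudes are computationally trivial, but that's the point:
the effect only has to pay for itself, not impress anyone.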

Besides which, I think the concept of a neural qubit is fundamentally
misguided. The brain is not a quantum von Neumann processor. The way
it would probably have evolved, some quantumoid feature of a neuron
would have resulted in a slightly more efficient computation. Who needs
anything more? In particular, who needs explicit qubits?

> >> Me:
> >> If it were true you'd think people would be good at solving some of the
> >> known noncomputational problems,
>
> >Oh, nonsense. That's like saying people should be good at calculating
> >the output of neural nets.
>
> But people are good at calculating the output of neural nets, if they observe their
> operation for a while, at least sometimes. I'm a neural net and I'm pretty good at
> predicting what another neural net that inputs handwriting and outputs ASCII will do.

That's because you and the neural net are optimized for the same task.
Can you think of any NP-complete tasks that people are optimized for?
That the neural qubits could plausibly be performing? If not, your
argument just conflates the low level with the high level.
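
(For concreteness, a canonical NP-complete task - my illustration, not
Clark's - is subset sum: no known algorithm solves every instance in
polynomial time, and nothing about human cognition looks tuned for it.)

    from itertools import combinations

    def subset_sum(nums, target):
        """Brute-force subset sum, a canonical NP-complete task; the
        search space doubles with every added element."""
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)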

And remember, my evolutionary argument is that the ineffable effects
evolved as a speedup of existing (presumably P) computations, not in
response to a new environmental problem requiring ineffable computing.
And I'm not advocating the NP-quantum-computing version of ineffability
anyway; that's Penrose's kit.

> >If you had to make an AI with humanoid intelligence, using the same
> >number of computational elements as are in the human brain, with the
> >same degree of programming intelligence as the neural-wiring algorithm,
> >it would be necessary to use ineffable computing elements.
>
> Because otherwise you'd have to vastly increase the computational
> horsepower of the computer to do the same thing and that wouldn't be
> easy or cheap (or even possible I strongly suspect). Intelligence needs qualia.

Maybe. I doubt it was that much of a speedup except on a few particular
problems, and even if it was an order of magnitude, I'd still bet on a
seed AI's efficiency/intelligence/power curve being enough to make it on
available hardware. First because the AI is performing different tasks
on a different level of implementation; second because a seed AI should
be able to outthink the neural-wiring algorithm.

Remember, I started out as a Strong AIer. I still am a Strong AIer at
heart, just one who's been forced to believe in noncomputability.
Obviously, I tend to minimize the effect of ineffability. If it's such
a big deal, why do all our visualizations still obey the Turing formalism?

> >as far as I know, I was the first one to make the evolutionary
> >argument for emotions being easier than intelligence.
>
> If you publicly did it before Feb 8, 1995, then you beat me to it.

That beats my priority. I concede origin.

> >What do emotions have to do with qualia?
>
> Quite a lot actually, in fact it's the entire ballgame. Qualia without emotion
> makes as much sense as emotion without qualia.

I utterly disagree. I think these two are as separate as intelligence
and qualia, or intelligence and emotion. That is: entirely separate in
concept and in the ideal, totally intertwined in the way humans do it -
but the intertwining is a design flaw.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

