Re: Jaron Lanier Got Up My Shnoz on AI

From: J. R. Molloy (jr@shasta.com)
Date: Fri Jan 18 2002 - 18:32:22 MST


From: "John Clark" <jonkc@worldnet.att.net>
> Let me get this straight, first you say "intelligence is not sentience"
> and then you say we know other people are sentient because
> " We can confirm this by interacting intelligently with other people"
> and you see no contradiction in that?

There is no contradiction in confirming sentience via interacting with others.
The operative word is *interacting* -- the level of intelligence is secondary.
(Likewise, we dismiss solipsism via peer review.)

> >>Me:
> >>I think general problem solving is indistinguishable from
> >>intelligence and intelligence inevitably implies consciousness.
>
> > (((((((( LOL ))))))))))
>
> Why?

Because intelligence implies "consciousness" in the same way that fire implies
"phlogiston."

--------------------------

More from: John Clark
> 1) What problem can be solved by a parallel computer but cannot
> in principle be solved by a very fast serial computer?

Sounds like a question for a parallel computer.

> 2) Is there any reason to think that an AI would have to run on a serial
> computer?

There's probably less reason to think that an AI would have to run on a serial
computer than to think that it would have to run on a parallel computer, since
natural intelligence runs on parallel computers.
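The in-principle equivalence behind Clark's first question can be made concrete. Below is a minimal sketch (my own illustration, with hypothetical function names) of a "parallel" update rule -- one where every cell conceptually changes at the same instant -- computed instead by a serial loop over a frozen snapshot. The serial machine produces the identical result; it is slower, not less capable.

```python
# A toy rule where each cell's new value depends on its two neighbors.
# Conceptually this happens to all cells simultaneously (parallel);
# a serial machine simulates it by working from a snapshot, one cell
# at a time. Same outcome either way.

def parallel_step(cells):
    # "Simultaneous" update: every new cell reads only the old state,
    # so evaluation order is irrelevant.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def serial_step(cells):
    # The same rule, computed strictly one cell at a time against a
    # frozen copy of the old state.
    n = len(cells)
    snapshot = list(cells)
    out = []
    for i in range(n):
        out.append(snapshot[(i - 1) % n] ^ snapshot[(i + 1) % n])
    return out

state = [0, 1, 1, 0, 1, 0, 0, 1]
assert parallel_step(state) == serial_step(state)
```

The snapshot is the whole trick: by reading old values and writing new ones, the serial loop reproduces simultaneity exactly, which is why speed rather than computability is the only thing parallelism buys in principle.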

> >My old z80 is slower by orders of magnitude than a P4, but is not a jot
> >more or less sentient!

> You seem very sure, how did you find out?

That's easy: neither of them has ever claimed to be sentient.

------------------------

From: "James Rogers" <jamesr@best.com>
> Again, the computational process is irrelevant. Equality of outcome
> demonstrates equivalence.

While that is true for fully optimized systems (and I find your analysis thus
far exquisitely incisive), I'd add that equality of outcome does not
demonstrate equivalence when particular systems are deliberately impeded or
hindered. For example, a drugged human may do math no better than a horse,
but that doesn't mean the human brain cannot surpass the equine brain at
mathematics. So, when it comes to low standards, equality of outcome may
actually indicate that someone is manipulating the results and/or the
performance of the processes in question.
When AI demonstrates human-competitive original thought, it will no longer be
mechanical. It will be A-life.

--- --- --- --- ---

Useless hypotheses, etc.:
 consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego, human values, scientific relinquishment, malevolent AI,
non-sensory experience, SETI

We move into a better future in proportion as the scientific method
accurately identifies incorrect thinking.



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:11:47 MST