Re: Jaron Lanier Got Up My Shnoz on AI

From: J. R. Molloy (jr@shasta.com)
Date: Sun Jan 13 2002 - 12:01:21 MST


From: "Colin Hales" <colin@versalog.com.au>
> He thinks it's
> religion to believe AI is possible, I think it's religion to hope that it
> can't.

It looks as though Lanier confuses intelligence with sentience. We already
have AI, as reported by John Koza almost two years ago in _Genetic Programming
and Evolvable Machines_, Volume 1, Number 1/2 (ISSN: 1389-2576).
Self-awareness, or sentience, is an epiphenomenon that emerges from the kind of
massively parallel computational complexity the human brain engenders. If
artificial sentience (AS) emerges, it may complicate the business of machine
intelligence by causing autonomous robots to waste computing time in
contemplation. So, I think you're right to suggest that robots will be
designed to remain zombies... super-intelligent, but not self-aware, because
there's no money in creating machines that enjoy life as much as
contemplatives do.
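
To give a concrete, if toy, sense of what Koza means by AI via genetic
programming, here is a minimal sketch in Python (not Koza's system, just an
illustrative evolutionary loop whose names and parameters are my own
assumptions) that evolves small arithmetic expression trees toward a target
function:

    # Toy genetic programming sketch: evolve expression trees toward
    # f(x) = x*x + x. All names and parameters are illustrative only.
    import random

    OPS = ['+', '-', '*']
    TERMINALS = ['x', 1.0, 2.0]

    def random_tree(depth=3):
        """Grow a random expression tree as nested tuples (op, left, right)."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        """Recursively evaluate an expression tree at a given x."""
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        a, b = evaluate(left, x), evaluate(right, x)
        return a + b if op == '+' else a - b if op == '-' else a * b

    def fitness(tree):
        """Sum of squared errors against the target function (lower is better)."""
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

    def mutate(tree, depth=2):
        """Replace a randomly chosen subtree with a freshly grown one."""
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_tree(depth)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, depth), right)
        return (op, left, mutate(right, depth))

    def evolve(pop_size=200, generations=40):
        """Mutation-only evolutionary loop with truncation selection."""
        population = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness)
            survivors = population[:pop_size // 4]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return min(population, key=fitness)

    if __name__ == '__main__':
        best = evolve()
        print('best tree:', best, 'error:', fitness(best))

Nothing in this toy requires, or produces, anything like self-awareness,
which is the point: program induction counts as machine intelligence
without any sentience attached.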

------------------------

From: "Jacques Du Pasquier" <jacques@dtext.com>
> (By the way, in that same answer, Dennett states that expecting to get
> physical immortality soon through cell repair is a foolish
> technocratic fantasy that bizarrely underestimates the complexities
> of life.)

Thank you for mentioning that.
I didn't know Dennett was so very deserving of my respect and admiration.
Sounds like a real extropic scientist.

--- --- --- --- ---

We move into a better future in proportion as the scientific method
accurately identifies incorrect thinking.


