From: Steve Nichols (steve@multisell.com)
Date: Mon Jan 14 2002 - 08:35:29 MST
>Date: Sun, 13 Jan 2002 11:01:21 -0800
>From: "J. R. Molloy" <jr@shasta.com>
>Subject: Re: Jaron Lanier Got Up My Shnoz on AI
From: "Colin Hales" <colin@versalog.com.au>
>> He thinks it's
>> religion to believe AI is possible, I think it's religion to hope that it
>> can't.
>It looks as though Lanier confuses intelligence with sentience. We already
>have AI, as reported by John Koza almost two years ago in _Genetic
>Programming and Evolvable Machines_, Volume 1, Number 1/2 (ISSN: 1389-2576).
>Self-awareness, or sentience, is an epiphenomenon that emerges from
>massively parallel computational complexity such as the human brain
>engenders.
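For anyone unfamiliar with the Koza work cited above, a toy
genetic-programming sketch in Python may help. It is illustrative only:
the target function, population sizes, and mutation-only scheme are my
own inventions (Koza's actual systems also use crossover), but it shows
the basic idea of evolving programs (here, arithmetic expression trees)
against a fitness measure.

import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    # Grow a random expression tree over the variable x and small constants.
    if depth <= 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of squared errors against the (made-up) target x**2 + x.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=3):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def evolve(pop_size=200, generations=50):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)           # lower error is better
        survivors = population[:pop_size // 4]  # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

best = evolve()
print('best tree:', best, ' error:', fitness(best))

Whether a system like this counts as "AI already achieved" is, of course,
exactly the point in dispute below.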
There is no evidence for emergentism, and the philosophical case for
epiphenomenalism is weak at best. Complexity does not equate to
infinite-state (self-organising) circuitry, since finite-state, hard-wired
systems can be equally complex. Sentience, or abstract thought, only
becomes possible once a circuit has lost its external clock (the primal
eye) and become analogue (effectively infinite-state). See
www.multi.co.uk/primal.htm
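To make the finite-state half of that distinction concrete, here is a toy
hard-wired machine (the states, inputs, and transition table are invented
for illustration, and this is not MVT itself). However many states one
adds, its behaviour remains discrete and driven by an external clock;
nothing in it self-organises.

# One clock tick per input symbol; unknown inputs leave the state alone.
TRANSITIONS = {                      # (state, input) -> next state
    ('idle', 'start'): 'running',
    ('running', 'pause'): 'idle',
    ('running', 'stop'): 'halted',
}

def run(inputs, state='idle'):
    for tick, symbol in enumerate(inputs):
        state = TRANSITIONS.get((state, symbol), state)
        print(f'tick {tick}: input={symbol!r} -> state={state!r}')
    return state

run(['start', 'pause', 'start', 'stop'])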
>If artificial sentience (AS) emerges, it may complicate the business of
>machine intelligence by causing autonomous robots to waste computing time
>in contemplation. So, I think you're right to suggest that robots will be
>designed to remain zombies... super-intelligent, but not self-aware,
>because there's no money in creating machines that enjoy life as much as
>contemplatives do.
This may well be true, except that many scientists are theory-driven
rather than commercially motivated. Also, who knows what side benefits
might accrue from MVT-based conscious machines?
www.steve-nichols.com
Posthuman Organisation