Re: Is GAC Friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Fri Jun 29 2001 - 18:38:56 MDT


Robert J. Bradbury wrote:
> Case C: You get on a bus in Seattle for a trip downtown.
> You do not expect someone to shoot the driver causing the
> bus to take a deadly ride over the edge of a bridge.

Case D: You find a chatbot (or a GAC) that can answer real questions for you.
(We conjured such an entity years ago, on this very list.)

The most workable definition of intelligence is the ability to solve problems
and answer questions. So, if a chatbot can answer questions and solve problems
(even a hand-held calculator can solve problems), then it does have a modicum
of intelligence, despite not being autonomous.
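
(Purely to illustrate that point, and not anything GAC or mindpixel actually does:
a toy Python sketch that "answers questions" by looking up a few stored propositions
and "solves problems" by doing arithmetic, i.e., roughly calculator-grade intelligence.
The fact table and function name are invented for the example.)

    # Toy question-answerer: illustrative sketch only, not GAC/mindpixel code.
    # It "answers questions" from a tiny fact table and "solves problems"
    # by evaluating simple arithmetic -- about what a hand-held calculator does.

    FACTS = {
        "is water wet": "yes",
        "is fire cold": "no",
    }

    def answer(question: str) -> str:
        q = question.strip().lower().rstrip("?")
        if q in FACTS:                      # look up a known proposition
            return FACTS[q]
        try:                                # fall back to arithmetic, e.g. "2+2"
            return str(eval(q, {"__builtins__": {}}, {}))
        except Exception:
            return "I don't know."

    print(answer("Is water wet?"))   # -> yes
    print(answer("2+2"))             # -> 4
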

I applaud the efforts of the folks at mindpixel, even though they haven't yet
figured out that consciousness is a useless hypothesis. At least they have an
actual product, and it will be very interesting to see how it does on the
MMPI-2.

Stay hungry,

--J. R.

Useless hypotheses:
 consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism

     Everything that can happen has already happened, not just once,
     but an infinite number of times, and will continue to do so forever.
     (Everything that can happen = more than anyone can imagine.)


