From: J. R. Molloy (jr@shasta.com)
Date: Fri Jun 29 2001 - 19:53:10 MDT
> Of course, a sufficiently broad and competent simulation should be able to
> pass a Turing Test (and game AI/chatbots manage it every day)... so a
> question: what's the list bias on what a sufficient test for sentience is?
Well, you could ask the agent under investigation, "What is a sufficient test
for sentience?" and see what it comes up with.
> Many people I've spoken to seem to have an aversion to -- hypothetically --
> declaring a Turing-Test-passing entity sentient if they can examine and
> understand its algorithms. If they can't understand how it works, they're
> happy with that.
Right, that sums it up nicely: if we can understand how it works, then (to
some people) it's not human (nor a human-competitive intellect). Then again,
suppose you could understand how the human brain works, but nobody believed
you? Of course, you don't have to build a 777 jet airliner to prove that you
know how it works, but then no one person _does_ completely understand how a
777 works. It takes thousands of people to build such a complex machine.
Likewise, I think it will take tens of thousands of people to build an
artificial human-competitive brain, because such a machine far exceeds the
complexity of a jet airliner.
Stay hungry,
--J. R.
Useless hypotheses:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:08:22 MST