From: Mike Lorrey (mlorrey@datamann.com)
Date: Sat Jun 30 2001 - 09:36:51 MDT
"J. R. Molloy" wrote:
>
> If GAC helps to develop AI more consistent with your expectations and
> requirements, then all is well and good. But it doesn't hurt to be
> careful. So why not test the beast to see how friendly it is _before_
> it even comes close to AI? After all, if we can't test a chatbot, how
> can we ever hope to test a full-blown AI?
Not a bad idea at all. Of course, I'm of the opinion that if human-level
AI is achievable, it is just as likely to be less difficult to achieve
than many people think, merely non-obvious. While at Extro5, I was
baiting Eli a bit when I said, "Any sufficiently complex chatbot is
indistinguishable from a human," but that statement holds only from the
perspective of an outside observer. Whether we can make a program that
perceives itself the way a human perceives himself is the real
challenge, though that self-perception may itself be an illusion. Of
course, Eli is right that 'any old sort of complexity is useless'.
Still, I think it would be proper to develop AIs of lower intelligence
first, with fewer logical capabilities, and test them, evolving
eventually to human level and beyond. Let's say a current
state-of-the-art chatbot like GAC is somewhere around the intelligence
of a dumb parrot.