the Turing test

From: Lyle Burkhead (LYBRHED@delphi.com)
Date: Mon Oct 21 1996 - 22:16:20 MDT


Ira Brodsky writes,

> If you can assure me that during this experiment the AI is operating
> completely on its own, and the "real person" is not permitted to post
> to rescue the AI from giving itself away, then I can describe a test
> that would resolve this question (assuming all of the humans cooperate).

So, you are still trying to find some test *other than* the Turing test.
You admit that you can't resolve this question by reading the posts.
You are still trying to peek behind the curtain.

> Come on. The reason I am not willing to make such a statement
> is that I (like most people) don't have the time to carefully read
> and analyze all 50+ posts per day.

A lame excuse. You don't have time to read all the posts, but you do
have time to set up some other test that requires "all of the humans"
to cooperate!

It's interesting to me that you think you would have to read and analyze
*all* the posts every day. Why? Can't you put some of us into an
"obviously not AI" category, so you would only have to scrutinize the
others?

> The fact [that] the "AI" is using a borrowed identity also suggests
> it has some serious weaknesses.

Whether it has weaknesses isn't the point. We all have weaknesses.
The question here is whether the AI can successfully participate in
the discussions on the list without giving itself away. So far it has
succeeded brilliantly. Nobody has a clue about who it is, and most
people seem to take it for granted that this is merely "Lyle's game,"
as Hal Finney put it.

E. Shaun Russell writes,

> If there *is* an AI on the extropian ML, it's been doing
> a good job of emulating humans, but a very bad job of
> maintaining its own culture.

How do you know what it does when it's not posting to the list?

Lyle


