Re: purpose of AIs

From: Clinton O'Dell (clintodell@visto.com)
Date: Mon Dec 13 1999 - 07:46:58 MST


I say Kate is right. You don't believe something someone tells you just because they say it's true. That's absurd! I, for one, would destroy it if it threatened my life. I work with AI to become immortal. I am my own God; I don't need to create one.

Clint O'Dell
clintodell@visto.com

-----Original Message-----
From: Kate Riley <kate_riley7@hotmail.com>
Sent: Sun, 12 Dec 1999 22:14:51 EST
To: extropians@extropy.com
Subject: Re: purpose of AIs

Ah, don't I wish I had a bloody microwave!

Eliezer, I must admit that I have not yet read your essay on this topic, so
please forgive me if I am raising points you already raise.

My apologies, I haven't been completely clear. My problem with this notion
of AI is that it is inherently circular, in that ultimately, the only way we
could know that the AI is phenomenally more intelligent than any of us is
for a being of phenomenally high intelligence to tell us so. Let's say that
we determine the intelligence of an AI by the number of right "answers" it
gives us (answers being defined here as correct solutions to problems and/or
questions in all fields, from science to philosophy - a haphazard definition, so
feel free to correct me, and I'll reassess). Somewhere down the line, the
AI is going to give an answer that does not concur with what is believed by
the human populace to be the right answer. This is inevitable, since it is
all but certain that we as a species are wrong in some of our beliefs. In
addition, if the AI agreed with everything the human populace already
believed, it would be pretty useless to us as a Power.

Now, when the AI hits one of these points and comes up with an answer
contrary to what we believe to be true, there is no way of knowing whether
the AI is right or mistaken, for there is no outside third party (which
would have to be more intelligent than either the AI or the humans) to
mediate. Therefore, sure, I'm willing to grant that a Power is possible.
However, we cannot be certain that an AI /is/ a Power in the sense that we
cannot be certain that it is sufficiently more intelligent than us.
Therefore, if the AI decided that the human species should be obliterated, I
would be justified in calling it a bad judgement call and taking up arms
against it.

I feel as if I'm still not being terribly clear, and once again, I
apologize. I would be happy to answer any questions or challenges.

Best,
Kathryn Riley




