Re: purpose of AIs

From: Carol Tilley (tilley@worldnet.att.net)
Date: Sun Dec 12 1999 - 22:43:35 MST


From: Kate Riley
> ....My problem with this notion of AI is that it is inherently
> circular, in that ultimately, the only way we could know that the AI is
> phenomenally more intelligent than any of us is for a being of
> phenomenally high intelligence to tell us so.

Kate, I know that there are many posters to this list who are phenomenally
more intelligent than I am. And there is even a higher tier composed of
phenomenally higher intelligences here who would be happy to tell me that
this is the case. Testing, prediction, and retrodiction are a few of the
tools that could confirm this 'suspicion' I have about their superior
intelligence. You view the argument as circular. I see it as more
'spiral'-like in representation.

> ....if the AI agreed with everything the human populace agreed with,
> it would be pretty useless to us as a Power.

I do not see AI and Power as equivalent. As Eliezer pointed out, the
Cro-Magnons were the last enormous evolutionary step in the hominid
progression. Achievement of AI is one of the possible approaches that could
culminate in the next step. And from there, exponential/geometric evolution
might progress to this 'Power'....or it might not. Hominid evolution has
been rather boringly flat as of late; it's time to take that next leap,
dontcha think?

> Therefore, if the AI decided that the human species should be obliterated,
> I would be justified in calling it a bad judgment call and taking arms
> against it.

"True warfare in which large rival armies fight to the death is known only
in man and in social insects." Dawkins: The Selfish Gene. One can only hope
that the AI has not 'inherited' this aspect of socialization that we seem
unable to overcome.

Ct
