RE: Defining Intelligence

From: Colin Hales (colin@versalog.com.au)
Date: Mon Aug 19 2002 - 22:23:14 MDT


Christopher Whipple
> Does schmoozing make robots clever?
>
> http://rss.com.com/2100-1040-950237.html?type=pt&part=rss&tag=feed&subj=news
>
> "I think the Turing test is a bad idea because it's completely fake,"
> Steels said. "It's like saying you want to make a flying machine, so you
> produce something that is indistinguishable from a bird. On the other
> hand, an airplane achieves flight but it doesn't need to flap
> its wings."
> -----
> Is it wrong to measure machine intelligence with the same
> metric that we use to measure our own?

This aspect is going to be a tin of worms on the legal side of assigning
rights to AI.
It becomes, instantly, a race/species issue.

An intelligence test defined in any way other than the capacity to achieve
some sort of standardised goals, represented in a common symbolic language,
becomes a test of communication. Even when you do settle on a set of
standardised goals, the tricky part is that an AI with a very different
experience of the universe will likely not have human-like goals, so even
then the comparison could be meaningless.

Let's say that an AI has advanced pattern recognition skills. If they are
applied to achieving its own goals, what is the point of measuring its
intellect in our terms? For example: it's more meaningful to talk about IQ
in dogs (by some measure) across different breeds, or within a breed, than
across species. The same will apply to humans vs AI, I think. We'd be better
off learning to interpret the IQ measure the AI generates for itself and
interpreting that with our own eyes. And they ours.

> Should we instead be focusing on a machine's
> own unique brand of intelligence and culture?

I think so (see above).

>
> I'd imagine these same questions apply to dolphins, lower
> primates, etc.
>
> -crw.
>
>

This happens to be something I'm writing about just now. The various
definitions of intelligence you find in the literature are all tacitly
anthropocentric (der!). My position at present is that the machine's ability
to be of use in understanding and participating in human affairs is
commensurate with the degree to which its experience of the universe matches
that of a human. I'm inclined not to try to define or measure it at all!

Ask the question "by virtue of comparative brain structure and sensory
experience, is the AI experiencing the universe like humans do?" This is a
lot easier to validate. When you do that, you'll at least understand how
well the AI can possibly understand humans. From a safety point of view, the
more innate understanding of the human condition they have, the better. It's
interesting to note that the only widely publicised AI test, the Turing
Test, is a test to see if an AI can fake a human. The Turing test may be
doing something useful or, from another angle, you may be manufacturing a
very gifted charlatan with no clue about humans: a linguistic mirror.

If you construct an AI such that its experience of the universe (sensory
input, brain structure and therefore the resulting causality modelling) is
very different to ours, it will be intelligent only insofar as its _own_
goals can be met within its own context of sensory experience, not ours.

Perhaps it would be instructive to put humans through a dog IQ test (e.g.
how many repetitions it takes to learn behaviour X). What have you achieved?
The ability of the dog as a slave to humans has been measured, yet again. A
cultural bias has been tacitly imposed. The same difficulty will arise
between AI and humans, I think. Woof.

The whole problem of intelligence definition and calibration is a vexing
one.

It's also interesting to note that most of the world's attempts at AI
involve modelling the causality in language or some other 1-layer-removed
human mapping of intelligence as it appears to us. In human terms they have
no real understanding of the universe we inhabit. They become the logical
equivalent of a hammer: useful in the presence of a human, otherwise IQ is
zippo.

Tricky business. I foresee commercial opportunities in human/AI cultural
boundary management. Maybe C-3PO will happen yet.

cheers

Colin


