From: Charles Hixson (charleshixsn@earthlink.net)
Date: Tue Sep 24 2002 - 10:52:16 MDT
spike66 wrote:
> ...
> I am another one who has been totally baffled
> most of my life by the human emotional operating
> system. A very complicated thing is this, but
> let me propose an idea. One can clearly state
> without being condescending, that one is going
> meta, which is to say, one wishes to have the
> readers of an idea temporarily turn off their
> emotions, and look at an idea the way a human
> level AI would see it.
A basic problem here is identifying the constructs that a human level AI
would use. In my schema the AI could respond from one of four basic
positions (purpose, desirability, modeling, or logic), each modulated by
one of two secondary positons (desireability could be modulated by
either modeling or purpose, e.g., but not by logic) [there would also be
tertiary and quaternary modulations, but those are fixed by the first
two choices]. That yields 16 basic positions as equally likely starting
states, although any particular AI would probably soon favor a
particular position, as I tend to favor logic modulated by modeling
(modulated by purpose). There is also introversion vs. extroversion,
yielding 32 basic starting positions. Perhaps there are more, but so
far I haven't seen the theoretical need for them. (These will certainly
be elaborated greatly as the AI experiences, of course.)
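To make the bookkeeping concrete, here is a minimal Python sketch of one way the schema might be represented. The only validity rule encoded is the one given as an example above (desirability cannot be modulated by logic); the names, the error handling, and the introversion flag on the sample position are illustrative assumptions, not part of the original description.

    from dataclasses import dataclass

    BASIC = ("purpose", "desirability", "modeling", "logic")
    # Only the exclusion mentioned above is encoded; the full table is left open.
    FORBIDDEN_MODULATIONS = {("desirability", "logic")}

    @dataclass(frozen=True)
    class Position:
        primary: str        # the basic position the AI responds from
        secondary: str      # first modulation (tertiary/quaternary follow from these)
        introverted: bool   # introversion vs. extroversion

        def __post_init__(self):
            if self.primary not in BASIC or self.secondary not in BASIC:
                raise ValueError("unknown position")
            if self.primary == self.secondary:
                raise ValueError("a position cannot modulate itself")
            if (self.primary, self.secondary) in FORBIDDEN_MODULATIONS:
                raise ValueError(f"{self.primary} cannot be modulated by {self.secondary}")

    # The position favored above: logic modulated by modeling (then purpose).
    # The introversion flag is chosen arbitrarily here.
    favored = Position(primary="logic", secondary="modeling", introverted=True)
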
But I suspect that in this sense the AI would behave on an intellectual
level equivalent to the ways that people behave.
> We have seen posters clearly become upset by one
> idea or another, but as an exercise, let us try
> (I dont even know if it is possible) to switch off
> the emotions. Let us try to view a human level
> social problem the way a mathematician would view
> a number theory problem.
This, however, is not how a real human level AI would act. This is more
like a movie stereotype (and I've always found those unconvincing).
This is a pure position of logic-modeling without the sub-modulations
of purpose or desirability, and I doubt that such could ever become
intelligent. (Great assistant draftsmen, however. Hook it up to a
model for dynamic flows, and it could even provide reasonable critiques
of a proposed design. [Even that, though, might require at least the
ability to evaluate on the basis of either desirability or purpose, if
not both.])
> An AI would not know or
> care what pain is, for instance, other than to
> observe that sentients avoid it, like a mathematical
> function appears to avoid an asymptote.
Not exactly true. An AI wouldn't have a body, so it wouldn't experience
body sensations. But it would need to be sensitive to its environment,
and react appropriately to that. And a sensation essentially analogous
to pain would be needed to avoid the AI damaging the container within
which it resided. The AI would experience this as a extremely low level
of desirability, and combine this with some model indicating what was
being avoided. (Sensations are a necessary component of any even
vaguely intelligent program, to say nothing of a full AI. Shrdlu, e.g., was
sensitive to what was typed on the keyboard.)
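To pin that down a little, here is a rough sketch (in Python, with all channel names and thresholds invented for the example) of what such a pain-analog might look like: a sensed condition that threatens the container is mapped to an extremely low desirability, combined with a model of what is being avoided.

    from dataclasses import dataclass

    @dataclass
    class Sensation:
        channel: str    # e.g. a hypothetical "cpu_temperature" sensor
        value: float    # raw reading from the environment

    @dataclass
    class Appraisal:
        desirability: float   # extremely low value == the pain-analog
        avoided: str          # model of what the low desirability points away from

    def appraise(s: Sensation) -> Appraisal:
        # Hypothetical rule: overheating threatens the container, so it gets
        # an extremely low desirability paired with a model of the threat.
        if s.channel == "cpu_temperature" and s.value > 90.0:
            return Appraisal(desirability=-1000.0, avoided="damage to the container")
        return Appraisal(desirability=0.0, avoided="nothing")

    # The AI would then favor actions that steer away from low-desirability
    # states, much as sentients avoid pain.
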
> We know TBC is a touchy subject. It gets all
> tangled up with racism and we already know racism
> is a hell of a problem for humanity. I am amazed
> we are able to handle it for any length of time.
> Let us try to go meta and view it the way an
> intelligent machine would.
>
> spike
The AI might well have a very different set of touchy areas, but to
assume that they wouldn't exist is unreasonable.
-- 
-- Charles Hixson
Gnu software that is free, The best is yet to be.