PSYCH: Re: Emotion Selection

From: Freeman Craig Presson (dhr@iname.com)
Date: Sun Jul 04 1999 - 11:22:56 MDT


On 3 Jul 99, Anders wrote:
[...]
> What is needed is to balance the emotions, which provide us with
> motivation and creates a mode of thinking, with rational evaluation of
> situations. Just like we need to learn how to learn, think rationally and
> act, we need to learn how to manage our emotions. They work great in
> certain respects, but not all.

This and the suggestion of refining emotions (so that, by analogy, you get a
specific warning message instead of "General Protection Fault" :-) sound
really useful to me.

Meditation, again, is useful for the former. I haven't been doing it much
lately, but when it was a regular habit, I had the experience several times
of sitting down to meditate when I was emotionally stirred up. Eventually, I
would get to a place where I had a calm, still perspective, but could still
see my stirred-up "stuff." It would be as if, instead of having the stuff
inside my body-mind, I was holding it on my lap. Once you've done this, by
whatever method, it becomes possible to do it at will, at least until
something comes along to kick you into an even deeper morass of emotions.

Actually, there are two general flavors of meditation: stillness and insight.
I am talking above about the effect of stillness; insight "should" be helpful
in the refining process, but I suspect it really only gets us to the point of
correctly identifying the source of whatever emotion is "up" at the time,
rather than necessarily developing more finely differentiated emotions at a
low level.

I'd love to know about working shortcuts here, since for most people, enough
meditation practice to be useful for this amounts to years.

Some formulas I have encountered and partially tested are:

- All fear comes from a threat to the ego;
- Anger is derivative of fear;
- Depression is the result of repressed fear (this may well not be the only
source of depression, or perhaps some cases are just very subtle and
indirect).

Anders again, different post:
> (Hmm, what are the proper english words for these two states? In Swedish it
> is likely "avund" and "missunsamhet" - trust the Swedes to have subtle
> nuances for envy :-)
I think the main English word for "missunsamhet" is "covetousness," from "to
covet" as in "Thou shalt not covet thy neighbor's wife's ass."

Greg Burch wrote:

> Almost all emotions serve vital functions in mental life. They serve as
> motors and gauges. I think transhumanists should aim to refine our
> emotions, but not to do away with them. (I also think that the idea of a
> truly sentient AI without emotions is like imagining an automobile without
> a motor or instruments.)

I have a simple example germane to the parenthetical above. "To be able to
play chess is a sign of a fine mind. To be able to play it well is a sign of
a misspent youth." There were no human-strength chess computers during my
misspent youth, but as they became available and I played against them, I
noticed that I would lose more games to them than I "should" (when I was
doing a lot of this, I was at USCF 2000 strength, +/- 50 points, and the
highest ratings claimed by commercial computers were around 2100).
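
To put a number on "should": under the standard Elo formula, the expected
score depends only on the rating difference. A minimal sketch in Python
(the formula is the textbook Elo expectation; the ratings are just the
ones quoted above, not anything the program vendors published):

    # Standard Elo expected-score formula:
    #   E = 1 / (1 + 10 ** ((R_opponent - R_own) / 400))
    def expected_score(own_rating, opp_rating):
        """Expected fraction of points for the player rated own_rating."""
        return 1.0 / (1.0 + 10.0 ** ((opp_rating - own_rating) / 400.0))

    # USCF 2000 against a machine claiming ~2100:
    print(round(expected_score(2000, 2100), 2))  # -> 0.36

So even granting the machines their full claimed ratings, I should have been
scoring roughly a third of the points; the shortfall beyond that is what
wanted explaining.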

Introspection provided a compelling answer: I wasn't treating the process of
playing chess with a computer as a real game. The reason? *The computer
didn't give a damn whether it won or lost.* I see a chess game as a contest
between two sentients; me sitting at my PC moving pieces on a virtual board
connected to, say, gnuchess, _looks_ like a game of chess, but doesn't have
any contest aspect. I can use exactly the same interface to connect to a
chess server and play a human; in that case there is a contest, we both
"care" about the outcome (or agree to care for the moment), and indeed I
then play up to my (declining) abilities.

It seems like even Kasparov had trouble taking Deep Blue seriously at first,
but he learned quickly. He, of course, was playing publicly and had the
computer's developers very much in the game, and besides, he has a bigger ego
than I do :-)

My example addresses roughly half of Greg's comment, actually. I would say
that a lack of emotional expression would be one way for an AI to fail a
Turing test, just as it makes gnuchess fail my chess-opponent Turing test
(gnuchess doesn't talk back at all, so it's clearly not trying to pass);
and OTOH, giving the AI some basic drives and a sense of self may well be
necessary for it to formulate its own goals and act on them (nothing new
here, really).

-- fcp@traveller.com (Freeman Craig Presson)


