From: Scott Badger (wbadger@psyberlink.net)
Date: Tue Jun 23 1998 - 17:46:47 MDT
To Anders and Den:
Just a couple points that came to mind on this topic. I think perhaps that
the term *emotion* is being used a bit loosely here. It's true that people
commonly use phrases like *I feel motivated* or *I feel like watching TV*.
If a thought just seems to pop into my mind (e.g., I want ice cream), I will
probably say *I feel like some ice cream*, but that's really a cognition, not
an emotion. The basic emotions are typically thought to be anger,
exhilaration, sadness, and fear. But even these basic emotional states have
questionable objective validity.
The most widely supported theory of emotion suggests the following: (1) we
see the bear; (2) we cognitively interpret the experience as threatening;
(3) we become physically aroused; (4) we cognitively assign a label to the
aroused state based on the context, our personal history, whether anyone
else is watching, etc.; and (5) the label we choose (i.e., the meaning we
assign to the arousal) subsequently dictates our behavior (we run, cry,
wrestle, etc.). The objective quality of these aroused states does not vary
across the different emotions, though the subjective interpretation of them
does. The chemistry is the same. We construct the rest. The level of
arousal we experience springs from the value and relevance we assign to the
experience.
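Just to make that sequence concrete, here's a rough Python-flavored sketch of
how I picture it. All the names in it (appraise, arouse, label, behave) are my
own invention, not anything taken from the theory itself; the point is only
that the arousal step produces the same undifferentiated signal whatever
triggered it, and the label chosen from context is what drives the behavior.

    # Toy illustration of the five-step sequence described above.

    def appraise(stimulus):
        # (2) cognitive interpretation: how good/bad and how relevant is this?
        return {"value": -1.0, "relevance": 1.0} if stimulus == "bear" \
            else {"value": 0.0, "relevance": 0.1}

    def arouse(appraisal):
        # (3) physical arousal, scaled by the value and relevance we assigned
        return abs(appraisal["value"]) * appraisal["relevance"]

    def label(arousal, context):
        # (4) the chemistry is identical; the label depends on the context
        if arousal < 0.2:
            return "calm"
        return "fear" if context == "alone in the woods" else "excitement"

    def behave(emotion):
        # (5) the label we chose, not the raw arousal, dictates the behavior
        return {"fear": "run", "excitement": "watch", "calm": "carry on"}[emotion]

    print(behave(label(arouse(appraise("bear")), "alone in the woods")))  # -> run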
I think Anders made a reference to Damasio's book, Descartes' Error, and
his assertion that emotions are necessary for good decision making. That
may be true, but which aspect of emotions? Can't they get in the way of good
decision making as well? I think so. Emotions seem to be signals that what
we're experiencing is likely to have either a positive or negative impact on
our life. I don't see why AIs couldn't use the assignment of value and the
determination of relevance to make good decisions without the need for
aroused states (there's a rough sketch of what I mean after this paragraph).
It strikes me as a largely vestigial mechanism. Even so,
I certainly don't want to lose my capacity for positive exhilaration (e.g., I
love laughing). I would hope that part of the transhuman condition would
involve greater control over, rather than the elimination of, emotional
states.
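As for deciding from value and relevance alone, here's the kind of thing I have
in mind. Again, this is purely speculative and the names are made up; it just
scores each option by its estimated benefit times its relevance to the
situation and picks the best one, with no aroused state anywhere in the loop.

    # Speculative sketch: decide directly from value x relevance,
    # with no intermediate aroused state.

    def decide(options):
        # options: (action, value, relevance) triples, where value is the
        # estimated benefit of the action and relevance is how much the
        # situation actually bears on the agent's goals
        return max(options, key=lambda o: o[1] * o[2])[0]

    options = [
        ("run away",   0.9, 1.0),
        ("take photo", 0.3, 0.2),
        ("do nothing", 0.0, 0.5),
    ]
    print(decide(options))  # -> run away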
Once their programming abilities outstrip ours, won't it be up to AIs
whether they want to have emotions or not? I mean they'll probably figure
it out before we do, don't you think? If they choose to productively
interact with human-types, they'll probably want to program themselves for
behaviors that at least give the appearance of emotional reactions. If I'm
interacting with an AI and I'm depressed about something, our interaction
will be more productive if the AI can accurately identify my emotional state
and react empathetically (there, there, Dr. Badger...it'll be alright).
Eventually, I may forget that it's just a programmed response (like most of
mine are).
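For what it's worth, the "appearance of emotional reactions" wouldn't even need
to be sophisticated to be useful. Here's a deliberately crude sketch (the
keyword lists and canned replies are invented purely for illustration): spot a
few cues in what the human says, guess the emotional state, and pick an
appropriately empathetic response.

    # Toy sketch of giving the appearance of an emotional reaction:
    # classify the human's state from a few keywords, then reply in kind.

    CUES = {
        "depressed": {"depressed", "hopeless", "down", "sad"},
        "angry":     {"furious", "angry", "irritated"},
    }

    REPLIES = {
        "depressed": "There, there, Dr. Badger... it'll be alright.",
        "angry":     "That does sound frustrating.",
        "neutral":   "Go on.",
    }

    def detect_state(utterance):
        words = set(utterance.lower().split())
        for state, cues in CUES.items():
            if words & cues:
                return state
        return "neutral"

    print(REPLIES[detect_state("I'm feeling pretty depressed about this")])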
Scott Badger