Re: AI's and Emotions

From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Jun 24 1998 - 06:06:36 MDT


"Scott Badger" <wbadger@psyberlink.net> writes:

> Just a couple points that came to mind on this topic. I think perhaps that
> the term *emotion* is being used a bit loosely here.

Yes, we had better be more careful here.

> It's true that people
> commonly use phrases like *I feel motivated* or *I feel like watching TV*.
> If a thought just seems to pop into my mind (i.e. I want ice cream), I will
> probably say *I feel like some ice cream* but it's really a cognition, not
> an emotion. The basic emotions are typically thought to be anger,
> exhilaration, sadness, or fear. But even these basic emotional states have
> questionable objective validity.

Or rather, we certainly do experience them, but it is not certain that
they are basic. I'm fairly certain, for example, that what we commonly
call pleasure is composed of several subsystems (such as the "liking"
and "wanting" dopaminergic systems).

> The most widely supported theory of emotion suggests the following (1) we
> see the bear; (2) we cognitively interpret the experience as threatening;
> (3) we become physically aroused; (4) we cognitively assign a label to the
> aroused state based on the context, our personal history, whether anyone
> else is watching, etc.; and (5) the label we choose (i.e. the meaning we
> assign to the arousal) subsequently dictates our behavior (i.e. we run, cry,
> wrestle, etc.). The objective quality of these aroused states does not vary
> across the different emotions, though the subjective interpretation of them
> does. The chemistry is the same. We construct the rest. The level of
> arousal we experience springs from the value and relevance we assign to the
> experience.

Actually, there are some chemical differences in which neuromodulator
systems get activated - noradrenaline (arousal, aversive experience),
serotonin (safety?), acetylcholine (attention, arousal), dopamine
(motivation) and so on - but they are surprisingly small. Measured with
a polygraph, anger and happiness are indistinguishable.

> I think Anders made a reference to DeMato's (sp?) book, Descartes Error, and
> his assertion that emotions are necessary for good decision making. That
> may be true but what aspect of emotions?

Damasio.

I went for lunch with my research group, and we discussed this
problem. We ended up with the following conclusions:

In the brain, the dopaminergic reward systems seem to be involved in
reinforcement learning. They seem to turn our sensory input and
deductions into a scalar reinforcement estimate, telling us how well
or badly we did, and directly influencing plasticity in the basal
ganglia to change our behavior for the better.
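
To make the analogy concrete, here is a toy sketch in Python of such a
scalar reinforcement estimate driving a learning update (the numbers
and names are invented for illustration, not meant as biology):

    def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
        # Scalar estimate of how much better or worse things went than expected.
        delta = reward + gamma * values.get(next_state, 0.0) - values.get(state, 0.0)
        # "Plasticity": nudge the stored evaluation of the state toward the new evidence.
        values[state] = values.get(state, 0.0) + alpha * delta
        return delta  # positive ~ pleasant surprise, negative ~ worse than expected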

Since intelligent entities need to learn, and learn in an unsupervised
setting where the results of an action might not be apparent now but
only appear in the future, it is hard to avoid some kind of
reinforcement learning with an internal reinforcement estimator that
can do credit assignment. This means the entities need to be able to
evaluate outcomes as good or bad, and to estimate how good the results
of different actions will be. I would say this correlates quite well
with what we would call pleasure or pain.
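
A toy way of picturing such an internal estimator doing credit
assignment (the decaying-credit scheme is just one textbook mechanism,
picked here purely for illustration):

    def assign_credit(action_values, trajectory, outcome, alpha=0.1, decay=0.8):
        # A delayed scalar outcome still updates the evaluations of the
        # earlier choices that led to it; earlier choices get less credit.
        credit = 1.0
        for state, action in reversed(trajectory):
            key = (state, action)
            action_values[key] = action_values.get(key, 0.0) + alpha * credit * outcome
            credit *= decay

    values = {}
    assign_credit(values, [("hungry", "hunt"), ("prey seen", "chase")], outcome=1.0)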

Damasio makes a good case that we use these valences to do 'alpha-beta
pruning' of our decision trees: ignore possible actions whose outcomes
are likely to be bad, and concentrate on those that will lead to good
ones. Without this we would get trapped in needless thinking about bad
ideas, besides running the risk of choosing a bad action because we
cannot evaluate it as bad.
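
In toy terms, the pruning amounts to discarding candidate actions whose
gut-level valence estimate is clearly bad before spending any
deliberation on them (estimator and threshold invented for the
example):

    def prune_candidates(actions, estimated_valence, threshold=-0.5):
        # Keep only actions whose quick evaluation is not clearly negative.
        return [a for a in actions if estimated_valence.get(a, 0.0) > threshold]

    options = ["poke the bear", "back away slowly", "climb a tree"]
    gut_feel = {"poke the bear": -0.9, "back away slowly": 0.4, "climb a tree": 0.1}
    survivors = prune_candidates(options, gut_feel)  # the obviously bad option is never deliberated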

On the other hand, our emotions change the mode of our thinking. For
example, when aroused we tend to use strongly learned behaviors instead
of exploring new possibilities (this is apparently true even for high
levels of happiness); when feeling content we are less likely to act in
ways that involve risk; and so on. These modes (moods?) don't appear
necessary for rational thinking, so an AI might do without them. The
effect would likely be rather like a caricature of Spock - constantly
calm and efficient. But it is likely that having cognitive modes is a
good thing in many circumstances: in a crisis situation it might be
very useful not to be too calm; when encountering something unexpected
it might be useful to go into a rapid analysis mode; and so on.
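
Computationally one could caricature such modes as a single parameter
that shifts the balance between exploiting well-learned behavior and
exploring; again only an analogy, with all the numbers invented:

    import random

    MODE_EXPLORATION = {"aroused": 0.02, "content": 0.1, "curious": 0.4}  # invented values

    def choose_action(action_values, actions, mood):
        # High arousal means falling back on the best-learned response;
        # other moods leave more room for trying something new.
        epsilon = MODE_EXPLORATION.get(mood, 0.1)
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: action_values.get(a, 0.0))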

So our consensus became that AIs will most likely have at least
reinforcement emotions, and for efficiency reasons likely moods as
well. On the other hand, we have not yet agreed on what motivations
need to be included in a useful AI. A likely guess would be novelty
detection and approach; otherwise it would learn very badly and likely
get stuck in a rut.
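
The novelty guess could be pictured in the same toy terms as an extra
pull toward rarely seen states, fading with familiarity (a standard
count-based trick, used here purely as an illustration):

    from collections import Counter

    visit_counts = Counter()

    def novelty_bonus(state, scale=1.0):
        # Unfamiliar states get extra motivational weight, so the agent
        # keeps approaching new things instead of settling into a rut.
        visit_counts[state] += 1
        return scale / visit_counts[state] ** 0.5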

> I don't see why AI's couldn't use the assignment of value and the
> determination of relevance to make good decisions without the need for
> aroused states.

Note that we do not need arousal for decision making; it just
influences it. But assignment of value - that, I would say, is the
basis of emotion.

> I certainly don't want to lose my capacity for positive exhilaration (i.e. I
> love laughing). I would hope that part of the transhuman condition would
> involve greater control over, rather than the elimination of emotional
> states.

Exactly! We humans already have impressive control over our emotions
(most of them are, at least in adults, actually caused by internal
cognitive processes) and hardware connections between the limbic system
and the frontal cortex that are denser than in most mammals. That is
something we can develop further.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

