As such, not very likely. I have my own theories about how emotions work;
they have to do with the existence of cognitive objects called "goals"; most
of the interesting stuff comes from our two-tiered architecture, the
centrality of the pleasure/pain system, and the coordination between the
goal-system and the worldview.
Point being... no, this doesn't seem very likely. Even setting aside the
subjective view, the emotional processes are too complex to be purely
reflexive or behavioristic. The amount of grief parents experience over the
death of a child correlates roughly with the amount of parental investment
lost, from a reproductive cost-benefit standpoint. This is evolutionary,
rather than conscious, in origin... and demonstrates that evolution can
control the emotions in detail.
Evolutionary psychology explains emotions far more complex and devious than
fright and flight.
The subjective aspect of emotions does come from an entire orchestra of goals,
viewpoint alterations, belief changes, and physiological responses which
stimulate further changes... but all the cognitive stuff is still there; it is
an integral part.
Finally, although this is getting into the hard problem of conscious
experience, I don't think the physiological stuff can explain the subjective
aspect. Is adrenaline conscious?
> If that were the case, then is it not also possible that Deep Thought and
> its successors have similar illusionary emotions? Suppose Deep Blue wins a
> match. Whilst all the while appearing the same, staid machine predictably
> pulling bytes from RAM and processing them, it deviates slightly and pulls
> a different set of bytes to usual. Nothing odd to the designer there. But
> then there is nothing odd about a little adrenalin escaping into our blood
> stream, and our heart rate going up in the example with the tiger.
Emotions aren't mysterious forces. They can't hide from us. Turing machines
are deterministic and wholly causal; any force operating on Deep Thought would
be explicable as the sum of its parts.
> When Deep Blue won the series recently, I wondered whether it felt a most
> primitive sense of achievement, in the same way that our own primitive
> ancestors might have felt a sense of achievement if they had lifted their
> first stone, and successfully hurled it at a passing antelope. Yes, the
> thing is just a mathematical algorithm. But so, apparently, are we.
It didn't. Deep Blue may have emergent chess stratagems arising from its
complex, physical-process-level chess exploration algorithms, but I don't
think there was any form of feedback for successful and unsuccessful moves.
In training mode - if it possesses such a thing - it might be said to
experience "pleasure" at a successful move. In truth, I think this is
torturing the metaphor. Except from a functionalist perspective, there
wouldn't be much internal difference between "pleasure" and "pain" - just
positive and negative numbers. If Deep Blue were reversed to make the worst
possible move, it might not require much reprogramming... i.e., the
distinction between pleasure and pain would be surface-level, rather than
deeply causal.
> I suspect that "emotions" have evolved because they are an evolutionary
> advantage. Deep Blue has more chance of survival if it wins matches, and
> hence suffers selection pressures of the same kind that our ancestors did.
> If each system is a logical parallel of the other, then are "emotions" not
> possible results in both cases?
Emotions did indeed evolve because they are evolutionary advantages. Although
Deep Blue's weightings for piece value may be "evolutionary" in some sense, I
don't think the term can really apply in the sense you use it. Linear numbers
aren't complex enough to "evolve"; evolving sequences of instructions, as in
the TIERRA environment, are another matter.
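By "linear numbers" I mean something on the order of the following
hypothetical piece-value weighting (the numbers are the standard textbook
values, not Deep Blue's). Tuning the weights changes the play, but there is
no sequence of instructions there for selection to rearrange:

/* A hypothetical piece-value weighting in the spirit of, but not taken
 * from, Deep Blue.  "Evolution" here could do no more than nudge these
 * five numbers; there is no code for it to mutate or recombine. */

static const double piece_weight[5] = {
    1.0,   /* pawn   */
    3.0,   /* knight */
    3.0,   /* bishop */
    5.0,   /* rook   */
    9.0    /* queen  */
};

/* Linear material score: a weighted sum of my piece counts minus yours. */
static double material_score(const int mine[5], const int theirs[5])
{
    double score = 0.0;
    for (int i = 0; i < 5; i++)
        score += piece_weight[i] * (mine[i] - theirs[i]);
    return score;
}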
Deep Blue, again, is not that level of AI. It is a brute-force mechanism. I
don't believe it possesses evolvable heuristics as did EURISKO, or evolvable
concepts as did AM, or evolvable anything except weightings. And weightings,
not possessing much in the way of causal force - though this is true only for
Deep Blue and the like - are not emotions. At the very best, they could be
instincts or reflexes, possessing no cognitive correlates.
It is possible that the piece-value weightings have far-reaching and subtle
consequences in Deep Blue's Great Search Tree. Kasparov has stated that Deep
Blue had a new kind of intelligence. It could be that all the interacting
weightings had the same kind of emergent effect as all our interacting neural
weightings... although on a much smaller scale! It would take a lot of
looking, and specialized tools, to detect those patterns. The fundamental
questions are these:
1. Are the patterns complex? Are they non-linear, non-algebraic? Are they
divisible into sub-patterns? Are the patterns "causal" - composed of the
complex interaction of sub-patterns? Are they Turing complete?
2. Do the patterns evolve? Can they reproduce? Can they mix? Do they compete?
Although I am not an expert on search trees, I very much suspect that search
trees, or at least A* and its variants, are not that complicated. They may
be Turing-complete in theory, though I actually doubt it... but even that
Turing completeness may not be translatable into a level where patterns
could "evolve" and the best patterns would win games. In a lot of
technically Turing-complete processes, the Universal Turing Machine is
*very* far removed from the level at which the process is actually
implemented.
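For reference, the skeleton of game-tree search looks roughly like this - a
generic depth-limited minimax sketch with stubbed-out move generation and
evaluation, not Deep Blue's actual algorithm, which adds alpha-beta pruning,
custom hardware, and much else. Notice how little room the skeleton leaves
for anything to "evolve":

#include <stdio.h>

#define MAX_MOVES 64   /* hypothetical bound on legal moves */

typedef struct { int squares[64]; int to_move; } Position;   /* stub board */

/* Stub move generator: a real engine would fill children[] with legal moves. */
static int generate_moves(const Position *p, Position children[MAX_MOVES])
{
    (void)p; (void)children;
    return 0;
}

/* Stub static evaluation: a real engine returns a material/positional score. */
static double evaluate(const Position *p)
{
    (void)p;
    return 0.0;
}

/* Generic depth-limited minimax.  The whole search is this recursion plus
 * comparisons of numbers; nothing in it mutates, reproduces, or competes. */
static double minimax(const Position *p, int depth, int maximizing)
{
    Position children[MAX_MOVES];
    int n = generate_moves(p, children);

    if (depth == 0 || n == 0)
        return evaluate(p);      /* leaf: reduce the position to one number */

    double best = maximizing ? -1e300 : 1e300;
    for (int i = 0; i < n; i++) {
        double s = minimax(&children[i], depth - 1, !maximizing);
        if (maximizing ? (s > best) : (s < best))
            best = s;
    }
    return best;
}

int main(void)
{
    Position start = { {0}, 0 };
    printf("root value: %f\n", minimax(&start, 4, 1));
    return 0;
}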
That said, evolution at that level is possible in principle. There are
theories in which "hedonistic neurons" - neurons that want to fire as often
as possible - provide the basic organizing principle for the brain. But
accepting that is not enough; we would also need the most often-firing
neurons to be the ones holding the best solutions to some cortical-level
problem. On the other hand, Deep Blue's basic level is much more directly
semantic than the basic neural level.
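To be concrete about what "want to fire as often as possible" could mean
mechanically, here is a cartoon version - my own sketch, not anyone's
published model - in which a unit strengthens whichever inputs made it fire,
so that it fires more readily next time:

#define N_INPUTS 8

typedef struct {
    double weight[N_INPUTS];
    double threshold;
} HedonisticNeuron;

/* One time step: fire if the weighted input exceeds the threshold, then
 * reward (strengthen) the inputs that contributed to firing. */
static int step(HedonisticNeuron *n, const double input[N_INPUTS], double rate)
{
    double sum = 0.0;
    for (int i = 0; i < N_INPUTS; i++)
        sum += n->weight[i] * input[i];

    int fired = (sum > n->threshold);
    if (fired) {
        for (int i = 0; i < N_INPUTS; i++)
            n->weight[i] += rate * input[i];   /* firing reinforces itself */
    }
    return fired;
}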
> If we accept that emotions are illusionary, then I doubt that the
> capability to experience emotion has a Boolean value. Probably, all systems
> have emotion, but today, most humans are capable of perceiving that emotion
> only in certain animals.
Again, I disagree. Once I argued over whether a thermostat had meaning. I
took the position that it was meaningless, because meaning was a far cognitive
function, or even a far physical function. Likewise, emotion as we perceive
it is a functional or surface matter, often depending on how fuzzy and lovable
something *looks*. Emotion, as that which produces the functions, is a
cognitive matter. The subjective aspect of emotions is a far cognitive
matter, impinging on things that, quite frankly, I don't think any mortal
being will ever understand. But we don't need the subjective aspect, any more
than we need conscious experience for human-equivalent intelligence.
> >Modern computers are emotionless, and utterly unintelligent. Their level of
> >"pattern" ranges from undirected physical processes such as Deep Thought, to
> >bacterium-level organisms such as a word processor, to the insect-level
> >mind-boggling complexity of Windows 95.
>
> You seem to be implying that insects don't have emotions. Have you never
> seen an angry wasp? The tone of its beating wings rises, it flies faster,
> and its sting hangs lower.
So if I have a program,
struct Wasp {
    float wing_tone;     /* tone of the beating wings */
    float air_speed;     /* how fast it flies */
    float sting_angle;   /* how low the sting hangs */
};
and I adjust the three variables you mentioned, the program has emotions?
Likewise, if I videotape a wasp and play the videotape, does the videotape
have emotions? Certainly you'll ascribe emotions to the thing you see on
videotape, even though it's simply a recorded image.
It's not how it looks... it's the functional part.
> If I understand you correctly, then doesn't the fact that you don't see
> emotion in insects whilst I do, prove that beauty is in the eye of the
> beholder?
>
> Emotions in computers may be even easier than you intended to point out!
Computer programs work exactly the same, regardless of what we perceive in
them. The same, to some extent, might be said of reality... except that we're
a part of it. The way we ascribe emotions is a fruitful area of cognitive
science. It is not, however, a fruitful area of philosophy. Any emotions
that are observer-dependent, I am not interested in. Likewise for other AI
"achievements" that require me to believe the symbol G0025 represents a
hamburger merely because the string "hamburger" is attached.
Profitable computer emotions will require functional capability, not merely
surface resemblance. Rick Knight doesn't want to comfort a forlorn computer
program... well, that's not what I had in mind. More like a program that
could be frightened by an imminent crash and save your data, or could "want"
to send mail and take all necessary actions to do it. Once you have the goal
system for setting up those chains, doglike emotions are only a short step away.
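For concreteness, here is a minimal sketch of the kind of goal system I mean
- the names and "sensors" are entirely hypothetical - in which a "fear"
condition (imminent crash) triggers saving your data, and a "want" (pending
mail) triggers the actions needed to send it:

#include <stdio.h>

typedef struct {
    const char *name;
    int  (*condition)(void);   /* is the goal triggered? */
    void (*action)(void);      /* what to do about it    */
} Goal;

/* Hypothetical sensors and effectors. */
static int  crash_imminent(void)    { return 0; /* e.g. failing disk */ }
static void save_user_data(void)    { printf("saving your data...\n"); }
static int  mail_pending(void)      { return 1; /* something in the outbox */ }
static void send_pending_mail(void) { printf("connecting and sending mail...\n"); }

static Goal goals[] = {
    { "fear of crashing",   crash_imminent, save_user_data    },
    { "wants to send mail", mail_pending,   send_pending_mail },
};

int main(void)
{
    /* One pass of the goal loop; a real program would run this continuously. */
    for (int i = 0; i < 2; i++)
        if (goals[i].condition())
            goals[i].action();
    return 0;
}

The functional capability - noticing the condition and acting on it - is the
part that matters; whether the condition is labeled "fear" is surface.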
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I
think I know.