From: Damien Broderick (damien@ariel.ucs.unimelb.edu.au)
Date: Wed Feb 25 1998 - 10:16:17 MST
At 09:23 PM 2/24/98 -0800, John Clark wrote at random, that is, freely (by
his own account):
> A man, animal or machine has free will if it can not always predict
>what it will do in the future even if the external environment is constant.
<snips>
>If I had total self knowledge then I'd always know what I was going to do
>next and so I'd feel like a robot
John, I think this is a terrible confusion, although one that many people
hold to. Freedom of choice for humans does not mean acting at random like
the Dice Man; that would be psychosis, not freedom. I don't think your
model implies that anyway, but rather that our consciousness or ego is a
module (either executive or interpretative) with very restricted
information about the full state of the self. This means that when we opt,
we do so from many more `unconscious' motives, and after many more
`unconscious' or `non-conscious' combinatory and analytical moves made in
parallel, than can be registered by the conscious self, except via some
kind of sampled/gestalted `feeling tone' of satisfaction, flow,
frustration, confusion, guilt, etc.
But most free choices, and the satisfying feeling that goes with acting in
ways that are not plainly constrained to our disadvantage, *do* have the
characteristic of being consistent with our acquired values. True, some of
those values function as meta-values, so that we can try to rewire others
lower down the hierarchy, or debug values and templates that were ported
into us before we had enough nous to evaluate and reject or modify them.
But I think that the more we make our values `our own', the freer we feel
- contrary to your claim that we would feel like robots.
I guess.
Damien Broderick
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:48:38 MST