From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Sat May 06 2006 - 13:48:18 MDT
Martin Striz wrote:
> On 5/6/06, Ben Goertzel <ben@goertzel.org> wrote:
>> Michael Vassar wrote:
>> > You *can't* rationally disagree with someone who you believe to be
>> rational.
>>
>> This is clearly untrue, for a variety of reasons.
>
> Rational people disagree all the time. Just trawl a few academic
> email lists or conferences. They can disagree when they have
> different starting evidence, or they weight the evidence differently.
> The difference between rational and irrational is not a qualitative
> one. People are rational to differing degrees and with respect to
> different things. No human is a neutral probabilistic inference
> machine.
>
> Martin
To presume that "Rational people disagree all the time," you must
believe that rational people exist. In my entire life I haven't found
any. I've found some people who appear to make rational decisions in
some sub-field(s) most of the time. Now this does assume that we agree
on the meaning of "rational", of which I'm not certain. I'm
adopting the "recognition criterion" specified by the grandparent
(Michael): "rational truth-seekers always respond to disagreement by
updating their beliefs until they can not predict the direction of
future disagreement." Given that recognition criterion, I feel safe in
saying "I don't know that I've ever met a rational person". (This does
require a bit of an extension, as it doesn't specify how to recognize
one in isolation...)
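To make that criterion concrete, here's a minimal sketch (entirely my
own illustration, with made-up numbers, not anything from Michael's
post): two Bayesian agents with a common Beta(1,1) prior each see a
different run of coin flips. With different private evidence they
disagree; once the evidence is pooled, their posteriors coincide and
neither can predict the direction of any further disagreement.

# Minimal sketch (hypothetical data, for illustration only): two
# Bayesian agents share a Beta(1,1) prior on a coin's bias but see
# different private evidence.  They disagree until the evidence is
# pooled, after which their posteriors are identical.

def posterior_mean(heads, tails, prior_a=1, prior_b=1):
    """Posterior mean of a Beta-Bernoulli model after the given data."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Different starting evidence, as Martin describes.
alice_data = (8, 2)   # 8 heads, 2 tails
bob_data = (3, 7)     # 3 heads, 7 tails

print("Alice:", posterior_mean(*alice_data))   # ~0.75
print("Bob:  ", posterior_mean(*bob_data))     # ~0.33

# Pool the evidence: both agents now condition on everything.
pooled = (alice_data[0] + bob_data[0], alice_data[1] + bob_data[1])
print("Both after sharing:", posterior_mean(*pooled))  # ~0.55 -- agreement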
Personally I've decided that people generally make their decisions by
pattern matching. And that the pattern matching heuristics are
optimized for fast recognition rather than for accuracy. People train
themselves by exposing themselves repeatedly to patterns that they need
to match, attempting to match them, and applying feedback. This is a
very different process from most descriptions of "rationality", and it
doesn't necessarily come up with the same result. E.g., in the
experiments discussed earlier, people were matching the "longest nearly
correct string", with that rule being applied both to the data they were
presented AND to the task description. Part of the task description was
elided. Whoops! That part changed the meaning of the instructions!
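A toy sketch of what I mean (my own illustration, with made-up
features and data): an online learner exposed repeatedly to patterns
and corrected by feedback. What it ends up with is a single fast
dot-product test -- quick recognition, not a chain of deduction, and
only approximately accurate.

# Toy sketch of "exposure + feedback" training (hypothetical data).
# The learner is a perceptron: after training, recognition is one fast
# dot product -- quick, but only approximately accurate, unlike a slow
# logical derivation.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(examples, epochs=20, lr=0.1):
    """Repeated exposure: cycle through patterns, nudge weights on error."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, label in examples:
            prediction = 1 if dot(w, x) > 0 else -1
            if prediction != label:              # feedback on a mistake
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
    return w

# Patterns the matcher "needs to match" (features are made up).
examples = [([1, 1, 0], 1), ([1, 0, 1], 1), ([0, 1, 1], -1), ([0, 0, 1], -1)]
w = train(examples)
print([1 if dot(w, x) > 0 else -1 for x, _ in examples])  # fast recognition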
Thus, I'm fairly certain that one could train people to not make the
specific mistake documented (with repeated exposure and feedback...in
lots of differing contexts). Unfortunately, there are LOTS of these
"glitches" in the way people think, because it ISN'T, basically, a
logical process. Logic is an abstraction from the patterns that people
have observed in each other. (Originally it was a kind of
formalization of Greek grammar, but it's changed a LOT since then.)
The problem is that pattern matching is a fast action, and logic is a
SLOW action. People often, perhaps usually, need to make decisions on a
time scale that doesn't allow logic to decide things. At best it can
check that a limited portion of the answer is sensible. It's like the
problem of error-checking a real-time control system. Optimization
doesn't yield correct answers; it yields answers that come "soon enough"
and are "sufficiently good". (Humans aren't even "almost optimal", but
evolution is headed towards an "almost optimal" solution for the current
environment. Unfortunately, the current environment isn't stable, so
you get a rather stochastic amplification being applied to the
stochastic process that is "random change".)
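A minimal sketch of the "soon enough" / "sufficiently good" trade-off
(again my own illustration, with an invented toy problem): an anytime
search that returns the best answer found before a deadline, rather
than running a slow exact procedure to completion.

# Minimal sketch of "soon enough" / "sufficiently good" decision-making
# (hypothetical example): an anytime search keeps the best answer found
# so far and stops when the time budget runs out.
import random
import time

def anytime_best(score, candidates, budget_seconds):
    """Return the best-scoring candidate found before the deadline."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for x in candidates:
        if time.monotonic() >= deadline:   # "soon enough" beats "correct"
            break
        s = score(x)
        if s > best_score:
            best, best_score = x, s
    return best, best_score

# Maximize a toy function over random guesses, with 10 ms to decide.
random.seed(0)
guesses = (random.uniform(-10, 10) for _ in range(10**7))
answer, value = anytime_best(lambda x: -(x - 3.0) ** 2, guesses, 0.01)
print(answer, value)   # near x = 3, but only "sufficiently good"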
N.B.: I may sound authoritative, but these are just wild ass guesses.
To me they seem reasonable, and I don't see why they would be
wrong...but they've never worn a harness.