From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Dec 17 1998 - 00:07:18 MST
JKC has noted that evolution has put a tremendous amount of effort into
detecting and concealing lies, so a perfect truth machine is unlikely.
I say exactly the opposite: Evolution has put so much effort into lies
that there's probably a module of the brain devoted to lying (anyone
know if someone's looked for it?), and thus its activity might be very
easy to detect with an fMRI. Evolution baffles verbal and kinesic
perceptions, but would have absolutely no reason to defend against
neuroimaging. The inventor might not even need much cognitive science;
a neural net might be very easily trainable to decode "lying" brain activity.
Biofeedback could probably suppress the "lying" cues, but not the
resulting "subject is using biofeedback" cues.
Do I swear it would work? Of course not. But as a science-fictional
premise, it is completely plausible. What is not plausible, IMHO, is
that this invention would result in an age of world peace. Widespread
violent chaos followed by totally new forms of government would be my guess.
The effect would be to strengthen all forms of power. Democracies could
enforce honesty; dictators could enforce obedience. The democracies
would win, but first there'd be an interregnum in the democracy -
politicians and bureaucracies, facing unemployment en masse, would
band together and do anything to hold onto power. (The modern U.S. is a
factionalized oligarchy with the demos holding the balance of power, and
the oligarchic factions competing to please the demos. The threat of a
truth machine might cause the factions to unite against the demos, like
term limits but more so.)
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.