JKC has noted that evolution has put a tremendous amount of effort into
detecting and concealing lies, so a perfect truth machine is unlikely.
I say exactly the opposite: Evolution has put so much effort into lying
that there's probably a module of the brain devoted to it (anyone
know if someone's looked for it?), and thus its activity might be very
easy to detect with an fMRI.
Biofeedback could probably suppress the "lying" cues, but not the resulting "subject is using biofeedback" cues.
Do I swear it would work? Of course not. But as a science-fictional premise, it is completely plausible. What is not plausible, IMHO, is that this invention would result in an age of world peace. Widespread violent chaos followed by totally new forms of government would be my guess.
The effect would be to strengthen all forms of power. Democracies could enforce honesty; dictators could enforce obedience. The democracies would win, but first there'd be an interregnum in the democracies - politicians and bureaucrats, faced with mass unemployment, would band together and do anything to hold onto power. (The modern U.S. is a factionalized oligarchy with the demos holding the balance of power, and the oligarchic factions competing to please the demos. The threat of a truth machine might cause the factions to unite against the demos, like term limits but more so.)
-- 
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.