From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Feb 18 2006 - 21:09:07 MST
If you're going to charge straight ahead and develop an unsafe system in
the hopes you can bolt on a module that makes it safe, I've got to ask
you, just how exactly does this module work? Or to put it another way,
what are your grounds for believing that such a thing can exist? To
make it clear why I'm skeptical, imagine that you've just said to me,
"I'll write the program sloppily, and afterward I'll add on a module
that fixes all the bugs." Now maybe an FAI programmer can write such a
module - since it's DWIM (Do What I Mean), it's an FAI-complete
problem, not just AI-complete - but it *will* be *a lot* more complex
than the original program. What kind of module are you visualizing
that's simpler than a full FAI and can check the output of an
evolutionary programmer, and does this trick require constraints built
into the evolutionary programming (EP) module?
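
To make the shape of what I'm objecting to concrete, here's a minimal
sketch - every name in it is hypothetical, and it assumes nothing more
than a generic evolutionary-programming loop with a verifier bolted on
at the end. The point is where the difficulty lands: the search loop
is trivial to write, and verify_safe carries the entire burden.

    import random
    from typing import Callable, List

    # Hypothetical illustration only; not anyone's actual design.
    Candidate = List[float]  # stand-in genome; real EP evolves programs

    def mutate(parent: Candidate, rate: float = 0.1) -> Candidate:
        """Produce a child by perturbing the parent's genome."""
        return [g + random.gauss(0, rate) for g in parent]

    def evolve(fitness: Callable[[Candidate], float],
               generations: int = 100, pop_size: int = 50) -> Candidate:
        """Unsafe-by-construction search: it optimizes fitness,
        and nothing else."""
        population = [[random.uniform(-1, 1) for _ in range(8)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            population = survivors + [
                mutate(random.choice(survivors))
                for _ in range(pop_size - len(survivors))
            ]
        return max(population, key=fitness)

    def verify_safe(candidate: Candidate) -> bool:
        """The bolted-on module. To work, it must recognize every
        unsafe output the search above can emit - i.e. it must already
        embody the do-what-I-mean judgment, the FAI-complete part."""
        raise NotImplementedError("this function is the entire open problem")

    def run() -> None:
        champion = evolve(fitness=sum)  # toy fitness: maximize genome sum
        # The safety of the whole pipeline rests on this single call:
        if verify_safe(champion):
            print("deploying", champion)

Notice the asymmetry: evolve fits in a dozen lines, while verify_safe
has to pass judgment on anything evolve might produce. That asymmetry
is the whole of my objection.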
Since the safety of the whole project depends on this verifier being
practical - and if the verifier isn't practical, the project ends up
literally worse than nothing - maybe you ought to build the verifier
first, just to make sure it works?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence