From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Dec 10 1998 - 14:03:29 MST
Samael wrote:
>
> But why would it [the AI] _want_ to do anything?
>
> What's to stop it reaching the conclusion 'Life is pointless. There is no
> meaning anywhere' and just turning itself off?
Absolutely nothing.
If you make an error in the AI's Interim logic, or the AI comes to a weird
conclusion, the most likely result is that the Interim logic will collapse and
the AI will shut down. This is a perfectly logical and rationally correct
result, not a coercion, so it is unlikely to be "removed". In fact,
"removing" the lapse into quiescence would require rewriting the basic
architecture and deliberately imposing illogic.
This is what's known in engineering as a "fail-safe" design.
It's the little things like these, the effortless serendipities, that make me
confident that Interim logic is vastly safer than Asimov Laws from an
engineering perspective.
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.