Zero Powers wrote:
>
> > From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
> >One does not perform "research" in this area. One gets it right the first
> >time. One designs an AI that, because it is one's friend, can be trusted
> >to recover from any mistakes made by the programmers.
>
> Who has *ever* gotten anything right the first time? Good luck. You'll
> need it.
Zero errors is an improbable goal, although I have had, in my life, the
experience of writing a big, complex object-persistence module (around 60K of
C++ code) and having it compile and run without a hitch.
Zero *nonrecoverable* errors should be doable. When the AI is stupid, errors
are recoverable because the programmers can go back in and correct them. When
the AI is smart, errors are recoverable because the AI is capable of
recognizing, and therefore recovering from, its own errors.
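
To make the invariant concrete, here is a toy sketch in C++ (every name in it
is hypothetical illustration, nothing from any actual architecture) of a
system where, at every capability level, *some* check gets to veto an
erroneous action before it commits. While the system is stupid, the check is
external programmer review; once it is smart, the check is its own self-model.

#include <initializer_list>
#include <iostream>
#include <string>
#include <vector>

// Toy model of the recoverability invariant: at every capability
// level, some checker can veto an erroneous action before it commits.
// Every name here is hypothetical illustration, not a real design.

struct Action {
    std::string description;
    bool erroneous;  // toy ground truth that the applicable checker sees
};

enum class Capability { Stupid, Smart };

// While the AI is stupid, the programmers review every action.
bool programmerApproves(const Action& a) {
    return !a.erroneous;  // stands in for humans going back in to correct it
}

// Once the AI is smart, its own self-model catches the mistake.
bool selfCheckPasses(const Action& a) {
    return !a.erroneous;  // stands in for the AI recognizing its own errors
}

bool tryCommit(const Action& a, Capability level) {
    bool ok = (level == Capability::Stupid) ? programmerApproves(a)
                                            : selfCheckPasses(a);
    if (!ok) {
        std::cout << "recovered from error: " << a.description << "\n";
        return false;  // caught in time; nothing irreversible happened
    }
    std::cout << "committed: " << a.description << "\n";
    return true;
}

int main() {
    std::vector<Action> actions = {
        {"rewrite goal system", true},
        {"answer a question", false},
    };
    for (Capability level : {Capability::Stupid, Capability::Smart})
        for (const Action& a : actions)
            tryCommit(a, level);
}

The only point of the sketch is the invariant itself: at no capability level
does an erroneous action commit unchecked, so every error stays recoverable.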
And now, if you'll pardon me, I have to get back to writing the "Friendly AI"
part of my talk at the Foresight Gathering.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence