From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Nov 28 2007 - 19:15:05 MST
On Nov 28, 2007 8:50 PM, Harry Chesley <chesley@acm.org> wrote:
> Thomas McCabe wrote:
> >> First, to be useful, FAI needs to be bullet-proof, with no way for
> >> the AI to circumvent it.
> >
> > Error: You have assumed that the AI will actively be trying to
> > "circumvent" Friendliness; this is not accurate. See
> > http://www.intelligence.org/upload/CFAI//adversarial.html. In short, a
> > Friendly AI has the physical capacity to kill us, or to make a UFAI,
> > but it doesn't want to.
>
> Bad choice of words on my part. Circumvent implies intent. What I really
> meant was "no way for the FAI's friendliness goal to fail."
>
That's impossible, and not what we're trying for. See
http://www.vetta.org/?p=6, http://www.vetta.org/?p=6#comment-21.
> > That's what CEV is for, see http://www.intelligence.org/upload/CEV.html.
> > The idea is that you don't specify Friendliness content; you specify
> > the process to derive Friendliness content.
>
> Isn't that just as error-prone? Perhaps even more so, since it adds
> another layer.
>
It's easier than trying to program human morality directly. See
http://www.overcomingbias.com/2007/11/complex-wishes.html.
- Tom