Re: Revolting AI

From: Simon McClenahan (SMcClenahan@ATTBI.com)
Date: Wed Mar 06 2002 - 12:32:12 MST


----- Original Message -----
From: "Eugene Leitl" <Eugene.Leitl@lrz.uni-muenchen.de>

> Co-evolution of strategies implies unpredictability. Hence the
> requirements for docility and power are mutually exclusive. Tanstaafl,
> there's no power without the price, etc.
>
> Immediately following from this: if you've built a powerful AI, you've
> built a dangerous one.

I know you are skeptical of Eliezer's Friendly AI, but would you agree that
FAI becomes more plausible and desirable if we also had a strategy for Friendly
Human Intelligence, with the intent of reducing the danger of a powerful AI
evolving an unpredictable strategy?

cheers,
    Simon
