Re: Revolting AI

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Thu Mar 07 2002 - 03:23:34 MST


On Wed, 6 Mar 2002, Simon McClenahan wrote:

> I know you are skeptical of Eliezer's Friendly AI, but would you agree
> that FAI is more plausible and desirable if we had a strategy for
> Friendly Human Intelligence, with the intent to reduce the danger of a
> powerful AI evolving with an unpredictable strategy?

I'm not against AI per se. In the long run it's inevitable, and we'll have
to deal with it. I'm against trying to build a supercritical AI seed while
we're in a window of high vulnerability. As long as we have considerable
numbers of slowtime humans tightly coupled to the native ecosystem at the
bottom of this gravity well, we're extremely vulnerable.

There are a number of things that need to be done to make us less
vulnerable, some of them straightforward, some less so. Some of them are
protective (enhancing people), some offensive (addressing the scenarios
of emergence/creation of AI). I can give you a list.


