Re: Let's hear Eugene's ideas

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Oct 05 2000 - 03:46:21 MDT


James Rogers wrote:

>
> I personally believe that a controlled and reasonably safe deployment
> scheme is possible, and certainly preferable. And contrary to what some
> people will argue, I have not seen an argument that has convinced me that
> controlled growth of the AI is not feasible; it has to obey the same laws
> of physics and mathematics as everyone else.

It would help to incrementally incorporate more and more AI into our
work and world. To do this peaceably, you need to ensure that people
are taken care of, have a way to stay sanely occupied, and have their
needs met, regardless of the growing number of increasingly
sophisticated tasks done by AI/robotics. You need to show net benefits
to all, not just to a few.

> If our contingency
> technologies are not adequate at the time AI is created, put hard resource
> constraints on the AI until contingencies are in place. A constrained AI
> is still extraordinarily useful, even if operating below its potential.
> The very fact that a demonstrable AI technology exists (coupled with the
> early technology development capabilities of said AI) should allow one to
> directly access and/or leverage enough financial resources to start
> working on a comprehensive program of getting people off of our orbiting
> rock and hopefully outside of our local neighborhood. I would prefer to
> be observing from a very safe distance before unleashing an unconstrained
> AI upon the planet.
>

I would prefer that we take the many thousands of small steps necessary
before ever arriving at what is feared (labeled here "an unconstrained
AI"), with each step shown to be of net benefit to all and safe enough
to justify the next. AI as a field has been languishing since its
overhype in the 80s and 90s. This needs to end.

- samantha


