AI safeguards [Was: Re: Humor: helping Eliezer to fulfill his full potential]

From: Max More (max@maxmore.com)
Date: Mon Nov 06 2000 - 09:39:58 MST


At 06:24 AM 11/6/00, Brian wrote:

>P.S. we aren't the only ones out there... recently we have come across a
>competing AI project that we rate as having a significant (greater than zero)
>chance of "waking up"... and it is set for completion circa 2003 at the
>latest.
>I really think it would be good if there was an equivalent of Foresight for
>the AI area... Foresight for now is still so focused on nanotech they don't
>see the chance to expand into a more general Foresight organization covering
>all of Bill Joy's worries.

Brian, this is one of the very issues that I already have down as a
discussion area for the early-2001 ExI retreat. The topic as I've been
thinking of it is: "Machine intelligence: Threat or opportunity?" That
title is meant as an attractor for a range of possible discussions,
including competing scenarios for the developmental pathway and pace of AI
(such as the Moravec runaway scenario vs. the Kurzweil integration
scenario); the feasibility of safeguards and how they might be
implemented; and how to promote the convergence of humans and their
technology to reduce the probability of the runaway scenario.

Although I'm not quite ready yet to unveil the current stage of planning
for the 2001 Retreat, I'm letting this part slip out since it's pertinent to
your message. I can also say that we're aiming for February or March, and
the Retreat will probably be held in Las Vegas.

Onward!

Max
------------------------
Max More,
max@maxmore.com or more@extropy.org
www.maxmore.com
President, Extropy Institute. www.extropy.org
Senior Content Architect, ManyWorlds Consulting: www.manyworlds.com
