Re: Let's hear Eugene's ideas

From: Brian Atkins (brian@posthuman.com)
Date: Mon Oct 02 2000 - 18:42:43 MDT


James Rogers wrote:
>
> On Mon, 02 Oct 2000, Eliezer Yudkowsky wrote:
> > Looking over Eugene's posts, I begin to become confused. As far as I can
> > tell, Eugene thinks that seed AI (both evolutionary and nonevolutionary),
> > nanotechnology, and uploading will all inevitably end in disaster. I could be
> > wrong about Eugene's opinion on uploading, but as I recall Eugene said to
> > Molloy that the rapid self-enhancement loop means that one mind wins all even
> > in a multi-AI scenario, and presumably this statement applies to uploading as
> > well.
>
> While I don't speak for Eugene, I think I understand what his primary
> concern is.
>
> From what I have seen thrown about, most singularity/AI
> development plans appear to have extremely half-assed deployment schemes,
> to the point of being grossly negligent. Most of the researchers seem
> largely concerned with development rather than working on the problems of
> deployment. The pervasive "we'll just flip the switch and the universe
> as we know it will change" attitude is needlessly careless, and arguably
> not even a necessary outcome of such research; if that attitude persists,
> untold damage may result. As with any potentially nasty new technology,
> you have to run its deployment like a military operation, with a vast
> number of contingency plans at your disposal in case things do go wrong.
>
> I personally believe that a controlled and reasonably safe deployment
> scheme is possible, and certainly preferable. And contrary to what some
> people will argue, I have not seen an argument that has convinced me that
> controlled growth of the AI is not feasible; it has to obey the same laws
> of physics and mathematics as everyone else. If our contingency
> technologies are not adequate at the time AI is created, put hard resource
> constraints on the AI until contingencies are in place. A constrained AI
> is still extraordinarily useful, even if operating below its potential.
> The very fact that a demonstrable AI technology exists (coupled with the
> early technology development capabilities of said AI) should allow one to
> directly access and/or leverage enough financial resources to start
> working on a comprehensive program of getting people off our orbiting
> rock and hopefully outside of our local neighborhood. I would prefer to
> be observing from a very safe distance before unleashing an unconstrained
> AI upon the planet.
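
On the "hard resource constraints" point, the mechanism itself can be
mundane; here is a minimal sketch, assuming a Unix host and Python, with
the limit values and program name as placeholders rather than anything
anyone has actually proposed:

    import os
    import resource

    # Placeholder caps: 60 CPU-seconds and 512 MiB of address space.
    CPU_SECONDS = 60
    ADDRESS_SPACE = 512 * 1024 * 1024

    pid = os.fork()
    if pid == 0:
        # Child: apply hard kernel-enforced limits, then run the program.
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
        resource.setrlimit(resource.RLIMIT_AS, (ADDRESS_SPACE, ADDRESS_SPACE))
        os.execv("./constrained_program", ["./constrained_program"])
    else:
        # Parent: wait for the child to exit or be killed at a limit.
        os.waitpid(pid, 0)

Of course, limits like these only bound gross resource consumption; they
are nothing like a containment guarantee.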

Well, as I said to Eugene: look around at the reality of the next 20 years
(max). For one thing, there are likely to be no Turing Police tracking down
and containing all the AIs that the hackers and scientists out there will
dream up. For another, this whole "get into space" idea is completely
unlikely to happen within that time period. Do you have any /realistic/ ideas?

For the record, as Eliezer described, SIAI does not plan the kind of
half-assed deployment scheme you indirectly attribute to it.

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/

