From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 01 1999 - 12:19:16 MDT
Bryan Moss wrote:
>
> Billy Brown wrote:
>
> > I am, however, very concerned about the potential for a future in which AI
> > turns out to be easy, and the first example is built by some misguided
> > band of Asimov-law enthusiasts.
>
> No potential. The difference between where we are now in software and where
> we need to be to make AI happen is equivalent to the difference between
> having classical and quantum computers on our desktops.
The gaps are the same size, yes. Quantum computers 2015 CRNS, on the
desktop 2020 CRNS, first transhuman AIs 2020 CRNS. But there's a very
key difference, which is that to *accelerate* AI all you need is a
desktop computer, a compiler, and an excellent mind. To accelerate
quantum computing you need a big expensive laboratory and a slightly
less excellent mind. (No offense, it's just that building a mind is
tougher than *anything*.)
More to Billy Brown's point, if you're running the project on an
open-source basis, then any acceleration of your own effort accelerates
all the others.
> Fortunately AI is "easy" to invest in - it's possible for an individual
> to become involved without furnishing a laboratory. AI stands alone,
> of all the Singularity technologies, in that it is possible for
> volunteers to assist; it is also the technology with the most
> immediate payback for the first steps on the incremental path, and,
> given the proper program architecture, the technology where
> multiple (hundreds or thousands) of efforts most easily combine. I
> believe it is thus the proper intervention point for the primary
> effort.
-- [Excerpt from a work in progress.]
-- 
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:04:59 MST