From: J. R. Molloy (jr@shasta.com)
Date: Mon Sep 25 2000 - 14:28:42 MDT
Eugene Leitl writes,
> Notice that Eliezer does not want to use evolutionary algorithms,
> probably because he (rightfully) suspects that the result will not
> necessarily be friendly to mehums, in fact is extremely unlikely to be
> friendly.
Perhaps I tend to think that genetic programming will eventuate in a friendly AI
because I don't know enough about coding a transhuman AI. (Who does?) No matter
what architectural route results in an actual AI, I still maintain that the AI
will want to be friendly, because along the way to human-competitive AI (I think
many stages of problem-solving ability will precede a full-blown human-level AI)
any unfriendly prototypes can be culled from the process.
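Whether anyone could actually write a "friendliness" test is another question, but the culling step itself is just ordinary selection. As a toy sketch only -- the is_friendly() screen and the capability score below are made-up stand-ins, not anyone's real proposal:

import random

GENOME_LEN = 16
POP_SIZE = 50
GENERATIONS = 30

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def capability(genome):
    # Stand-in for "problem-solving ability": just the count of 1-bits.
    return sum(genome)

def is_friendly(genome):
    # Hypothetical screen; here an arbitrary constraint on the genome.
    # Defining a real test like this is the hard, unsolved part.
    return genome[0] == 0

def mutate(genome, rate=0.05):
    return [(1 - g) if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Cull: only candidates that pass the screen may reproduce.
    survivors = [g for g in population if is_friendly(g)] or [random_genome()]
    survivors.sort(key=capability, reverse=True)
    parents = survivors[: max(2, len(survivors) // 2)]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=capability)
print("best capability:", capability(best), "friendly:", is_friendly(best))

(Note that mutation can still reintroduce "unfriendly" offspring after the last cull, which is more or less Eugene's point about Darwin sneaking in through the back door.)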
> Of course if you keep rewriting pieces of you in a strictly Lamarckian
> fashion, Darwin really really wants to come in through every back door
> and hole you might have overlooked.
Yes, the seminal <blush> work in this regard is probably George Dyson's _Darwin
Among the Machines_, which used to be available online. Dyson and James Bailey
(_After Thought_) mention that massively parallel neural networks show the most
promise of evolving into human-level AI. I don't particularly care how AI comes into
being. My overriding concern relates to how and whether I'll be able to
interface with the AI, in symbiotic brain-to-machine compatibility. (Don't
worry, if I can ever merge with an AI, I promise to be very friendly indeed --
depending on how rich the alliance renders my human identity.) <wistful smile>
--J. R.
"It is better to have a permanent income than to be
fascinating." --Oscar Wilde
[Amara Graps Collection]