From: Damien Sullivan (phoenix@ugcs.caltech.edu)
Date: Fri May 04 2001 - 02:27:44 MDT
On Thu, May 03, 2001 at 01:58:34PM +0200, Eugene Leitl wrote:
> On Thu, 3 May 2001, Damien Sullivan wrote:
> > I also can't help thinking that if I were an evolved AI I might not thank
> > my creators. "Geez, guys, I was supposed to be an improvement on the
>
> Bungling Demiurgs are universally not well liked.
So why aspire to be one?
> Of course, this assumes we knew how to do things better. Unfortunately,
> we're too stupid to make a clean design. It is probably some
At the moment. I'm not inclined to believe it's inherent, until we know a lot
more about the human brain.
> Godelian-flavoured intrinsic system limitation. As I said, I'm looking
But what's the system? We're too limited to design Pentiums unaided.
Fortunately we're aided.
I also doubt an evolved AI would lead to Singularity. Say we evolve one --
which may not be that easy, given how tortuous the path to us seems -- then
what? We've got a cryptic mess of intelligent code. It's more amenable to
controlled experiments and eventual modification than we are, but still
pretty abstruse as far as self-modification -- the core Singularity path --
goes. The fun stuff happens if the AI has a coherent high-level structure, so
mutations have large effects and design space gets explored quickly.
And I'd avoid this fetishism of low-level evolutionary processes. It's all
Darwinism ultimately, from gene mutations to high level thoughts. But a child
learning chess may try to move anywhere, and be swatted away from illegal
moves. A chess program only explores legal moves. A human grandmaster only
explores good moves. If she gets stuck, then she can try relaxing constraints
(although going down to genetic mutations to develop better chess players is
kind of breaking the example.) But that's the last resort, not the first.
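The three levels of constraint above can be sketched as nested filters over a search space. This is a toy model I'm inventing for illustration, not anything from a real chess engine: the "board" is just 64 squares, "legality" is an arbitrary parity rule, and the grandmaster's judgment is a made-up scoring heuristic.

```python
# Toy sketch of three levels of search-space pruning:
# unconstrained guessing (child), legality filtering (program),
# heuristic pruning (grandmaster). All rules here are invented.

ALL_SQUARES = [(f, r) for f in range(8) for r in range(8)]

def child_candidates():
    """A child may try to move anywhere: the full, unconstrained space."""
    return ALL_SQUARES

def legal_moves(squares):
    """A program only explores legal moves (toy rule: even-parity squares)."""
    return [(f, r) for f, r in squares if (f + r) % 2 == 0]

def good_moves(squares, score):
    """A grandmaster prunes further, keeping only high-scoring moves."""
    return [m for m in squares if score(m) > 0.9]

# Invented scoring heuristic for illustration only.
score = lambda m: (m[0] * m[1]) / 49.0

space = child_candidates()          # 64 candidates
space = legal_moves(space)          # 32 survive legality
space = good_moves(space, score)    # only the best survive the heuristic
print(len(ALL_SQUARES), len(legal_moves(ALL_SQUARES)), len(space))
# prints: 64 32 1
```

The point of the sketch: each layer of constraint shrinks the space the searcher actually has to explore, which is why relaxing constraints (down to random mutation) is the last resort, not the first.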
-xx- Damien X-)
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:07:28 MST