From: Brian Phillips (deepbluehalo@earthlink.net)
Date: Sat Jun 30 2001 - 19:06:24 MDT
After rereading part of "General Intelligence and Seed AI,"
I thought better of my comments and would like to amend
them as follows. It sounds like I was being obtuse.
"1.1: Seed AI
It is probably impossible to write an AI in immediate possession
of human-equivalent abilities in every field; transhuman abilities
even more so, since there's no working model. The task is not
to build an AI with some astronomical level of intelligence; the
task is building an AI which is capable of improving itself, of
understanding and rewriting its own source code. The task is
not to build a mighty oak tree, but a humble seed.
As the AI rewrites itself, it moves along a trajectory of intelligence.
The task is not to build an AI at some specific point on the
trajectory, but to ensure that the trajectory is open-ended,
reaching human equivalence and transcending it. Smarter
and smarter AIs become better and better at rewriting their
own code and making themselves even smarter. When
writing a seed AI, it's not just what the AI can do now, but
what it will be able to do later. And the problem isn't just
writing good code, it's writing code that the seed AI can
understand, since the eventual goal is for it to rewrite its
own assembly language. (1)"
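Just to make the "trajectory" idea concrete for myself,
here's a toy sketch (my own illustration, not anything from
the paper; the capability numbers and the 10% gain per
rewrite are made-up assumptions):

    # Toy model of a self-improvement trajectory. "capability"
    # stands in for intelligence; each rewrite improves the system
    # by an amount that depends on its current capability, which
    # is what makes the trajectory open-ended rather than fixed.

    def rewrite_self(capability):
        # A smarter system finds bigger improvements. The 10%
        # gain per pass is an arbitrary assumption here.
        return capability + 0.10 * capability

    capability = 1.0            # the humble seed
    human_equivalence = 100.0   # arbitrary benchmark
    generation = 0
    while capability < human_equivalence:
        capability = rewrite_self(capability)
        generation += 1

    print("crossed human equivalence at generation", generation)

The interesting property is the shape of the curve, not the
starting point.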
It's not valid to assume that a seed AI wouldn't take
advantage of all data known about human sentience
(i.e., approximations, however inaccurate, of a "working
model") to rewrite its own code. On the contrary, I can't
think of any reason not to use the Human Brain Project (sic)
or anything else we know about the "prototype" as part of
the evolutionary process. I guess I fell into that old
"us vs. it" thought trap. But "it" would be "us" if it
worked. Just the larger set subsuming the smaller.
hmmm,
Brian