From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Nov 24 1999 - 19:51:43 MST
Delvieron@aol.com wrote:
>
> This is not what I envision as a non-upgrading AI. First, a non-upgrading AI
> would have little or no conscious control over its own programming. It could
> respond to environmental stimuli, formulate behaviors based on its
> motivational parameters, and implement those behaviors. This is basically
> what humans do. Technically, such an AI could possibly learn about itself,
> if it were creative enough, figure out a way to improve itself, then find
> some tools and do it (if it could remain active while making modifications).
> This would
> be no different than you or me. However, it might never do so if we program
> it to have an aversion to consciously tinkering with its internal functions
> except for repairs. This would be, in my estimation, a non-upgrading AI.

Human brains have millions of years of evolution behind them. The only
thing that makes it remotely possible to match that immense evolutionary
investment with a few years of programming is the recursive-redesign
capability of seed AI, reinvesting the dividends of intelligence. I
guarantee you that the first artificial intelligence smart enough to
matter will be a seed AI, because doing it without seed AI will take at
least another twenty years and hundreds or thousands of times as much labor.
--
sentience@pobox.com      Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way