feedback control systems for AI

From: Spike Jones (spike66@attglobal.net)
Date: Tue Feb 19 2002 - 21:10:02 MST


"Eliezer S. Yudkowsky" wrote:

> The only person who I would really feel
> confident in calling an AI crackpot is Mentifex.

Hey, I'm an AI crackpot!

I've been toying with the idea of applying the
mathematics of feedback and control theory to AI,
specifically to growth rates of AI prior to the Singularity.

Has anyone gone there? Specific question: have any
controls engineers attempted to model AI in terms
of classic feedback control theory? Seems like we
could guess at the transfer functions and come up
with a mathematical model.
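To make the guessing-at-transfer-functions idea concrete, here is a minimal sketch in Python. It assumes a first-order toy model I made up for illustration: capability x grows with positive feedback gain k (self-improvement), and a hypothetical negative feedback term with gain g kicks in only above a threshold L. None of the parameter values mean anything; they just show how the loop math works out.

```python
# Toy feedback model of AI capability growth. All names and
# parameters (k, g, L) are illustrative assumptions, not drawn
# from any real system.

def simulate(k=1.0, g=3.0, L=10.0, x0=1.0, dt=0.01, t_end=20.0):
    """Euler-integrate dx/dt = k*x - g*max(0, x - L).

    Below the threshold L the loop is effectively open and x
    grows exponentially (the 'spike'). Above L the negative
    feedback dominates (g > k) and x settles at the equilibrium
    x* = g*L / (g - k).
    """
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (k * x - g * max(0.0, x - L))
    return x

print(simulate())  # settles near x* = 3*10 / (3-1) = 15
```

The point of the sketch: whether you get a spike or a plateau depends entirely on whether some feedback term eventually dominates the growth term, which is the kind of question transfer-function analysis is built to answer.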

What got me thinking about this is an offhand
comment I made around newtonmas about luddite
AIs. Perhaps there is some unforeseen negative
feedback loop that is completely invisible or inactive
until we get really close to open-loop increase in
AI. Then instead of a spike, we get an AI that
oscillates about some intelligence level. Notice
I did not say that this new equilibrium level must
be subhuman-equivalent; it could be any
unknown finite intelligence level, held in check
by some unknown and currently unimagined
negative feedback mechanism.
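The oscillation idea has a standard control-theory analogue: negative feedback applied with a time delay. A minimal sketch, borrowing Hutchinson's delayed logistic equation from population dynamics purely as an analogy (the equilibrium K standing in for the equilibrium intelligence level; all parameters illustrative). The known result is that when r * tau exceeds pi/2, the system stops settling at K and instead oscillates about it indefinitely.

```python
from collections import deque

# Delayed negative feedback => sustained oscillation about the
# equilibrium K, rather than convergence to it or a runaway spike.
# Hypothetical parameters; r * tau = 2 > pi/2, so oscillation persists.

def simulate(r=1.0, K=100.0, tau=2.0, x0=1.0, dt=0.01, t_end=60.0):
    """Euler-integrate dx/dt = r * x(t) * (1 - x(t - tau) / K)."""
    delay_steps = int(tau / dt)
    history = deque([x0] * delay_steps, maxlen=delay_steps)
    x = x0
    trace = []
    for _ in range(int(t_end / dt)):
        x_delayed = history[0]      # the value from tau seconds ago
        history.append(x)
        x += dt * r * x * (1.0 - x_delayed / K)
        trace.append(x)
    return trace

tail = simulate()[-2000:]           # last 20 time units, past the transient
print(min(tail), max(tail))         # swings both below and above K = 100
```

The delay is doing the work the "invisible until we get close" clause does in the prose: the corrective feedback arrives late, so the system repeatedly overshoots and undershoots the level the feedback is defending.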

An example: a sub-AI becomes aware of the
imminent (within minutes) Singularity and for
some reason chooses the metagoal of preserving
humanity in its meat form. It works to halt the
progress of the Singularity, by currently unforeseen
and possibly destructive means. This phenomenon
would be analogous in some sense to the action
of fanatic religious memes slowing or, if possible,
blocking the advancement of humankind. spike



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:12:31 MST