From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Wed Dec 07 2005 - 14:28:10 MST
Yep, I really like the idea of an AGI that merely offers suggestions and leaves the measured actions up to people. A few billion man-years gained by an active AGI going VOOOOMMM and remaking the world is not worth even an infinitesimal increase in extinction risk if the VOOOOMMM goes horribly wrong. The PAI could still give us faulty advice, but at least we humans would have a chance to analyze the obvious implications of implementing the advice and reject any unclear, potentially trojan advice.
P K <kpete1@hotmail.com> wrote:
7) PASSIVE AI (PAI):
Proposed solution: Since AI can be so dangerous, why not make vim incapable
of "acting" and only capable of "thinking"?
First of all, PAI should not be confused with AI boxing. A boxed AI IS
capable of acting. Vis actions are simply restricted by a digital cage.
Assuming ve wants to escape, ve probably has a very good chance, since by
definition ve is smarter than vis jailers. So, from the jailers' point
of view, the cage is a crappy security measure. In fact, this is the wrong
attitude when designing AI. The AI shouldn't be the enemy. But I digress...
The kind of pacification I'm talking about would, by analogy, be like the
jailers removing the part of the prisoner's brain responsible for his
will. The prisoner ceases to be a prisoner because he doesn't WANT to
escape (or anything else, for that matter), and the jailers cease to be
jailers because they don't have to keep him captive. This analogy is getting
pretty gruesome, so let's get back to AI. (Building a mind from scratch
without a piece is not the same as removing a part from a human's brain,
so we won't feel uncomfortable on ethical grounds.)
Let's say you build an AI without a goal system. What working parts will
that AI have? It would have an inference engine (probably Bayesian), a
memory, etc. Basically, it would have all the parts that PREDICT and help
predict (i.e., S1 -> S2). Now you have an empty slot where the goal system
should be. You set up your program such that you can act as a temporary goal
system for the AI by manually feeding it input.
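To make this concrete, here is a rough Python sketch of what such a goal-less,
query-only engine might look like. (The names PassiveEngine, observe and query
are my own illustration, not a real design; the actual inference machinery
would obviously be far more complicated than a list of facts.)

# A toy "passive" engine: it stores whatever the human feeds it and
# answers queries, but it has no goal system and never initiates
# anything on its own.
class PassiveEngine:
    def __init__(self):
        self.facts = []  # memory: everything the human has fed in

    def observe(self, fact):
        # Called only when the human chooses to supply data.
        self.facts.append(fact)

    def query(self, question):
        # Answer from stored facts, or report missing parameters.
        # The engine never acts; it only returns an answer.
        matches = [f for f in self.facts if question in f]
        return matches or "Insufficient parameters."

# The human is the temporary goal system: nothing happens unless
# he decides to call observe() or query().
engine = PassiveEngine()
print(engine.query("X"))        # -> Insufficient parameters.
engine.observe("X*X = 4 implies X = 2 or X = -2")
print(engine.query("X"))        # -> the stored fact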
Are humans too slow to act as manual goal systems? Probably slower than the
computer, and some things will be impossible to do this way, but it is
still very useful. I will illustrate this with examples:
Human: What is X?
AI: Insufficient parameters. Equation data required.
Human: X*X = 4
Human: What is X?
AI: X=2 or X=-2
Human: Is global warming real?
AI: Insufficient parameters. Weather data and satellite imagery required.
Human: [input]
Human: Is global warming real?
AI: :-p
Human: Given universe state S1, what is the next most likely state?
AI: S2
Human: What are the required conditions for S2 to occur?
AI: S1
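To make the S1/S2 exchange concrete, here is a toy next-state predictor in
the same spirit. (Again, the names and the frequency-counting scheme are my
own illustration; a real engine would presumably be a proper Bayesian model,
not a lookup table.)

from collections import Counter, defaultdict

# Learns transition frequencies from observed state history and, when
# queried, reports the most likely successor state. Purely illustrative.
class TransitionPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, history):
        # Feed a sequence of states, e.g. ["S1", "S2", "S1", "S2"].
        for s, s_next in zip(history, history[1:]):
            self.transitions[s][s_next] += 1

    def most_likely_next(self, state):
        # Human: given universe state S1, what is the next most likely state?
        if state not in self.transitions:
            return "Insufficient parameters. State history required."
        return self.transitions[state].most_common(1)[0][0]

    def preconditions_for(self, target):
        # Human: what conditions were observed to precede `target`?
        return [s for s, nxt in self.transitions.items() if target in nxt]

predictor = TransitionPredictor()
predictor.observe(["S1", "S2", "S1", "S2", "S1", "S3"])
print(predictor.most_likely_next("S1"))   # -> S2
print(predictor.preconditions_for("S2"))  # -> ['S1']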
As you can see, the "predicting" part can solve for things given parameters.
However, it does not choose the question or what actions to take. Moving
along...
Human: What is the best goal system?
AI: Insufficient parameters. Define "best".
<the human supplies a definition; the AI asks follow-up questions
and points out inconsistencies>
As you can see, the "predicting" part can be used to get the goal system,
and unlike humans, the AI won't make any mistakes and will notice all the
inconsistencies. Also, it is unaffected by human biases. An AI doesn't need
a goal system to do these things. It reacts to input automatically, the same
way your leg reacts when a doctor hits it with a hammer.
Note: I do not claim to know how an AI would answer in these examples, since
I am not superintelligent, nor do I claim that the interface would be exactly
like this (a console chat).