From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Nov 30 2005 - 00:53:54 MST
On Wed, Nov 30, 2005 at 04:44:34AM +0000, H C wrote:
> >Quick tip #3: Search the archives/google for "ai box".
>
> If you are going to suggest readings, then I suggest you read everything on
> the Singularity Institute website.
I have, actually.
> As far as "AI boxes" go, yes, the answer is dreadfully obvious.
> However, perhaps in the future you might give me a little more
> credit, because on this occasion I wasn't referring to the classic
> problem.
>
> In this case, the programmer is capable of directly accessing and
> observing the AI's unconscious motivations, conscious intentions,
> thoughts, plans, etc., and is essentially left in complete control
> of any real-world effectual action. Before any action can be carried
> out, the AI must submit that action, in algorithmic form (along with
> comments), to a panel of human judges.
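[As a rough illustration only: a minimal Python sketch of the approval gate described above, assuming hypothetical names such as ProposedAction and cautious_judge. Nothing here is from the original proposal beyond "submit the action, in algorithmic form with comments, to a panel of human judges who decide whether it runs".]

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ProposedAction:
        algorithm: str   # the action in algorithmic form
        comments: str    # the AI's accompanying explanation

    def panel_approves(action: ProposedAction,
                       judges: List[Callable[[ProposedAction], bool]]) -> bool:
        """The action runs only if every human judge approves it."""
        return all(judge(action) for judge in judges)

    # Example judge: vetoes anything that mentions touching the network.
    def cautious_judge(action: ProposedAction) -> bool:
        return "network" not in action.algorithm.lower()

    proposal = ProposedAction(algorithm="sort local dataset",
                              comments="needed for the next planning step")
    if panel_approves(proposal, [cautious_judge]):
        print("approved for execution")
    else:
        print("vetoed by the panel")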
As far as I can see, one of two things happens: either the AI gets
out anyway, or the process of it doing anything useful is so
incredibly slow that we might as well not have bothered.
In either case, I don't see a win; I want something free to
recursively improve quickly.
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/