From: Thomas McCabe (pphysics141@gmail.com)
Date: Mon Nov 19 2007 - 16:00:35 MST
On Nov 19, 2007 4:48 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> Roland has some ideas on how to make a slave:
>
> > To avoid any possibility of danger, we program
> > the OAI to not perform any actions other than
> > answering with text and diagrams (other media
> > like sound and video would be a possibility too).
> > In essence what we would have is a glorified
> > calculator. I think this avoids any dangers from
> > the AI following orders literally with
> > unintended consequences.
>
> If you are like most people, there have been times in your life when a
> mere human being has talked you into doing something that you now
> understand to be very stupid. And this AI will be far smarter, more
> interesting, more likable, and just more goddamn charming than any human
> being you have ever met or ever will meet; Mr. AI will have charisma Up
> the Wahzoo, he will understand your physiology, what makes you tick,
> better than you understand yourself. I estimate it would take the AI
> about 45 seconds to trick or sweet-talk you (or me) into doing exactly
> what it wants you (or me) to do.
>
> John K Clark
(sigh) The point of FAI theory isn't to figure out what the AGI
*should* do. It's to get the AGI to do anything at all besides random
destruction, and to do it predictably under recursive
self-improvement. If we can program an AGI to reliably enhance the
jumping abilities of lizards, and to continue following this goal even
when given superintelligence, the most difficult part of the problem
will already have been solved.
- Tom