From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Fri Aug 05 2005 - 14:58:21 MDT
Carl Shulman <cshulman@fas.harvard.edu> wrote:
>> So I will argue that there are no plausible assumptions in which the
>> strategy of "create a superintelligent AI and keep it in a box" is
>> safe _and_ useful _and_ feasible. Therefore, however confident you are
>> of your ability to keep a box sealed, it doesn't make sense to set out
>> to create a superintelligent AI unless you have a plan for making sure
>> it will be Friendly.
>The box is an additional line of defense, in case one was overconfident about
>that 'surefire Friendliness plan.' Here's my scenario:
>3. Ask the AI for techniques to verify that the plan to create Friendliness was
>guaranteed to work: advances in computer science, designs for nootropic drugs
>to enhance programmer intelligence, humanly comprehensible analyses of the
>design etc.
You can't appeal to the AGI itself in the process of assuring its Friendliness. Nootropic drugs might lead to "unfriendly" humans, I'm not sure.

I do think there are scenarios where having an effective box might be useful. If thousands of personnel are staffed on a crash "Nanhattan" Molecular Manufacturing programme, embedded spies would lead to mirror programmes around the world. In this scenario, some Nanhattans might be judged not to be suicidal, and the odds of simultaneous achievement of industrial MM capacity (and an ensuing suicidal arms race) by rival nations would not be known until after the fact. In this case, it would be nice to have an AGI with the potential for Friendliness ready to go, in case the baddies win the MM race or in case WWIII seems inevitable.

I would be curious to know, given the gaps in our physics Ben Goertzel alluded to earlier, whether a warehouse of AGI computers could be rendered inert with many sensors and a kill switch triggered at the first sign of CPU temperature variation or power flux or whatever.
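To make that kill-switch idea concrete, here is a minimal watchdog sketch in Python. The sensor reads, the baseline figures, the 5% tolerance and the cut_power() hook are all invented placeholders standing in for whatever real warehouse instrumentation would look like; this is an illustration of the trip-wire concept, not an actual containment design.

import random
import time

# Assumed nominal operating points -- purely illustrative numbers.
TEMP_BASELINE_C = 45.0
POWER_BASELINE_W = 900.0
TOLERANCE = 0.05  # trip if either reading drifts more than 5% from baseline


def read_cpu_temp():
    # Stand-in for a real hardware temperature sensor (degrees C).
    return 45.0 + random.gauss(0, 0.3)


def read_power_draw():
    # Stand-in for a real rack power meter (watts).
    return 900.0 + random.gauss(0, 5.0)


def cut_power():
    # Stand-in for the physical kill switch (e.g. tripping a relay).
    print("Anomaly detected -- cutting power to the warehouse.")


def within(value, baseline, tol):
    return abs(value - baseline) <= baseline * tol


def watchdog():
    # Poll the sensors and render the hardware inert on the first anomaly.
    while True:
        if not (within(read_cpu_temp(), TEMP_BASELINE_C, TOLERANCE)
                and within(read_power_draw(), POWER_BASELINE_W, TOLERANCE)):
            cut_power()
            return
        time.sleep(0.1)


if __name__ == "__main__":
    watchdog()

The only design point the sketch captures is the one stated above: shut everything down on any deviation from baseline, rather than trying to interpret what the machine is doing.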