Re: Thinking about the future...

From: Nicholas Bostrom <n.bostrom@lse.ac.uk>
Date: Wed Sep 04 1996 - 10:09:44 MDT


          Robin Hanson wrote:
>But realistically, any one AI probably won't be too far
>ahead of any other AI, so they can police each other.
          
          The issues are complex. To begin with, the claim that it is
          improbable that one AI would be far ahead of any other AI
          can be challenged. Suppose there are possible breakthroughs
          to be made in computer technology. Once an AI became
          sufficiently intelligent, it could think out a radical
          improvement to its own design, which would make it more
          intelligent, allowing it to accelerate further; and so on.
          (I think this is Dan Clemmensen's view.) For example, the AI
          might be the first to make efficient use of nanotechnology.
          If nanotech has the potential Drexler thinks it has, access
          to a nanotech laboratory would be all the AI would need in
          order to take off. The contest would be over before anyone
          except the AI had realised it had begun.
          
          In the slower scenario I depicted in the last letter,
          however, it is unlikely that the AI would be alone. Its
          discoveries would be diffused and employed to build other
          AIs.
          
          But even in this case it is doubtful that these other AIs
          would be effective protection against any malicious
          intentions of the first AI. They would, in effect, be its
          offspring, and could therefore be expected to inherit some
          of its basic properties, even its values. If the first AI is
          bad, it might influence the development of computers so that
          a specific value set would come to prevail among subsequent
          machines, allowing them to form a tacit consensus on the
          desirability of finally getting rid of the human pest.
          
          It is also worth considering what would make the grandpa AI
          bad in the first place:
          
          1) Accident, misprogramming.
          
          2) Constructed by a bad group of humans, for military or
          commercial purposes. This group is presumably very powerful
          if they are the first to build an AI. The success of this
          enterprise will make them even more powerful. Thus the
          values present in the group (community, company, state,
          culture) that makes the first AI may well be the value set
          which is programmed into subsequent AIs as well.
          
          3) Moral convergence. Sufficiently intelligent beings might
          tend to converge in what they value, possibly because values
          are in some relevant sense "objective". They just see the
          truth that to annihilate humans is for the best. (In this
          case we should perhaps let it happen?)
          
          Notice that in cases (2) and (3) policing would not work even
          in the slow scenario, and even if the first AI itself had
          little influence on the construction of other AIs.
          
          Nicholas Bostrom n.bostrom@lse.ac.uk


