At 05:35 AM 11/8/2001, you wrote:
>Is there a name for dangers or catastrophes which are brought about
>as a result of an attempt to anticipate and defend against them?
I don't know of a specific term for this. Self-fulfilling fears?
>I've thought of two examples of this, specific to 'ultratechnology'
>- one involving nanotech, the other AI.
>
>The nano example is simple: to be able to defend against the full
>range of possible hostile replicators, you need to explore that
>possibility space, and doing so only via simulation may simply
>be impractical (for mere humans, anyway). So one needs to conduct
>actual experiments in sealed environments - but once you start
>making the things, there's the danger that they'll get out somehow.
Yes, although the problem of constructing a sealed lab seems much easier
than the other problems involved in setting up a global nanotech immune
system. So I don't think one should worry too much about this possibility
at this stage.
>The AI example: this time one wants to be able to defend against
>the full range of possible hostile minds. In this case, making a
>simulation is making the thing itself, so if you must do so
>(rather than relying on theory to tell you, a priori, about a
>particular possible mind), it's important that it's trapped high
>in a tower of nested virtual worlds, rather than running at
>the physical 'ground level'. But as above, once the code for such
>an entity exists, it can in principle be implemented at ground
>level, which would give it freedom to act in the real world.
Here we are helped by the fact that a superintelligence would assist us in
keeping the world safe from such hostile minds.
Nick Bostrom
Department of Philosophy, Yale University
New Haven, CT 06520 | Phone: (203) 432-1663 | Fax: (203) 432-7950
Homepage: http://www.nickbostrom.com