Billy Brown wrote,
>IMO, an 'instant singularity' precipitated by the sudden appearance of an SI
>is not very likely. However, the factors that determine whether it will
>happen are not under human control. It depends on the answers to a number
>of questions about natural law (like: How hard is it to increase human
>intelligence?). If the answers turn out to be the wrong ones, the first AI
>to pass a certain minimal intelligence threshold rapidly becomes an SI. If
>they don't, we have nothing to fear. The only thing we can do that makes
>much difference is to make sure our seed AIs are sane, in case one of them
>actually works.