From: J. R. Molloy (jr@shasta.com)
Date: Tue Jan 26 1999 - 16:20:37 MST
Billy Brown wrote,
>IMO, an 'instant singularity' precipitated by the sudden appearance of an
>SI is not very likely. However, the factors that determine whether it will
>happen are not under human control. It depends on the answers to a number
>of questions about natural law (like: How hard is it to increase human
>intelligence?). If the answers turn out to be the wrong ones, the first AI
>to pass a certain minimal intelligence threshold rapidly becomes an SI. If
>they don't, we have nothing to fear. The only thing we can do that makes
>much difference is to make sure our seed AIs are sane, in case one of them
>actually works.
A sane seed AI presents more of a threat to humanity than an insane AI does,
because a sane AI would function at extreme variance with the insane human
cultures that prevail on Earth. No joke.
Cheers,
J. R.