Emergent Properties of SIs (was Re: Singularity: AI Morality)

From: Doug Bailey (Doug.Bailey@ey.com)
Date: Wed Dec 09 1998 - 14:03:57 MST


Eliezer Yudkowsky wrote:

> As for pulling this trick on genuine SIs:
>
> This would ENSURE that at least one of the SIs went nuts,
> broke out of your little sandbox, and stomped on the planet!
> This multiplies the risk factor by a hundred times for no
> conceivable benefit! I would rather have three million
> lines of Asimov Laws written in COBOL than run evolutionary
> simulations! No matter how badly you screw up ONE mind,
> there's a good chance it will shake it off and go sane!

The greatest potential for benefit and/or cost to us from the
emergence of SI (whether weak or strong) lies in the properties that
will emerge from intelligent, conscious systems far more complex than
our own minds. The only two possibilities I can think of off the top
of my head are strong superintelligence and some form of
superconsciousness. However, there are most probably emergent
properties that are utterly beyond our minds' capacity to predict
or grasp.

While speculating in this manner poses a conundrum, it is hard to
substantiate that intelligence or consciousness would have been
predictable properties of increasingly complex self-replicating
systems. The conundrum is that nothing could have attempted such a
prediction before these properties emerged without tainting the
predictive process itself.

Doug Bailey
doug.bailey@ey.com
nanotech@cwix.com


