Samael wrote:
> >The singularity only takes one AI that decides to extend itself rather than
Same problem. This only works if all AIs are inhibited from extending their
"strong goals". This is very hard to do using traditional computers.
>
> From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
> >Samael wrote:
> >>
> >> But why would it _want_ to do anything?
> >>
> >> What's to stop it reaching the conclusion 'Life is pointless. There is no
> >> meaning anywhere' and just turning itself off?
> >>
> >Nothing stops any particular AI from deciding to do this. However, this
> >doesn't stop the singularity unless every single AI ends up self-terminating.
> >
> >If you are counting on AI self-termination to stop the Singularity, you'll
> >have to explain why it affects every single AI.
>
> I don't expect it will, because I expect the AIs to be programmed with
> strong goals that they will not question.