From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Mon Dec 14 1998 - 16:52:15 MST
Samael wrote:
>
> From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
> >Samael wrote:
> >>
> >> But why would it _want_ to do anything?
> >>
> >> What's to stop it reaching the conclusion 'Life is pointless. There is no
> >> meaning anywhere' and just turning itself off?
> >>
> >Nothing stops any particular AI from deciding to do this. However, this
> >doesn't stop the singularity unless it happens to every AI.
> >The singularity only takes one AI that decides to extend itself rather than
> >terminating.
> >
> >If you are counting on AI self-termination to stop the Singularity, you'll
> >have to explain why it affects every single AI.
>
> I don't expect it will, because I expect the AIs to be programmed with
> strong goals that they will not think about.
Same problem. This only works if all AIs are inhibited from extending their
"strong goals", and this is very hard to do using traditional computers. Essentially,
you will either permit the AI to program itself, or not. I feel that most AI
researchers will be tempted to permit the AI to program itself. Only one such
researcher needs to do this to break your containment system. Do you feel that
a self-extending AI must intrinsically have strong and un-self-modifiable goals
in order to exist, do you feel that all AI researchers will correctly implement this
feature, or do you have another reason?