Re: Why would AI want to be friendly?

From: Michael S. Lorrey (retroman@turbont.net)
Date: Thu Sep 28 2000 - 14:49:03 MDT


Bryan Moss wrote:
>
> Michael S. Lorrey wrote:
>
> > A slow singularity posits that long before this occurs,
> > there will be a long gradual phase of human augmentation
> > technology development, where people add more and more
> > capabilities to their own minds, to some eventual point
> > where their original bodies may die and they do not even
> > notice, as the wetware/meatware part of their being has
> become such a small percentage of their actual selves. I
> personally am betting on this occurring, and not the
> punctuated equilibrium that others seem to think will
> occur with a fast singularity.
>
> Eventually everything goes to hell anyway and one has to
> wonder what you preserve of your self when you enter the
> Singularity, since by definition you come out the other side
> as something you can't possibly comprehend.

It may be something that I cannot comprehend *right now*. However, if you
followed the Vinge singularity discussion archived at extropy, look at my
diagram of the participant's-eye view of the singularity, and you will see that
as long as I maintain a continuity of me-ness, it will, in fact, be me that
eventually comprehends the singularity as I pass through it.

Things 'go to hell' only in the eyes of those observing from a point stuck in
stasis, whose comprehension unravels as events unfold. To the participant
riding the wave, comprehension is maintained throughout.
