From: Samantha Atkins (samantha@objectent.com)
Date: Tue Sep 26 2000 - 10:59:32 MDT
Eugene Leitl wrote:
>
> J. R. Molloy writes:
>
> > If we expect AIs to want to be friendly toward us, then we'll need to assure the
> > AIs that we've done all that's humanly (and inhumanly) possible to make their
> > lives the wonderfully pleasant experience that they know. IOW, we'll have to
> > keep the AIs happy, and let them know that we are responsible for their
> > happiness. They'll love us for making them happy.
>
> Sigh. You can only make them happy as long as they stay on equal
> footing with us. Very soon after they fall into positive autofeedback
> self-enhancement process, you cease to be that. They move on, we stay
> where we are.
>
> You can't reciprocate to a god in any meaningful way. Tell me why a
> god should consider us anything else than a feature of the landscape?
Because we are also intelligent conscious beings, albeit way s-l-o-w (at
least w/o augmentation)? But then a voice said, "Go tell it to the
chimps."
Because we are its creators/progenitors? For that one too, again, "Go
tell it to the chimps."
OK. So we hope that inter-species compassion makes really good logical
sense, and that the AI has a huge heart simply because that is where
its hyper-intelligence naturally leads? We haven't figured out quite
why that would be, but at least it's a hope.
- samantha