From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 23 2002 - 13:57:56 MDT
Eugen Leitl wrote:
>
> However, we're not talking about today, nor about people. For all
> practical purposes, my motivations 3 subjective MYears downstream are
> completely inscrutable. Given evolutionary pressure and
> Darwinian/Lamarckian drift, my motivations become completely inscrutable
> after a much shorter period. You're not only assuming that your
> motivations survive unchanged, but that you're actually able to protect
> humanity and its associated ecology against players who don't give a
> flying fuck about the matter.
How can it be moral for you to sympathize with unmodified humans now, yet
immoral after you transcend? Which one of you is being irrational?
> Oh, and not to forget, the principal players in your scenario are nonhuman
> to start with.
A mind's a mind. If you can configure a human for altruism, it is
theoretically possible to do the same with an AI. Anyone willing to challenge
evolution as a constructor of intelligence should be ready to do the same
for evolution as a constructor of altruism.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence