From: Eugen Leitl (eugen@leitl.org)
Date: Mon Jun 24 2002 - 03:31:26 MDT
On Sun, 23 Jun 2002, Eliezer S. Yudkowsky wrote:
> How can it be moral for you to sympathize with unmodified humans now, yet
> immoral after you transcend? Which one of you is being irrational?
I do not understand this sentence. Do you somehow imply that morality is
viewpoint-invariant?
What mechanism would assert conservation of a specific frame of morality
over subjective geological time scales, in the face of speciation and
radiation driven by Lamarckian/Darwinian evolution?
I wish there were one, but I'm not aware of any.
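To make that concrete, here is a toy sketch (all names and numbers are mine, purely illustrative, not a model of anything real): a population whose fitness depends on a "skill" trait but not on a separate "moral weight". With nothing coupling selection to that weight, it simply drifts under mutation; there is no conserving mechanism.

import random

# Toy illustration: agents carry a moral weight w and a skill trait s.
# Selection acts only on s, so nothing conserves w across generations.
POP, GENERATIONS, MUTATION_SD = 200, 500, 0.02

def step(population):
    # Fitness is independent of w; only s counts for selection.
    ranked = sorted(population, key=lambda a: a["s"], reverse=True)
    parents = ranked[: POP // 2]
    children = []
    for _ in range(POP):
        p = random.choice(parents)
        children.append({
            "w": min(1.0, max(0.0, p["w"] + random.gauss(0, MUTATION_SD))),
            "s": p["s"] + random.gauss(0, MUTATION_SD),
        })
    return children

pop = [{"w": 0.9, "s": 0.0} for _ in range(POP)]  # start near-uniformly "altruistic"
for g in range(GENERATIONS):
    pop = step(pop)
    if g % 100 == 0:
        mean_w = sum(a["w"] for a in pop) / POP
        print(f"gen {g:4d}  mean moral weight {mean_w:.3f}")

# The mean weight wanders away from its initial value: with no selective
# pressure tied to w, nothing holds the original frame in place.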
> Mind's a mind. If you can configure a human for altruism, it is
No, a mind is most assuredly not just "a mind". It's just that we're used
to people and animals. We haven't met anything truly alien yet, but an AI
unburdened by billions of years of evolution is something truly alien to
this world.
> theoretically possible to do the same with AI. Anyone willing to challenge
> evolution as a constructor of intelligence should be ready to do the same
> for evolution as a constructor of altruism.
We have no evidence that higher fitness is associated with
symmetric-expectation transactions between asymmetrical players. I see
current human behaviour as a frozen-system artifact.
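Again a toy sketch, with payoff numbers I have simply made up for illustration: a one-shot game between a "strong" and a "weak" player. Once the players' leverage differs, the strong player's best reply is defection either way, so higher payoff is not tied to the symmetric-expectation transaction.

# Toy illustration: payoffs to (strong, weak) for each strategy pair.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # symmetric-expectation transaction
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (6, -2),  # strong player exploits the asymmetry
    ("defect",    "defect"):    (1, -1),  # weak player's retaliation barely hurts
}

def best_reply_for_strong(weak_strategy):
    # The strong player picks whichever move pays more against the
    # weak player's fixed strategy.
    return max(("cooperate", "defect"),
               key=lambda s: PAYOFFS[(s, weak_strategy)][0])

for weak in ("cooperate", "defect"):
    print(f"weak plays {weak!r:12} -> strong's best reply: {best_reply_for_strong(weak)}")

# With these invented numbers, defection dominates for the strong player:
# mutual cooperation is not where the higher payoff sits once the players
# are asymmetric.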