From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Oct 15 2007 - 16:31:14 MDT
--- "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
> http://www.intelligence.org/blog/2007/10/14/the-meaning-that-immortality-gives-to-life/
Humans are a product of evolution. Evolution favors organisms that fear death
and then die. Fear of death drives life extension technology that will lead
to a singularity. But will that save us?
A singularity is necessarily a recursive self-improvement (RSI) process: once
we create superhuman intelligences, they will be able to make
further improvements faster than we can. Legg proved [1] that an intelligence
(using the universal definition [2]) cannot completely predict the behavior of
a greater intelligence. Thus, RSI is experimental at every step. It is an
evolutionary algorithm that favors rapid reproduction and acquisition of
computing resources. Evolutionary algorithms, whether based on DNA or not,
require death, and will favor intelligences that fear death.
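For readers who haven't seen the universal definition, the measure in [2] is,
roughly (my paraphrase of the paper, written in LaTeX notation):

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu, and V_\mu^\pi is the expected cumulative reward
agent \pi earns in \mu. A "greater intelligence" is then an agent with higher
expected reward over all computable environments, weighted toward simple ones.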
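To make the evolutionary claim concrete, here is a minimal sketch, not
anything from the article or from Legg's papers: a toy evolutionary algorithm
in Python in which agents compete for a fixed pool of compute, the fitter half
replicates with mutation, and the rest are deleted. All names and parameters
(POOL, MUTATION_RATE, the fitness function) are hypothetical.

import random

POOL = 100           # fixed pool of compute; a hypothetical scarce resource
MUTATION_RATE = 0.1  # std. dev. of Gaussian mutation; hypothetical

def fitness(genome):
    # Toy stand-in for "rapid reproduction and acquisition of
    # computing resources": larger genes, more resources acquired.
    return sum(genome)

def mutate(genome):
    # Copy with random variation.
    return [g + random.gauss(0, MUTATION_RATE) for g in genome]

def step(population):
    # Rank by fitness. Resources are scarce, so only the top half
    # survives to reproduce; the bottom half is deleted. Selection
    # requires that some set of memories be discarded.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POOL // 2]
    return survivors + [mutate(g) for g in survivors]

pop = [[random.random() for _ in range(8)] for _ in range(POOL)]
for generation in range(50):
    pop = step(pop)
print("best fitness after 50 generations:", fitness(max(pop, key=fitness)))

The point is structural: variation plus selection under a fixed resource
budget entails deleting agents, which is the sense in which an evolutionary
algorithm "requires death."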
One could program an uploaded mind to solve this problem in one of two ways.
One is to remove the fear of death. But such an intelligence would have no
sense of consciousness. It would reason that one set of memories is just as
good as another, and die. The other is to achieve immortality by backing up
memories, even if they are no longer useful. But this wastes scarce computing
resources, which would slow more meaningful growth.
Perhaps there is another path to immortality and unlimited growth of
intelligence, but I don't know what it is. Then again, if I were running a
simulation of the universe, I would already have the collective memories of
all the humans on earth, none of whom were programmed to know they were
already immortal, because it would have messed up the evolutionary algorithm I
was running.
As a product of evolution, I find this a very thought-provoking article. But
I don't know what to do about it, if anything. If I were not a product of
evolution, would I care about my own life?
References
1. Legg, Shane, (2006), Is There an Elegant Universal Theory of Prediction?,
Technical Report IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle Institute for
Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
2. Legg, Shane, and Marcus Hutter (2006), A Formal Measure of Machine
Intelligence, Proc. Annual Machine Learning Conference of Belgium and The
Netherlands (Benelearn-2006), Ghent, 2006.
http://www.vetta.org/documents/ui_benelearn.pdf
-- Matt Mahoney, matmahoney@yahoo.com