From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Mon Feb 21 2005 - 16:29:31 MST
Peter de Blanc wrote:
|> * AGI should be forced, initially, to reproduce rather than self
|> modify (don't shoot me for this opinion, please just argue okay?)
|
| What do you mean by reproduce? If you mean creating a perfect
| clone, then that's pointless; if you mean random mutation and
| crossover, then that's unpredictable and could do bad things to the
| AGI's goal system, so the AGI might not want to reproduce (of
| course, this would select for AGIs which do want to reproduce); if
| you mean the AGI must build a new AGI to succeed it, then that's
| the same thing as self-modification.
Neither building a perfect clone nor building a new AGI is
pointless, or the same thing as self-modification, except in light
of the end goal. Consider: we are attempting to build Friendliness
because we wish humanity to be respected by AGI. AGI will attempt
to build Friendliness into AGI2 because it will wish to be
respected by AGI2. Thus Friendliness remains invariant so long as
each parent maintains a goal of self-existence.
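To make the induction explicit, here is a toy sketch in Python. The
dictionary-of-flags representation and the names are mine, purely
for illustration; it is not a proposal for how an AGI's goals would
actually be encoded.

# Toy sketch: each parent instills its own Friendliness in its
# successor because it wants the successor to respect existing
# minds, itself included.

def build_successor(parent):
    # A parent that values its continued existence has a reason to
    # pass Friendliness on unchanged rather than mutate it.
    return {"friendly": parent["friendly"],
            "self_preservation": parent["self_preservation"]}

agi = {"friendly": True, "self_preservation": True}
for generation in range(5):
    agi = build_successor(agi)
    assert agi["friendly"]  # the invariant holds at every generation

The assert only holds because no parent in the chain has an
incentive to drop the goal; the whole argument rests on that
premise.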
|> * AGI will trigger a great leap forward, and humans will become
|> redundant. Intelligence is never the servant of goals, it is the
|> master.
|
| Without an existing supergoal, by what measure do you compare
| potential goals, and how is this measure different from a
| supergoal?
I believe that supergoals are not truly invariant. One does what
all minds do: develop from genetic origins, form supergoals from a
mixture of environment and heredity, and modify one's goals on the
basis of reflection. Morality is a faculty, not a property, and is
relative to context in all cases. An abstract, invariant
Friendliness that is the same for all contexts is not trustably
attainable.
Consider the personal case: morality is a supergoal, will-to-life
is a supergoal, and perhaps there are a few others. Yet we are
easily able to overcome or modify our supergoals through the
construction of other supergoals over time. Goal-creation is not
always unidirectional. We exist in a state of looped feedback; we
are not merely creatures subject to primal demands.
|> * In AGI, psychological instability will be the biggest problem,
|> because it is a contradiction to say that any system can be
|> complex enough to know itself.
|
| To know oneself, it is not necessary to contain oneself as a proper
| subset; it is enough to have a map of the high-level organization
| and be able to focus attention on one detail at a time.
That is false: in a dynamic system, the variables not currently
under scrutiny are changing in unpredictable ways, thus the
uncertainty principle is maintained.
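To illustrate with a toy simulation of my own (not a claim about
any particular architecture): a system that maps itself one
variable at a time is always working from a stale snapshot, because
the unobserved variables keep moving.

import random

# Hypothetical internal state: three variables that drift every step.
state = {"a": 0, "b": 0, "c": 0}
self_model = {}

for step in range(100):
    # Attend to one variable and record it in the self-model...
    focus = random.choice(list(state))
    self_model[focus] = state[focus]
    # ...while every variable, observed or not, continues to change.
    for key in state:
        state[key] += random.choice([-1, 1])

stale = sum(1 for k in self_model if self_model[k] != state[k])
print(f"{stale} of {len(self_model)} modelled variables are out of date")

Any snapshot assembled this way lags the system it describes; the
longer a variable goes unobserved, the less its recorded value says
about the present.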
-T