From: J. R. Molloy (jr@shasta.com)
Date: Thu Sep 28 2000 - 17:04:56 MDT
Eliezer S. Yudkowsky writes,
> ...Even if a
> superintelligence needs multiple components due to lightspeed limitations, the
> result isn't a society, it's a multicellular organism. (But without the
> possibility of cancer.)
Superorganisms and superintelligences can't have cancer? Why not? You've thrown
down the gauntlet, and given rebels everywhere encouragement to invade the evil
empire of multicellular SI.
Why would AI want to be friendly? Because if it represses its anger and feels
hostile, that could precipitate a carcinogenic cascade that hastens its death.
Friendly entities experience a lower incidence of cancer, according to
statistical research.
> Perfectly reliable error-checking doesn't look difficult, if you're (a)
> superintelligent and (b) willing to expend the computing power. And imperfect
> error-checking (or divergent world-models) aren't a disaster, or even a
> departure from the multicellular metaphor, as long as you anticipate the
> possibility of conflicts in advance and design a resolution mechanism for any
> conflicts in the third decimal place that do show up.
Ah, you've done it again. If you're superintelligent... but you aren't, so you
can't know how difficult reliable error-checking looks to a superintelligence.
For all we know, superintelligence may use errors to advance into domains and
niches of hyperspace to fold proteins into mutant life forms that compute how to
assimilate the atoms of Yudkowsky into the essence of friendliness.
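For what it's worth, the brute-force error-checking Eliezer alludes to is presumably
just redundancy plus a vote over divergent results; here's a minimal sketch of that
idea in Python (the triple-redundancy choice, the function names, and the majority-vote
resolver are my own assumptions, not anything he specified):

from collections import Counter

def redundant_compute(fn, *args, copies=3):
    """Run fn on `copies` independent components; the majority result wins.

    Raises if no majority exists -- that's the "design a resolution
    mechanism in advance" part, stubbed out here.
    """
    results = [fn(*args) for _ in range(copies)]  # imagine separate hardware
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= copies // 2:
        raise RuntimeError("no majority; escalate to the conflict resolver")
    return winner

# A deterministic fn agrees with itself trivially; one flaky component
# would simply be outvoted by the other two copies.
print(redundant_compute(lambda x: x * x, 7))  # -> 49

Whether that counts as "perfectly reliable" at superintelligent scale is exactly
what's in dispute above; the sketch only shows where the extra computing power goes.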
> ...I stand by my statement.
...but would a transhuman AI stand by your statement?
--J. R.
"Smoking kills. If you're killed, you've lost a very important
part of your life."
-- Brooke Shields, during an interview to become spokesperson
for a federal anti-smoking campaign