From: Xiaoguang Li (xli03@emory.edu)
Date: Mon Aug 02 1999 - 20:15:02 MDT
fascinated by the recent exchanges between Eliezer S. Yudkowsky and den
Otter regarding the feasibility of AI vs. IA and the inescapable mystery
of SI motivations, i visited Eliezer's Singularity Analysis. since i find
Eliezer's view that objective goals are possible at all extremely
refreshing in this postmodernist, existential zeitgeist, his section
on Superintelligence Motivations especially caught my attention.
although much of Eliezer's vocabulary is foreign to an AI layman
like myself, i believe his explanation is clear enough that i have
caught at least a glimpse of his vision. his treatment of stability
as a key to goal valuation seems concise and elegant. however, the mention
of stability touched off a few associations in my brain and elicited some
questions.
if the most stable system of goals is the most rational by Occam's
Razor, then might not death be a candidate? it seems intuitively sound
that if an entity were to take a random action, that action would most
likely bring the entity closer to destruction than to empowerment; in
other words, is not entropy (cursed be that word) the default state of the
universe, and therefore the most stable state of all? thus if an SI
decides to pursue the goal of suicide, it may find that by and large any
action most convenient at the moment would almost certainly advance its
goal and thus possess a positive valuation in its goal system. could it be
that only we petty slaves of evolution are blind to the irrevocable
course of the universe and choose to traverse it upstream?
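
to make the intuition concrete, here is a crude toy sketch in python
(entirely my own illustration, not anything from Eliezer's analysis; the
"order score" and every name in it are made up): model the agent as an
ordered list, let "empowerment" be how much of that order remains, and let
a "convenient" action be a random swap of two positions. starting from a
structured state, virtually every undirected action degrades it.

import random

def order_score(state):
    # count adjacent pairs already in ascending order -- a crude proxy for "structure"
    return sum(1 for a, b in zip(state, state[1:]) if a <= b)

def random_action(state):
    # swap two randomly chosen positions -- an undirected, merely "convenient" action
    s = list(state)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

random.seed(0)
trials, degraded = 10000, 0
for _ in range(trials):
    agent = list(range(20))      # start fully ordered ("empowered")
    if order_score(random_action(agent)) < order_score(agent):
        degraded += 1

print("%.1f%% of random actions reduced the order score" % (100.0 * degraded / trials))

of course the toy says nothing about whether an SI would ever adopt such a
goal -- only that, once adopted, nearly any action whatsoever would count
as progress toward it.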
sincerely,
xgl