Re: >H RE: Present dangers to transhumanism

From: John Clark (jonkc@worldnet.att.net)
Date: Wed Sep 08 1999 - 10:45:41 MDT


On September 01, 1999, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:

>I'm serious - Elisson, in _Coding a Transhuman AI_, contains design
>features deliberately selected to cause a "collapse of the goal system",
>and lapse into quiescence, in the event existence is found to be
>meaningless. Preventing those nihilist AIs you were talking about.
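
(As a rough illustration only, such a feature might look something like
the toy goal-supervisor loop below; the names are hypothetical and are
not taken from _Coding a Transhuman AI_:)

    # Toy agent loop: act only while the goal system stays grounded in
    # some justification. If grounding fails, collapse the goal system
    # and go quiescent rather than pursue arbitrary ("nihilist") goals.
    def run_agent(goals, is_grounded, pursue):
        while goals:
            if not is_grounded(goals):
                goals.clear()          # collapse of the goal system
                return "quiescent"
            pursue(goals.pop(0))       # otherwise keep acting normally
        return "done"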

You can't be serious. Rightly or wrongly, many people, myself included,
are certain that the universe cannot supply us with meaning, and yet
that idea does not drive us insane; I haven't murdered anyone in months.

If something has no meaning and you don't like that, then don't go into
a coma; change the situation and give it a meaning. You can give
a meaning to a huge cloud of hydrogen gas a billion light-years away,
but it can't give meaning to you, because meaning is generated by mind.
Personally, I like this situation: I might not like the meaning
that the universe assigns to me, and I'd much rather be the boss and
assign a meaning to the universe.

Even if you don't like my philosophy, it keeps me going, and you
must admit it beats "quiescence". The beauty of it is that I can
never be proven wrong, because it's based on a personal preference,
not on the way the universe works. You can never prove that I really
do like sweet potatoes either, because I don't; it's just the way my
brain is wired.

You might argue that even though I can't be proven wrong I still
might be wrong; well, maybe. But if being right means death or
insanity, and being wrong means operating in an efficient and happy
manner, then whatever could you mean by right and wrong? I'd
say a happy, efficient brain is constructed correctly and an insane or
quiescent brain is constructed incorrectly, but again, that's just my
opinion; I like things that work.

   John K Clark jonkc@att.net


