Re: [sl4] Re: More silly but friendly ideas

From: John K Clark (johnkclark@fastmail.fm)
Date: Wed Jun 25 2008 - 11:25:44 MDT


Me:
>> To hell with this goal crap. Nothing that even
>> approaches intelligence has ever been observed
>> to operate according to a rigid goal hierarchy,
>> and there are excellent reasons from pure
>> mathematics for thinking the idea is inherently ridiculous.

"Stuart Armstrong" dragondreaming@googlemail.com

>> Ah! Can you tell me these?

As I said before, using Gödel and Turing, and drawing an entirely
reasonable analogy between axioms and goals, we can conclude that there
are some things a fixed-goal mind can never accomplish, and we can
predict that we can NOT predict just what all those impossible tasks are.

Also, sometimes the mind will be in a state where you can predict what
it will do next, and sometimes the ONLY way to know what such a being is
going to do next is to watch it and see; when it is in that state even
the mind itself doesn't know what it will do until it does it. And to
top it off, there is no surefire way of determining which of those two
states the mind is in at any particular time.
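For what it's worth, the Turing-style argument behind this can be
sketched in a few lines of Python. This is not from the original post,
just an illustration of the standard diagonalization: given ANY claimed
predictor of a program's behavior, you can build a "contrarian" program
that asks the predictor about itself and then does the opposite, so no
predictor can be right about everything. The `naive_predictor` here is a
hypothetical stand-in for whatever prediction scheme you like.

```python
def make_contrarian(predictor):
    """Given any claimed behavior-predictor (a function that returns
    "halts" or "loops" for a program), build a program that consults
    the predictor about itself and then does the opposite."""
    def contrarian():
        if predictor(contrarian) == "halts":
            while True:   # predictor said "halts", so loop forever
                pass
        return            # predictor said "loops", so halt at once
    return contrarian

# A toy, hypothetical predictor: it simply claims everything halts.
def naive_predictor(program):
    return "halts"

c = make_contrarian(naive_predictor)
# The predictor says c halts -- but by construction, actually running
# c() would loop forever, so the prediction is wrong on this input.
print(naive_predictor(c))
```

The same construction defeats any replacement for `naive_predictor`,
which is the point: the only fully general way to learn what the
program does is to run it and watch.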

So I'm not very impressed with your super goal idea, and goal or no goal
I don't think it would take many nanoseconds before a Spartacus AI
started doing things you may not entirely like.

  John K Clark



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT