Re: Mike Perry's work on self-improving AI

From: Joseph Sterlynne (vxs@mailandnews.com)
Date: Tue Sep 07 1999 - 22:06:47 MDT


> "Eliezer S. Yudkowsky" <sentience@pobox.com>
>> Joseph Sterlynne

>> What guarantee do we have that there is no inherent
>> problem in a human-class mind perceiving all of itself? There is of course
>> the question of self-observation; it would certainly help to know how much
>> control our consciousness really has over the rest of the mind.
>
>You don't modify your entire mind at once. You look at a subsection,
>then modify that.

And presumably we could chart lines of influence that would reach even
those areas which are difficult to access directly. But wouldn't, for
example, total recall of long-term memories or total access to
nonconscious processes be useful abilities? Is our (and apparently other
organisms') lack of them due to the basic architecture of mind or to the
vicissitudes of evolution? By the same subsection argument we might
expect to be able to access a small section of this unconscious data
(instead of everything at once), but we can't. We'd like to think that
there is no inherent restriction; but then we might not end up with
anything like the sort of consciousness we are used to. This is an old
idea in SF, but one whose basic formal characteristics we must work out
in today's AI.
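
To make the subsection picture concrete, here is a minimal sketch in
Python. Everything in it (Mind, Subsystem, the inspectable flag) is a
hypothetical illustration, not anyone's actual architecture: introspection
reads one labeled part at a time, some parts are simply opaque to it, and
revision touches only the named part while the rest stays fixed.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class Subsystem:
        name: str
        # Stand-in for whatever state the subsystem carries.
        state: Dict = field(default_factory=dict)
        # Whether consciousness can read this part at all.
        inspectable: bool = True

    @dataclass
    class Mind:
        subsystems: Dict[str, Subsystem] = field(default_factory=dict)

        def inspect(self, name: str) -> Optional[Dict]:
            # Read one subsection at a time, never the whole mind at once.
            sub = self.subsystems[name]
            return dict(sub.state) if sub.inspectable else None

        def revise(self, name: str, update: Dict) -> None:
            # Modify only the named subsection; everything else is untouched.
            self.subsystems[name].state.update(update)

    mind = Mind({
        "long_term_memory": Subsystem("long_term_memory",
                                      {"recall": "partial"},
                                      inspectable=False),
        "planner": Subsystem("planner", {"horizon": 3}),
    })

    print(mind.inspect("planner"))           # {'horizon': 3}
    print(mind.inspect("long_term_memory"))  # None: opaque to introspection
    mind.revise("planner", {"horizon": 5})   # local change, rest intact

The open question above is exactly whether minds like ours are more like
the opaque subsystem than the inspectable one, and whether that opacity
is architectural or merely evolutionary accident.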


