From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Mon Dec 08 2008 - 00:46:05 MST
On Thu, Dec 4, 2008 at 1:16 PM, Stuart Armstrong <
dragondreaming@googlemail.com> wrote:
> > Not sure what kind of methodology you have in mind when you talk about
> > these
> > things - but as has been discussed regarding self-improvement, an AI will
> > try to find ways to circumvent a hardwired rule like
> > self-improvement=nausea (or the AI equivalent).
> >
> > Taking human beings as an example:
> ...is a very bad idea.
Yes it is. I can't see any better examples around, though, and my only method
of prediction so far is looking at the past.
>
> The rules of an AI are similar - they are not regulations from
> outside, they are the AI. The AI is not some rebellious slave,
> constrained by the code - the AI is the code.
>
In the same way, one might argue that I am my experiences, including Sunday
school and early upbringing.
It is definitely possible to override imprinting from one's parents, but a
lot harder to override the sex/food drives.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT