From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Thu Dec 04 2008 - 05:16:00 MST
> Not sure of what kind of methodology you think of when you talk about these
> things - but as have been discussed regarding selfimprovement, an AI will
> try to find ways to circumvent a hardwired thing like
> self-improvement=nausea (or the AI equivalent).
>
> Taking human beings as an example:
...is a very bad idea.
An AI will not be imprinted with something like
self-improvement=nausea. The AI will either want to self-improve, not
want to self-improve, or be somewhere in between, and will decide
accordingly.
Why did Judaic rules fail to imprint? Because they are outside rules,
imposed by social pressure, on people with many different desires and
urges.
We follow rules that are part of us: we seek out sex and enjoy food.
Both are advantageous to us and to society, and yet no-one has tried
to impose outside rules to force us to do so. Why? Because there is no
need. The rules "seek out sex and enjoy food" are part of what (most
of us) are. Talking about evolution-imposed rules is a distraction -
the true difference is between external rules, imposed from outside,
and internal ones, coming from our genetic code.
The rules of an AI are similar - they are not regulations from
outside, they are the AI. The AI is not some rebellious slave,
constrained by the code - the AI is the code.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT