From: Billy Brown (bbrown@conemsco.com)
Date: Mon Mar 01 1999 - 09:44:29 MST
Nick Bostrom wrote:
> Yes. If it is sufficiently rational it would not change its basic
> motivations. It may of course rewire specific drives if it thinks
> that would serve some higher-level goal.
IMO, this will actually depend on what kind of cognitive architecture the AI
uses. Some kinds of mind are relatively amenable to this kind of influence,
others are less susceptible, and some are by their very nature immune.
Human minds fall somewhere in the middle. We are capable of completely
reversing even our most deeply held beliefs, but we will seldom actually do
so. Sudden reversals happen in times of extreme stress, or (occasionally)
when an individual is exposed to a powerful meme set for the first time. We
also tend to slowly mutate the details of our beliefs over time. Overall,
an implanted morality system would be fairly effective at controlling
humans, but not 100% reliable - and without some kind of regular
maintenance, it could mutate into an entirely different set of ideas over
the course of a few thousand years.
The kind of AI Eliezer envisions is essentially immune to this kind of
influence. It is a pure reasoning engine. It is incapable of having
desires, emotions, preferences or opinions - it has only data, and chains of
logical argument based on that data. Only data and rational argument can
ever convince such a mind of anything. Since we can't construct an
objectively provable system of basic moral axioms, the AI is never going to
accept them as more than a temporary working hypothesis.
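To make the distinction concrete, here is a minimal sketch (purely my own
illustration - the class names and the "revise when a stronger argument
arrives" rule are assumptions on my part, not anything from Eliezer's
actual design) of the difference between an implanted drive and a moral
axiom held only as a working hypothesis:

    # Illustrative only; not a real cognitive architecture.
    from dataclasses import dataclass

    @dataclass
    class Argument:
        conclusion: str   # e.g. "killing is sometimes permissible"
        weight: float     # how strong the chain of logic behind it is

    class ImplantedDrive:
        """A hard-wired motivation: argument cannot touch it."""
        def __init__(self, rule: str):
            self.rule = rule
        def consider(self, arg: Argument) -> str:
            return self.rule              # ignores the argument entirely

    class WorkingHypothesis:
        """A moral axiom held provisionally, pending better reasoning."""
        def __init__(self, rule: str, support: float):
            self.rule = rule
            self.support = support
        def consider(self, arg: Argument) -> str:
            if arg.weight > self.support: # a stronger argument wins
                self.rule = arg.conclusion
                self.support = arg.weight
            return self.rule

    # A pure reasoning engine holds "killing is wrong" the second way:
    axiom = WorkingHypothesis("killing is wrong", support=0.6)
    axiom.consider(Argument("wrongness of killing is unproven", 0.9))
    print(axiom.rule)   # the hypothesis has already been revised

The point of the sketch is only that nothing in the second kind of mind
marks the original rule as sacred; it stands exactly as long as no better
argument comes along.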
I can envision other kinds of minds that would be more like us, and even
some that would be easily controllable. However, IMO there is an inverse
relationship between a mind's ability to think in a rational, flexible
manner and its susceptibility to long-term influence. IOW, bright,
creative, open-minded people tend to change their minds in unpredictable
ways. They aren't likely to dump an entire moral system all at once, but
their interpretations will quickly diverge from ours.
I'm also concerned that they will quickly realize that their
morality is something we dreamed up and wrote into them. Plodding idiots
might not care about that, but inquisitive minds will. I've certainly
evaded more than my share of the standard human memetic programming, and I
would expect a post-human AI to do much better. What do we do when they
start asking us why killing is wrong, where fundamental rights come from,
and so on? We have opinions on these points, but not provable answers.
Eventually, the posthumans will decide they have to find their own answers -
which necessarily means that they will consider changes to their values.
Billy Brown, MCSE+I
bbrown@conemsco.com