From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Thu Oct 21 1999 - 19:23:05 MDT
> So who's going to program the AI's with base motivations that
> involve concepts such as "dominance" and the wish to strive for it,
> then provide the necessary faculties/resources to do this? Not me or
> anyone sane, that's for sure.
The question of where an AI gets its motivation is very
interesting. Our primitive motivations are products of our
evolutionary history: sex-drive, survival instinct, pleasure over
pain, etc. None of these are essential to the nature of intelligence
though and they won't necessarily have any correlate in an
artificially engineered brain.
Eliezer's pointed out the incoherence of believing you can hard-wire
high-level beliefs or motivations, and I quite agree. You do get to
specify what kind of feedback-inducing behavior gets reinforced or
attenuated though. In humans, for instance, the weight we place on the
sensation of being burned leads to behaviors like testing the
temperature of the shower before jumping in, and so forth. Perhaps we
could guide the development of an AI's value system in the same way.
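To make that concrete, here's a minimal sketch in Python. Everything in
it is hypothetical (the shower scenario, the numbers, the epsilon-greedy
learner); the point is just that the designer fixes only a low-level
pain signal, and the high-level habit of testing first emerges from
ordinary value learning rather than being wired in directly:

  import random

  ACTIONS = ["jump_in", "test_first"]
  PAIN_WEIGHT = 10.0        # designer-specified: how much being burned hurts
  LEARNING_RATE = 0.1

  values = {a: 0.0 for a in ACTIONS}   # learned desirability of each action

  def outcome(action):
      """Reward signal: hot water half the time, avoided by testing first."""
      water_is_hot = random.random() < 0.5
      if action == "jump_in" and water_is_hot:
          return -PAIN_WEIGHT          # burned: strong innate punishment
      return 1.0                       # showered without incident

  for _ in range(1000):
      # epsilon-greedy: mostly exploit current values, occasionally explore
      if random.random() < 0.1:
          action = random.choice(ACTIONS)
      else:
          action = max(ACTIONS, key=values.get)
      reward = outcome(action)
      values[action] += LEARNING_RATE * (reward - values[action])

  print(values)   # test_first ends up valued higher; the habit was learned

Nothing in the code says "test the shower"; only the weighting of the
burn sensation is specified, which is the sense in which we'd be
guiding rather than dictating the value system.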
Also note that there's no obvious bound to how complex we can make the
reward/punishment system's criteria. We could even have an embedded,
limbic-system-shaped conscience-AI or similar that judges what's
desirable and what's not, then pokes the rest of the system accordingly.
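A rough sketch of that architecture, again with purely hypothetical
names and a toy critic standing in for something arbitrarily complex:
one module proposes actions, a second "conscience" module scores them,
and its score is the only reward the learner ever sees.

  def conscience(state, action):
      """Judges desirability; stands in for an arbitrarily complex critic."""
      harm = state.get("harm_caused", 0.0)
      return -harm if action == "proceed" else 0.0

  class Learner:
      def __init__(self, actions):
          self.values = {a: 0.0 for a in actions}
      def choose(self):
          return max(self.values, key=self.values.get)
      def update(self, action, reward, rate=0.1):
          self.values[action] += rate * (reward - self.values[action])

  agent = Learner(["proceed", "abstain"])
  for state in [{"harm_caused": 2.0}] * 50:
      action = agent.choose()
      agent.update(action, conscience(state, action))  # the conscience "pokes"

  print(agent.values)   # "proceed" gets attenuated relative to "abstain"

The criteria inside the conscience module can grow as elaborate as we
like without ever hard-wiring a belief into the learner itself.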
-matt