> > How about: Thou shalt model any decision first to determine the
> > choice most beneficial to one's own long-term rational self-interest.
All this stuff assumes that human minds are an example of an objective
intelligence, and that any AI we create will start to exhibit
human-style behaviour which we will have to keep in check with prime
directives. Why?
If you don't program your AI to want or need anything, it won't do
anything spontaneously. So just don't program 'take over the world and
make robots to kill us all' into the system, and we'll be dandy.
Consciousness does not mean an instant self-preservation instinct,
megalomania or psychosis. It's merely awareness... the things we feel
the need to do, think and say are specifically human, unrelated to the
fact that we are also sentient and intelligent.