> (My actual model is that some computer nerd at MIT will do this while
> drinking Jolt cola at 2 AM when he should be studying for an English exam.)
> As a joke, the nerd will type in the command "make yourself smarter."
> The then-current rule set will be smart enough to act on the command but
> too stupid to get the joke.

In my limited understanding of how NNs work, this command is neither a joke
nor a human attempt to make AI "unbenign" - it is simply the current method of
programming: error correction for, for example, improved pattern recognition.
In essence it is a necessary part of building up sufficient intelligence to
make an "artificial" being or intelligence. This "learn all you can" command I
took as a given; what I was interested in exploring is which motivation the AI
would use *after* it gained consciousness - or replicated its own AIs: the one
*we* programmed, or its *own* directives - as it "wakes up" and perhaps
questions its own "meaning of life," its existence and purpose (assuming it
could ever attain that kind of consciousness).
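
To make concrete what I mean by "error correction for pattern recognition,"
here is a minimal sketch in Python - purely my own illustration, not code from
any system discussed in this thread. It is a toy perceptron: whenever its
guess is wrong, the weights get nudged toward the correct answer, and that
simple error-correction loop is how the pattern gets learned. The function
name and the AND-gate example data are hypothetical, chosen only for the
demonstration.

# Toy perceptron trained by error correction: when the output is wrong,
# the weights are adjusted toward the correct answer.
# (Hypothetical illustration, not from any actual NN project.)

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature lists; labels: 0 or 1 for each sample."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict: 1 if the weighted sum clears the threshold, else 0.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            # Error correction: only update when the prediction is wrong.
            error = target - prediction
            if error != 0:
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Learn the AND pattern from four labelled examples.
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
print(w, b)

The point of the sketch is just that the "learning" here is mechanical error
correction, with nothing resembling the self-directed motivation I was asking
about.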
Nadia