From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Thu Sep 28 2000 - 01:40:54 MDT
Franklin Wayne Poley writes:
> The issue here was whether a machine would be 'motivated' and my reply is
> that it will act according to its programming. If you program it to
Just as you act according to your programming. Can you use that phrase
to predict what you will or will not do? Can you use that knowledge to
absolutely 100% exclude an action or a sequence of actions?
"Computers only do what they've been programmed to do." Yeah,
right. And all matter does is follow Schroedinger's equation. Both
statements are true, and contain about the same amount of predictive
power. "Computers only do what they've been programmed to do" is as
applicable to a complex adaptive AI system (we don't have these yet)
as Schroedinger's equation is useful in building an animal cell model.
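(For the record, the equation in question: i \hbar \, \partial_t \Psi
= \hat{H} \Psi. Exact, in principle, for every atom in the cell, and
computationally hopeless as a way to predict what the cell will do.)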
> simulate human motivation it will do so. If you program
> friendly/unfriendly AI that is what you will get. If you give your
You only have to codify Asimov's laws in a little Prolog. With a
little help from Santa's elves and a few goblins.
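To make the sarcasm concrete, here's a toy sketch of what a "codified"
three-laws checker looks like (Python standing in for the little
Prolog; every name in it is hypothetical). The control logic is three
lines; everything hard lives in the predicates, which nobody knows
how to write.

  # Toy three-laws checker -- a caricature, not a proposal. The
  # rule structure is trivial; the predicates it rests on are open
  # research problems.

  def would_harm_human(action):
      # Requires a world model plus a workable definition of "harm".
      raise NotImplementedError("ask Santa's elves")

  def contradicts_human_order(action):
      # Requires recognizing orders, speakers, and their intent.
      raise NotImplementedError("ask the goblins")

  def endangers_self(action):
      # Requires a self-model and consequence prediction.
      raise NotImplementedError

  def permissible(action):
      # First Law overrides Second, Second overrides Third.
      if would_harm_human(action):
          return False
      if contradicts_human_order(action):
          return False
      return not endangers_self(action)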
> machines lots of autonomy and genetic programs that take them where you
Useful AI has to be flexible. Flexible AI can be dangerous. Tanstaafl.
> can't possibly anticipate then that too will yield in accordance with the
> programming...and heaven help all of us. AI machines can get out of
> control just as any other machines can.
Then we agree. It doesn't take more than a bee's brain to drive a
car around, so I'm not worried about it getting airs and vapours and
driving off to plot with its four-wheeled brethren. Anything much
smarter and more open-ended, and you're asking for trouble.