And so we have a logical argument for "intuition". Obviously the robot will use what knowledge it has to decide. And this information may be limited and faulty.
>But what is best? You have to supply the robot with valuations in
>order to have this kind of reasoning. Asimov's laws have the advantage
>of being clear what a robot may and may not do, and do not require
>open ended reasoning ("... but if I save him, what if he is a killer?
>But if he is a killer...").
>
The whole point of Asimov's Laws is that they are ultimately unclear.
>> I think a
>> robot could logically calculate that a person living is better than a
>> person dying and by induction that 100 people living and only one
>> dying is better than one person living and 100 dying.
>
>This kind of reasoning was most likely too unconstrained for Asimov or
>his contemporaries - or anybody building a robot today. Imagine the
>litigation if your robot does something that leads to the death of
>somebody, and it is not possible to show that this was a clear logical
>result of the laws of robotics. People would feel much more at home
>with a robot that simply couldn't harm them due to the first law, than
>a robot that just *might* harm them because it had deduced that it was
>for the best due to some obscure twist of logic.
>
I believe it took 10,000 years to invent the Zeroth Law. And the Robots did
it, not the humans. How logical. Also, chaos theory hadn't been invented yet.
O--------------------------------O
| Hara Ra <harara@shamanics.com> |
| Box 8334 Santa Cruz, CA 95061  |
|                                |
| Death is for animals;          |
| immortality for gods.          |
| Technology is the means by     |
O--------------------------------O