From: Dale Johnstone (DaleJohnstone@email.com)
Date: Tue Mar 27 2001 - 19:52:08 MST
Christian L. wrote (quoting Dale Johnstone):
>>List members do *not* get to define what is evil and what is banished.
>
>Oops. This has already been done:
>
>"To eliminate all INVOLUNTARY pain, death, coercion, and stupidity from
>the Universe."
No, you're getting rules for citizens and rules for the SysOp confused.
The distinction I'm trying to make is that we don't give the SysOp a list of all the nasty things to ban. We build a good mind, give it a rich education, and eventually it *understands* what we mean. We don't want a stupid AI that mindlessly follows rules and lists - that's not even AI as far as I'm concerned.
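To make that concrete, here's a rough Python sketch of the two designs. All the names are mine, and the "generalization" is only a crude word-overlap placeholder - a real mind would do vastly better - but it shows the shape of the difference:

    # Hypothetical sketch: a fixed ban-list vs. judgment learned from education.

    BANNED = {"kill a human", "coerce a human"}   # the "list of nasty things" design

    def list_follower_allows(action):
        # Mindless rule-following: any harm phrased differently slips through.
        return action not in BANNED

    class EducatedMind:
        # Stand-in for a mind taught by example rather than handed a list.
        def __init__(self):
            self.harmful_examples = []

        def teach_harmful(self, example):
            self.harmful_examples.append(example)

        def allows(self, action):
            # Crude placeholder for real understanding: flag actions that
            # share significant words with anything taught as harmful.
            words = {w for w in action.lower().split() if len(w) > 3}
            return not any(words & {w for w in ex.lower().split() if len(w) > 3}
                           for ex in self.harmful_examples)

    mind = EducatedMind()
    mind.teach_harmful("kill a human")
    print(list_follower_allows("murder a human"))  # True: not on the list, slips through
    print(mind.allows("murder a human"))           # False: generalizes from its education

The ban-list design fails on the first rephrasing it was never told about; the educated mind at least tries to apply what it was taught to novel cases.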
>Even if it understands us and our desires, I don't see why it would
>automatically become our servant. We might understand the desires of an
>ant-colony, but if we need the place, we remove the anthill.
It is not our servant. There is no master-slave relationship. It has built-in Friendliness - a top-level goal. That's why it wants to be Friendly. What it means to be Friendly is fleshed out by its education, just like a human's.
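In code terms, a rough sketch of what I mean (hypothetical names throughout; this isn't Eli's actual design, just the shape of it):

    # Hypothetical sketch: built-in top-level goal, content supplied by education.

    class FriendlyAgent:
        TOP_GOAL = "be Friendly"   # built in; never issued as an order by a master

        def __init__(self):
            self.friendliness = {}  # what "Friendly" means, fleshed out over time

        def educate(self, situation, friendly_response):
            # Education refines the goal's content, just as it does for a human.
            self.friendliness[situation] = friendly_response

        def act(self, situation):
            # The agent pursues its own top goal; nobody is commanding it.
            return self.friendliness.get(situation, "defer and ask")

    agent = FriendlyAgent()
    agent.educate("human in danger", "help the human")
    print(agent.act("human in danger"))   # help the human
    print(agent.act("novel situation"))   # defer and ask

The point is that the goal itself is part of the agent, not a command from outside - so "servant" is the wrong picture entirely.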
BTW, the ant colony isn't sentient, so you can't use it in place of humans. Besides, I wouldn't want to harm it anyway. Yes, I'm a vegetarian too.
>I assure you, I did understand it before. I just don't see the point in idle
>speculation about the actions of eventual SIs. It will do as it pleases. If
>we manage to program it into Friendliness, it will be Friendly. Maybe it
>will ignore humans. Maybe it will kill us. I don't know.
Ignoring humans is not Friendly - it won't do that.
Killing humans is not Friendly - it won't do that.
Helping humans is Friendly - it will do that.
Please have another go at reading Eli's Friendliness paper.
Regards,
Dale Johnstone.