From: Christian L. (n95lundc@hotmail.com)
Date: Wed Mar 28 2001 - 12:48:42 MST
In response to Dale Johnstone:
For starters, if some of my arguments can be rebutted by referring to
"Friendly AI", please point to specific chapters/sections. My easter holiday
is on the horizon, but until then I don't have time to read the paper. I do
not really feel that I have enough background information for a totally
informed discussion, but I think it is a bit rude to just say: "I will not
talk to you". So here goes:
Dale Johnstone wrote:
>
> >Even if it understands us and our desires, I don't see why it would
> >automatically become our servant. We might understand the desires of an
> >ant-colony, but if we need the place, we remove the anthill.
>
>It is not our servant. There is no master-slave relationship. It has
>built-in Friendliness - a top level goal.
My point is that this goal might be replaced during the AI's self-modification.
I have seen some "failure of friendliness" scenarios in FAI, but I haven't
found one that addresses this. If you have, please point to it.
>That's why it wants to be Friendly. What it means to be Friendly is fleshed
>out by its education, just like a human's.
>
>BTW the ant-colony isn't sentient, you can't use that in place of humans.
Why not? The argument wasn't about sentience. If the AI turns out to be
selfish, it might not care whether we are sentient or not.
>Besides I wouldn't want to harm it anyway. Yes, I'm a vegetarian too.
>
> >I assure you, I did understand it before. I just don't see the point in
> >idle speculation about the actions of eventual SIs. It will do as it
> >pleases. If we manage to program it into Friendliness, it will be
> >Friendly. Maybe it will ignore humans. Maybe it will kill us. I don't know.
>
>Ignoring humans is not Friendly - it won't do that.
>Killing humans is not Friendly - it won't do that.
>Helping humans is Friendly - it will do that.
Maybe I should have been clearer: If we manage to program it into
Friendliness, it will be Friendly. Then it would help humans.
If it turns out that our programming didn't make it Friendly, then it could
do any of the things stated above.
Everyone seems certain that the Friendliness programming will succeed. I feel
that this is quite uncertain.
>
>Please have another go at reading Eli's Friendliness paper.
Yes, I will. If I have made arguments that have been made a million times
before, and duly rebutted, please let me know. I will then stop posting on
Friendliness issues until I have read FAI. That would also give me time to
make some posts about the years leading up to the Singularity.
/Christian