From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Apr 04 2008 - 14:19:53 MDT
--- mwaser@cox.net wrote:
> > Aren't we jumping ahead? We have yet to solve the very non-trivial
> > problem of defining what "friendly" means.
>
> No. I defined it sufficiently for any INTELLIGENT system in the last
> e-mail. To repeat:
>
> So how about:
> Love one another OR
> Play well with others OR
> Help one another OR, at a minimum,
> Don't step on others
>
> > Such questions only seem to lead to endless debate with no resolution.
> > How can we ask what we will want when we don't know who "we" will be?
>
> Your problems arise because your ethics are unclear.
Yes, my ethics are unclear. They are the product of my upbringing and
culture, which in turn evolved to increase the reproductive fitness of my
tribe. It makes perfect sense to me why my culture should prohibit sexual
activity for purposes other than reproduction, enforce traditional gender
roles for hunting and child rearing, and sanction patriotism, war, and the
subjugation and enslavement of foreign tribes. But when I asked my tribal
elders about the ethics of teleportation I just got blank stares.
> Clarify your ethics
> (your TRUE goals) and everything else becomes crystal clear. If your ethics
> depend upon *who*s and *we*s, you are lost. Your ethics need to be based
> upon "entities" and NOTHING else (and yes, by that I *DO* mean basically all
> *thinking* things, including animals, and I do *NOT* mean in proportion to
> how much they think -- despite what you and other bigots think, *everything*
> is equal and, in the long run, stomping on someone/something else is only
> hurting yourself).
Forgive me, for I have sinned. I swatted a mosquito and deleted some files
while eating a tuna sandwich.
>
> > I prefer the approach of asking "what WILL we do?" because "what SHOULD we
> > do?" implies a goal relative to some intelligence whose existence we can't
> > predict.
>
> My, that's *very* Aleister Crowley of you (which is not to say that I
> disagree).
>
> > I believe AI will emerge . . . .
>
> I've seen and believe that I understand your beliefs. May I ask you to open
> yourself to the possibility that the dark, gloomy future that you portray is
> merely fearful conservatism and that the future could easily turn out to be
> a wonderful, glorious thing?
Or the possibility of both at the same time, depending on the goals of the
observer.
-- Matt Mahoney, matmahoney@yahoo.com