From: michaelroyames@hotmail.com
Date: Sun Jun 09 2002 - 14:43:28 MDT
Eugen Leitl wrote:
> I should probably bite the bullet and comment on
> http://singinst.org/CFAI.html (is that the proper version?).
> I probably shouldn't bother with Crit, and just annotate
> the thing in straight HTML.
>
Eugen: Yes, <http://singinst.org/CFAI.html> is the proper version.
This thread has prompted the memory of my first reaction to the Sysop
scenario, a reaction that still hovers in my mind: one of extreme danger and
possible curtailment of freedom. However, both Eugen and Sylvia are writing
about Friendliness (capital F), an idea that is entirely separate from the
Sysop scenario. It seems to me that someone has mentally 'married' these two
ideas into one unit, and is commenting on that single unit rather than on the
separate ideas. This seems to have led them to the incorrect inference that
Friendliness is something that will be imposed upon other people, whereas I
understand Friendliness as a design requirement for an AI, intended to ensure
(to the best of our ability) that ve won't impose anything on anyone. Note
the word 'impose' in my last sentence. The implication is that a Friendly AI
would not violate the volition of humans - that does not mean it wouldn't do
anything, but that it wouldn't do anything without the consent of the humans
involved.
I equate Friendliness with a real-world design that is to be implemented.
I regard the Sysop scenario as one possible future out of kajillions of
possible futures - a marvelous 'point of discussion' and a great plot device
for a novel... but, basically, something that is 'far out'.
Both ideas, Friendliness and the Sysop Scenario, are discussed in Eliezer's
writings - and are presented cheek-by-jowl in <http://singinst.org/CFAI.html>.
I think this may have been an unfortunate (and misleading) juxtaposition,
even though in CFAI Eliezer does indicate the speculative nature of the Sysop
Scenario. I quote:
"Or it may be that all of these speculations [about the Sysop Scenario]
are as fundamentally wrong as the best speculations that would have been
offered a hundred, or a thousand, or a million years ago. This returns to
the fundamental point about the Transition Guide - that ve must be complete,
and independent, because we really don't know what lies on the other side of
the Singularity."
This is an easily forgotten paragraph at the end of the very section that
links the two ideas. My point (to make it for the second time) is that
someone who grasps the idea of Friendliness *and* the idea of the Sysop
Scenario may not realize that one is a solid design and the other is (pretty
wild) speculation.
Friendly AI = Specific design.
vs. Sysop Scenario = One speculation out of many.
Michael Roy Ames