From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 08 2002 - 12:53:07 MDT

Eugen Leitl wrote:
>
> However (There Is Another System; Gort, Klaatu Barada Nikto) you're
> heading for ethical whitewater, as soon as a very small group codifies
> whatever they think is consensus at the time into a runaway AI seed, and
> thus asserts its enforcement via a despot proxy.

While I understand that you don't want AI efforts to succeed, Eugen, this is
no call to deliberately misrepresent what we are trying to do and why.

The question of how a seed AI team can avoid exerting undue influence over
humanity's future - and when you're talking about humanity's entire future,
ANY personal influence is "undue influence" - is one of the deep questions
to which Friendly AI is intended as an answer. It is, furthermore, an
answer that intrinsically requires exploring issues of Friendly AI. Even a
two-thirds majority vote of every sentient being on Earth could still make
moral errors or lack the entitlement to enforce its consensus view on
others. For as long as you assume an AI is a proxy, despotic or otherwise,
there will be no moral answers to how you can pass on morality to an AI. A
Friendly AI needs to embody those moral principles that govern what kind of
morality it is legitimate to pass on to an AI, not just the moral principles
its creators happened to have at the AI's moment of creation. This
inherently requires delving into questions of cognition about morality and
not just moral questions of the sort usually argued over. The question of
how to build an AI that is an independent moral philosopher is *not*
equivalent to, and in some cases barely touches upon, the question of what
sort of moral material a Friendly AI should initially be given as an
experiential learning set.

I've heard you sketch your scenarios for how a team of five uploads who
deliberately refuse intelligence enhancement will forcibly upload millions
or billions of other humans and scatter them over the solar system before
the Singularity begins. I will simply leave aside the ethical questions and
note that this is very, very hard, far too hard to be pragmatically
feasible. Take care that you don't shoot your own foot off by
misrepresenting Friendly AI for the sake of advancing what you see as your
*most* desirable future, because even from your viewpoint there will still
be a chance that events move through seed AI, and in that event you'd damn
well better hope that Friendly AI theory advances as far as possible. In
short, play nice and remember that your real enemies in this issue probably
don't subscribe to the Extropians list.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence