From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 29 2000 - 10:50:28 MDT
Eugene Leitl wrote:
>
> No, I would literally nuke individual facilities, if a convincing
> threat is present. I'd say that should be a standard operating
procedure in response to Armageddon-class threats. Colour me Vinge's
> Peacer, I guess.
Are you so very, very sure that you know better than the people running the
facilities? If the people with enough understanding and experience to have
created an AI tell you firmly that they have devoted considerable time and
effort to Friendliness, and that they believe the chances are as good as they'll
ever get, would you really nuke them? Are you so much smarter than they are,
you who would not have written the code?
Thankfully, you will never have authority over nuclear weapons, but even so,
just saying those words can make life less pleasant for all of us. You should
know better.
> > You're talking about the "Artilect Wars" all over again. But this is a
> > dirty, Vietnam-style war.... where people defend their God-given "right" to
> > create God with a seething, righteous passion. Should be fun if you're into
> > war.
>
> I'm not. I don't like where the logic of it all is leading us. There
> must be a more constructive way out.
There is. One group creates one mind; one mind creates the Singularity. That
much is determined by the dynamics of technology and intelligence; it is not a
policy decision, and there is no way that I or anyone else can alter it. At
some point, you just have to trust someone, and try to minimize coercion or
regulations or the use of nuclear weapons, on the grounds that having the
experience and wisdom to create an AI is a better qualification than being
able to use force. If the situation were iterated, if it were any kind of
social interaction, then there would be a rationale for voting and laws -
democracy is the only known means by which humans can live together. But if
it's a single decision, made only once, on an engineering problem, then the
best available solution is to trust the engineer who makes it - the more
politics you involve in the problem, the more force and coercion, the smaller
the chance of a good outcome.
I'm not just saying this because I think I'll be on the research team. I'm
willing to trust people outside myself. I'd trust the benevolence of Eric
Drexler, or Mitchell Porter - or Dan Fabulich or Darin Sunley, for that
matter. I'm willing to trust the good intentions of any AI programmer unless
they specifically demonstrate untrustworthiness, and I'm willing to trust the
engineering competence of any *successful* AI team that demonstrates a basic
concern with Friendly AI.
The idea that "nobody except 'me' can possibly be trusted" is very natural,
but it isn't realistic, and it can lead to nothing except endless
infighting. I know a lot of important things about Friendly AI, but I also
know that not everything I know should be necessary for success - it should
be possible to succeed with a limited subset of my knowledge. And scientists
and engineers are, by and large, benevolent - they
may express that benevolence in unsafe ways, but I'm willing to trust to good
intentions. After all, it's not like I have a choice.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence