From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 29 2000 - 11:43:19 MDT

"Eliezer S. Yudkowsky" wrote:
>
> And scientists and engineers are, by and large, benevolent - they
> may express that benevolence in unsafe ways, but I'm willing to trust to good
> intentions.

This should have read "willing to trust their good intentions". I'm not
willing to trust *to* good intentions; it also takes competence. I'm not
screamingly enthusiastic about trusting the competence of whoever builds the
first AI, but it's not like I have a choice; trying to impose any system of
coercion on top only makes the situation worse. We are all free-enterprisers
here, I hope - we should know better than to propose something like that.
The only thing I can do to affect the situation is write up what I know about
Friendly AI, which I'm doing, and try to *be* the first, which I'm also
doing. So let's have no more talk of nuclear threats.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:17 MST