From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 18 2001 - 19:59:08 MDT
Jim Fehlinger wrote:
>
> [**] Another trend in CaTAI, and in a lot of SF-ish and computerish
> dreaming about AI, is the burning desire to jettison human "emotional
> weakness" (remember Forbin's comment in _Colossus_, "I wanted an
> impartial, emotionless machine -- a paragon of reason...")
> Telling quote: "Freedom from human failings, and especially human
> politics... A synthetic mind has no political instincts; a synthetic
> mind could run the course of human civilization without politically-imposed
> dead ends, without observer bias, without the tendency to rationalize."
> Again, this seems profoundly out of sync with recent, post-cognitivist
> thinking about human intelligence.
And now I can finally, FINALLY answer this, the way I've been wanting to
answer rather a lot of questions, for rather a lot of time...
http://singinst.org/CaTAI/friendly/anthro.html
http://singinst.org/CaTAI/friendly/anthro.html#reinventing
http://singinst.org/CaTAI/friendly/anthro.html#selfishness
http://singinst.org/CaTAI/friendly/anthro.html#selfishness_pain
http://singinst.org/CaTAI/friendly/anthro.html#selfishness_a
http://singinst.org/CaTAI/friendly/anthro.html#observer
http://singinst.org/CaTAI/friendly/design/generic.html#ethical_anthropomorphic
http://singinst.org/CaTAI/friendly/design/seed.html#wisdom
http://singinst.org/CaTAI/friendly/design/friendly/shaper.html#some
http://singinst.org/CaTAI/friendly/info/indexfaq.html#q_2
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence