Re: benevolent or disinterested AIs

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 06 1999 - 13:56:53 MDT


Rak Razam wrote:
>
> My new flatmate is memetically blocked by the idea of Asimov's 3 Laws of
> Robotics as the be-all and end-all of programming to ensure a
> benevolent> read, 'slave' class of AI> how can you do that to your
> exo-somatic evolutionary offspring??? Where are all the Transhumanist AI
> positive phutures> don't we have to stop projecting our fears into
> negative dystopias> it's all MORALS. Antediluvian human conceits. The Laws
> of Robotics are human/moral bindings. Where are all the positive
> blueprints of tomorrow's AI?

They're at "Coding a Transhuman AI":
http://pobox.com/~sentience/AI_design.temp.html

See specifically "Interim Goal Systems" and the Prime Directive for the
reasons why Asimov's Laws are a really, really bad idea. Warning: 343K.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way
