From: Jef Allbright (jef@jefallbright.net)
Date: Tue Nov 06 2007 - 13:06:15 MST
On 11/6/07, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> Joshua Fox wrote:
> > Under "I'm-sure-someone-must-have-done-this-before":
> >
> > What about the idea of a morality simulator? Just as computer models of
> > weather or car crashes -- however imperfect -- allow researchers to test
> > their assumptions, why not do this for morality?
>
> Because unless you narrowly restrict the available options to
> Tit-for-Tat-like behavior, it's too hard. You can't get simulation of
> general consequentialist reasoning without general intelligence.
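The narrowly restricted case is admittedly easy. Here is a minimal
sketch in Python (the payoff values and strategy names are my own
illustrative assumptions, not anything from this thread) of an
iterated prisoner's dilemma confined to fixed strategies such as Tit
for Tat, where no general consequentialist reasoning is needed:

from itertools import product

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
    for (name_a, strat_a), (name_b, strat_b) in product(strategies.items(), repeat=2):
        print(name_a, "vs", name_b, "->", play(strat_a, strat_b))

Running it prints cumulative scores for every pairing; all the
interesting behavior lives in the restricted option space, which is
exactly the limitation at issue.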
I would argue that it is practical to test and refine models of moral
reasoning using the best currently available computing elements, as
long as they are properly fed and allowed to sleep, etc. They
certainly wouldn't qualify as "general intelligence", but they do
perform quite effectively in this problem domain (all the more so
within an effective framework), and the architecture of the simulator
allows for augmentation or outright replacement of computing elements
with superior designs as the technology becomes available.
- Jef