From: Joshua Fox (joshua@joshuafox.com)
Date: Mon Nov 12 2007 - 00:09:16 MST
Jef Allbright wrote:
> ...it's not about testing and refining for a particular desired outcome, but
> testing and refining the essentially scientific model ...
Right.
> > ...You would use various simulated worlds, ...
> I'm afraid you've begun to over-simplify.
Yes, all models simplify. I am describing the highly simplified
simulated worlds just to show that it is feasible to get started, but
I'd be delighted to see this get more complex.
> Any closed-form representation of morality is a recipe for failure in an evolving context.
Again, I'm suggesting the closed-form function only as a starting
point, and will be glad to see more sophisticated approaches.
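A minimal sketch of the kind of starting point I mean. The toy world
(agents with resource levels) and the particular log-welfare function
are illustrative assumptions of mine, not a worked-out proposal:

```python
import math

# Toy "simulated world": a fixed population of agents, each holding
# some quantity of a single resource. Everything here is a sketch;
# the world model and the closed-form function are assumptions
# chosen only to show that one can get started.

def total_welfare(resources):
    """Closed-form moral function: sum of log-utilities.

    Diminishing returns means the function ranks more equal
    distributions above less equal ones with the same total.
    """
    return sum(math.log(r) for r in resources)

def redistribute(resources, frm, to, amount):
    """A candidate action: transfer `amount` between two agents."""
    new = list(resources)
    new[frm] -= amount
    new[to] += amount
    return new

# Evaluate a candidate action against the closed-form function.
world = [10.0, 2.0, 2.0]
after = redistribute(world, 0, 1, 4.0)
print(total_welfare(after) > total_welfare(world))  # True: transferring to the poorer agent scores higher
```

One could then refine the function (or swap it out entirely) by
checking which actions it ranks highly against intuition, which is the
testing-and-refining loop under discussion.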
> This gives rise to some well-known paradoxes of utilitarian ethical theory,
Great! I'd love to see paradoxes played out in a simulation. Already,
philosophers play out such scenarios as thought experiments, modeling
the world in their minds -- I am suggesting that the simulation be
done in a computer.
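For instance, one well-known paradox of total utilitarianism (Parfit's
"repugnant conclusion") can be played out numerically in a few lines.
The population sizes and welfare levels here are arbitrary numbers I
chose for illustration:

```python
# Parfit's repugnant conclusion, run as a computation rather than a
# thought experiment. Total utilitarianism sums individual welfare,
# so a vast population of lives barely worth living can outrank a
# small population of very happy lives. All figures are illustrative.

def total_utility(population):
    """Total utilitarianism: sum of individual welfare levels."""
    return sum(population)

happy_few = [100.0] * 10              # 10 agents, high welfare each
barely_worth_living = [0.2] * 10000   # 10,000 agents, marginal welfare

# The closed-form function prefers the huge marginal population:
print(total_utility(barely_worth_living) > total_utility(happy_few))  # True
```

A simulated world makes such cases concrete: the paradox shows up as
the function endorsing an outcome most people would reject, which is
exactly the kind of failure the testing loop is meant to surface.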
> what we really need is a model of their latent **values**, and the moral function will emerge ...
Yes, that could be a good way to do it.
Joshua
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT