From: Mark Waser (mwaser@cox.net)
Date: Sat May 10 2003 - 14:20:13 MDT
As usual, Ben has some great points . . . .
Ben > It's an interesting project, and a very large one...
I would agree that it is an immense project to complete, but I think that
even the seed of it could provide a lot of insights and value. I also think
that getting part of the system up and running as a decision-making and
decision-documenting aid is eminently doable.
> What I envision coming out of the process you describe is a kind of
> practical formal deductive system dealing with morality. A priori value
> judgments may be provided as axioms, and the practical moral judgments
> based on them emerge as derivations from the axioms.
You're pretty much dead on (except that it gets a lot more complicated when
the system isn't provided with clean input and needs to deal with
conflicting axioms, incomplete reasoning, and the rest of the real world).
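To make that concrete, here is a minimal sketch in Python of the
axioms-to-derivations machinery Ben describes, plus the conflict detection
I'm worried about. The axioms and rules are invented for illustration, not
drawn from any actual proposal:

    # A priori value judgments, taken as given (hypothetical names).
    axioms = {"killing_is_wrong", "self_defense_justifies_killing",
              "under_attack"}

    # Inference rules: (premises, conclusion). Purely illustrative.
    rules = [
        ({"killing_is_wrong"}, "not_may_kill"),
        ({"self_defense_justifies_killing", "under_attack"}, "may_kill"),
    ]

    def derive(axioms, rules):
        """Forward-chain to a fixed point, recording which axioms
        support each derived judgment (the 'documenting' part)."""
        judgments = {a: {a} for a in axioms}  # claim -> supporting axioms
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= judgments.keys() and conclusion not in judgments:
                    judgments[conclusion] = set().union(
                        *(judgments[p] for p in premises))
                    changed = True
        return judgments

    def conflicts(judgments):
        """Report claim pairs of the form X / not_X, each with the
        axioms it rests on, so disputes trace back to first principles."""
        return [(c, "not_" + c, judgments[c], judgments["not_" + c])
                for c in judgments if "not_" + c in judgments]

    judgments = derive(axioms, rules)
    for a, b, sa, sb in conflicts(judgments):
        print("conflict:", a, sorted(sa), "vs", b, sorted(sb))

The point of tracking supporting axioms is exactly the decision-documenting
aid above: when two derivations collide, the system can show which a priori
value judgments each side rests on instead of just declaring a winner.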
> Of course, not all humans are going to accept that this is a morally
> correct process to be using to deal with moral issues! Many religious
> folks will consider this process to be part of secular society and hence
> not morally valid....
Which is, of course, a view that the system should be able to capture as a
complete and coherent position. I don't know . . . if the system allows
them to express their views clearly and in a way that others are more
likely to accept, the honest ones are likely to like it.
At any rate, I don't think that I'd want to push it as a moral
decision-making system. I mainly used that as a hook for this group. It's
a system for group decision-making under uncertainty, one that could be
used to solve a problem that a Friendly AI desperately needs solved.
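One off-the-shelf building block for the "under uncertainty" part is a
linear opinion pool: each participant states a probability, gets a weight,
and the system keeps every input on record next to the pooled result. A
minimal sketch, with invented participants and numbers:

    def pool(estimates):
        """estimates: (probability, weight) pairs -> pooled probability."""
        total = sum(w for _, w in estimates)
        return sum(p * w for p, w in estimates) / total

    # Hypothetical: three people judging "option X leads to outcome Y",
    # weighted by how much the group trusts each judge on this topic.
    votes = [(0.9, 2.0), (0.4, 1.0), (0.7, 1.5)]
    print(round(pool(votes), 3))  # 0.722 -- consensus, inputs preserved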
Mark