From: Jef Allbright (jef@jefallbright.net)
Date: Tue Dec 13 2005 - 22:08:14 MST
On 12/13/05, Tennessee Leeuwenburg <tennessee@tennessee.id.au> wrote:
> Jef Allbright wrote:
>
> >On 12/13/05, Michael Vassar <michaelvassar@hotmail.com> wrote:
> >
> >>The same confusion relates to the discussion of the categorical imperative.
> >>The categorical imperative simply makes no sense for an AI. It doesn't tell
> >>the AI what to want universally done. Rational entities WILL do what their
> >>goal system tells them to do. They don't need "ethics" in the human sense
> >>of rules countering other inclinations. What they need is inclinations
> >>compatible with ours.
> >>
> >
> >Let me see if I can understand what you're saying here. Do you mean
> >that, to the extent an agent is rational, it will naturally use all of
> >its instrumental knowledge to promote its own goals, and that from its
> >point of view there would be no question that such action is good?
> >
> >If this is true, then would it also see increasing its objective
> >knowledge in support of its goals as rational and inherently good
> >(from its point of view)?
> >
> >If I'm still understanding the implications of what you said, would
> >this also mean that cooperation with other like-minded agents, to the
> >extent that this increased the promotion of its own goals, would be
> >rational and good (from its point of view)?
> >
> >If this makes sense, then I think you may be on to an effective and
> >rational way of looking at decision-making about "right" and "wrong"
> >that avoids much of the contradiction of conventional views of
> >morality.
> >
> >- Jef
> >
> Perhaps I can simplify this argument.
>
> The Categorical Imperative theory is an "is", not an "ought".
>
> Cheers,
> -T
>
Huh? Thanks for playing.
Would you like to comment on the questions I posed to Michael?
- Jef