From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sat Feb 28 2004 - 22:48:53 MST
--- Tommy McCabe <rocketjet314@yahoo.com> wrote:
> You say that moralities 'consistent' with each other
> don't have to be identical. They do. Morality isn't
> mathematics. In order for them to be consistent, they
> have to give the same result in every situation; in
> other words, they must be identical. 'I like X' isn't
> really a morality consistent with 'Do not kill', since
> given the former, one would kill to get X. I don't
> like the idea of an AI acting like a human, i.e. of
> having heuristics of 'Coke is better than Pepsi' for no
> good reason. Of course, if there is a good reason, a
> Yudkowskian FAI would have that anyway. You may take
> the 'personal component of morality is necessary'
> thing as an axiom, but I don't, and I need to see some
> proof.
O.K., 'consistent with' wasn't a good term to use for
moralities. But I think you know what I meant.
Perhaps 'congruent with' would be better.
I could define morality Y as being congruent with
morality X if, in most situations, Y does not conflict
with X, and if, in the situations where Y does
conflict, X takes priority.
So, for instance, say morality X was 'Thou shalt not
kill', and morality Y was 'Coke is Good, Pepsi is
Evil'. Y is congruent with X if a sentient can pursue
Y without conflicting with X (the sentient looks to
promote Coke, but without killing anyone).
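Here is a minimal sketch of that priority rule in Python. The function
names and the string-matching are purely illustrative assumptions of
mine, not a real goal-system design:

    # Toy sketch of 'congruent with': the lower-priority morality Y is
    # only consulted among options the higher-priority morality X
    # already permits. Illustrative only; not an FAI architecture.

    def permitted_by_x(action):
        """Universal morality X: 'Thou shalt not kill'."""
        return "kill" not in action

    def preference_of_y(action):
        """Personal morality Y: 'Coke is Good, Pepsi is Evil'."""
        return 1 if "coke" in action else 0

    def choose(actions):
        """X filters first (it takes priority); Y only ranks what X permits."""
        allowed = [a for a in actions if permitted_by_x(a)]
        if not allowed:
            return None  # refuse to act rather than violate X
        return max(allowed, key=preference_of_y)

    print(choose(["promote coke by killing rivals",
                  "promote coke with ads",
                  "promote pepsi with ads"]))
    # prints: promote coke with ads

The point of the sketch is just the ordering: Y never gets to overrule
X, it only expresses a preference inside the space X allows.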
The reason I think a 'Personal Morality' component is
necessary is that WE DON'T KNOW what the Universal
Morality component is. It might be 'Volitional
Morality', but that's just Eliezer's guess. FAIs are
designed to try to reason out Universal Morality for
themselves. Programmers don't know what it is in
advance, and it's unlikely they'd get it exactly right
to begin with. So, in the beginning, some of what we
teach an FAI will be wrong. The part which is wrong
will be just arbitrary (Personal Morality). So you
see, all FAIs WILL have a 'Personal Morality'
component to start with.
>
> "Well yeah true, a Yudkowskian FAI would of course
> refuse requests to hurt other people. But it would
> aim to fulfil ALL requests consistent with volition.
>
> (All requests which don't involve violating other
> peoples right)."
>
> And that's a bad thing? You really don't want an AI
> deciding not to fulfill Pepsi requests because it
> thinks Coke is better for no good reason; that leads
> to an AI not wanting to fulfill Singularity requests
> because suffering is better.
>
> "For instance, 'I want to go ice skating', 'I want a
> Pepsi', 'I want some mountain climbing qquipment'
> and
> so on and so on. A Yudkowskian FAI can't draw any
> distinctions between these, and would see all of
> them
> as equally 'good'."
>
> It wouldn't, at all. A Yudkowskian FAI, especially a
> transhuman one, could easily apply Bayes' Theorem
> and such, and see what the possible outcomes are,
> and their probabilities, for each event. They
> certainly aren't identical!
>
> "But an FAI with a 'Personal Morality' component,
> would
> not neccesserily fulfil all of these requests. For
> instance an FAI that had a personal morality
> component
> 'Coke is good, Pepsi is evil' would refuse to fulfil
> a
> request for Pepsi."
>
> That is a bad thing!!! AIs shouldn't arbitrarily
> decide to refuse Pepsi; eventually the AI is then
> going to arbitrarily refuse survival. And yes, it is
> arbitrary, because if it isn't arbitrary, then the
> Yudkowskian FAI would have it in the first place!
>
> "The 'Personal morality' component
> would tell an FAI what it SHOULD do, the 'Universal
> morality' componanet is concerned with what an FAI
> SHOULDN'T do. A Yudkowskian FAI would be unable to
> draw this distinction, since it would have no
> 'Personal Morality' (Remember a Yudkowskian FAI is
> entirely non-observer centerd, and so it could only
> have Universal Morality)."
>
> Quite wrong. Even Eurisko could tell the difference
> between "Don't do A" and "Do A". And check your
> spelling.
Sorry. What I meant was that the FAI can't
distinguish between 'Acts and Omissions' (read up on
moral philosophy for an explanation).
>
> "You could say that a
> Yudkowskian FAI just views everything that doesn't
> hurt others as equal, where as an FAI with an extra
> oberver centered component would have some extra
> personal principles."
>
> 1. No one ever said that. Straw man.
> 2. Arbitrary principles thrown in with morality are
> bad things.
>
> "Yeah, yeah, true, but an FAI with a 'Personal
> Morality' would have some additional goals on top of
> this. A Yudkowskian FAI does of course have the
> goals
> 'aim to do things that help with the fulfilment of
> sentient requests'. But that's all. An FAI with an
> additional 'Personal Morality' component, would also
> have the Yudkowskian goals, but it would have some
> additional goals. For instance the additinal
> personal
> morality 'Coke is good, Pepsi is evil' would lead
> the
> FAI to personally support 'Coke' goals (provided
> such
> goals did not contradict the Yudkowskian goals)."
>
> It isn't a good thing to arbitrarily stick moralities
> and goals into goal systems without justification.
> If there was justification, then it would be present
> in a Yudkowskian FAI. And 'Coke' goals would
> contradict Yudkowskian goals every time someone
> asked for a Pepsi.
But ARE all 'arbitrary' goals really a bad thing?
Aren't such extra goals what makes life interesting?
Do you prefer rock music or heavy metal? Do you like
Chinese food or seafood best? What do you prefer:
modern art or classical? You could say that these
preferences are probably 'arbitrary', but they're
actually what marks us out as individuals and makes
us unique.
If all of us simply pursued 'true' (normative,
Universal) morality, then all of us would be identical
(because all sentients by definition converge on the
same normative morality).
Now take the example of an FAI with the additional
'arbitrary' goal 'Coke is Good, Pepsi is Evil'. In
most situations this would not conflict with Volition.
In the specific circumstances where it did, Volition
would take precedence. You can then say that the
additional morality is 'congruent with' (does not
conflict with) Volition.
Why would an FAI refusing someone a Pepsi be bad? The
FAI would not stop anyone drinking Pepsi if that's
what they wanted. It would simply be refusing to
actively help them. 'Coke is good, Pepsi is bad'
would only contradict Volitional Morality if the FAI
actually tried to use force to stop people drinking
Pepsi. So long as the FAI continued to tolerate
people drinking Pepsi, there would be no conflict with
Volition. You see the distinction between 'Acts and
Omissions'? Actively helping someone do x is not the
same thing as simply tolerating someone doing x.
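A rough sketch of that acts/omissions rule, again in Python. The
request strings and the three responses are hypothetical illustrations
of the argument, nothing more:

    # Sketch of the acts/omissions distinction: the FAI may decline to
    # actively assist a disfavoured request, but it never uses force or
    # interference against what a person chooses to do themselves.

    def violates_volition(request):
        # Universal (Volitional) morality: never help violate someone's rights
        return "hurt someone" in request

    def disfavoured_by_personal_morality(request):
        # Personal morality: 'Coke is good, Pepsi is bad'
        return "pepsi" in request.lower()

    def handle_request(request):
        if violates_volition(request):
            return "refuse"               # hard limit set by Universal Morality
        if disfavoured_by_personal_morality(request):
            return "decline to assist"    # an omission: no force, no prevention
        return "assist"                   # active help for everything else

    for r in ["fetch me a Coke", "fetch me a Pepsi", "help me hurt someone"]:
        print(r, "->", handle_request(r))
    # fetch me a Coke -> assist
    # fetch me a Pepsi -> decline to assist
    # help me hurt someone -> refuse

The middle branch is the whole point: declining to assist is not the
same as preventing, so Volition is never overridden.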
>
> "I've given the general solution to the problem of
> FAI
> morality. We don't know that 'Personal Morality'
> set
> to unity would be stable. Therefore we have to
> consider the case where FAI's have to have a
> non-trival 'Personal Morality' component."
>
> Non sequitur. That's like saying "We don't know if
> car A will be stable with 100% certainty, so we have
> to take a look at car B that has large heaps of
> trash on it for no good reason."
>
See what I said above. The programmers don't know in
advance what true (Universal) morality is. So some of
what an FAI learns in the beginning will be wrong (and
so all FAIs will start with some arbitrary 'Personal
Morality' components thrown in).
=====
Please visit my web-site at: http://www.prometheuscrack.com