From: Alejandro Dubrovsky (s335984@student.uq.edu.au)
Date: Fri Dec 04 1998 - 02:39:37 MST
On Thu, 3 Dec 1998, Eliezer S. Yudkowsky wrote:
> Okay. First, the actual Interim logic:
>
> Step 1: Start with no supergoals, no initial values.
> Result: Blank slate.
> Step 2: Establish a goal with nonzero value.
> Either all goals have zero value (~P) or at least one goal has nonzero value
> (P). Assign a probability of Unknown to P and 1-Unknown to ~P. Observe that
> ~P (all goals have zero value) cancels out of any choice in which it is
> present; regardless of the value of Unknown, the grand total contribution to
> any choice of ~P is zero. All choices can be made as if Unknown=1, that is,
> as if it were certain that "at least one goal has nonzero value". (This
> doesn't _prove_ P, only that we can _act_ as if P.)
> Result: A goal with nonzero renormalized value and Unknown content.
> Step 3: Link Unknown supergoal to specified subgoal.
> This uses two statements about the physical world which are too complex to
> be justified here, but are nevertheless very probable: First, that
> superintelligence is the best way to determine Unknown values; and second,
> that superintelligence will attempt to assign correct goal values and choose
> using assigned values.
> Result: "Superintelligence" is subgoal with positive value.
I'm assuming that what you are doing is trying to maximize the value of
the system. I don't see, though, how you can assume that the goals' values
are positive. i.e. "Either life has meaning or it doesn't, but I don't see
any way of knowing if the discovery of the meaning of life is good or bad."
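Here's a rough numeric sketch of what I mean (my own, not from your post;
the payoff numbers are made up). The cancellation of ~P fixes the ranking
of choices for any nonzero Unknown, but it says nothing about the sign of
the unknown value:

# Sketch: expected value of a choice under the Interim logic.
# P  = "at least one goal has nonzero value", probability = unknown_p
# ~P = "all goals have zero value", which contributes 0 to every choice,
#      so it cancels out regardless of unknown_p.

def expected_value(value_if_p, unknown_p):
    """Grand total for a choice: the ~P branch always contributes zero."""
    return unknown_p * value_if_p + (1 - unknown_p) * 0

# The ranking of choices is the same for any unknown_p > 0,
# so we can act as if unknown_p == 1 ...
choices = {"pursue_superintelligence": 10, "do_nothing": 0}
for p in (0.01, 0.5, 1.0):
    ranked = max(choices, key=lambda c: expected_value(choices[c], p))
    print(p, ranked)          # same winner every time

# ... but nothing in the cancellation argument fixes the SIGN of the
# unknown value.  If the true value were negative, the ranking flips:
choices_negative = {"pursue_superintelligence": -10, "do_nothing": 0}
print(max(choices_negative,
          key=lambda c: expected_value(choices_negative[c], 1.0)))
# -> "do_nothing"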
>
> Or to summarize: "Either life has meaning, or it doesn't. I can act as if it
> does - or at least, the alternative doesn't influence choices. Now I'm not
> dumb enough to think I have the vaguest idea what it's all for, but I think
> that a superintelligence could figure it out - or at least, I don't see any
> way to figure it out without superintelligence. Likewise, I think that a
> superintelligence would do what's right - or at least, I don't see anything
> else for a superintelligence to do."
>
> --
>
> There are some unspoken grounding assumptions here about the nature of goals,
> but they are not part of what I think of as the "Singularity logic".
>
> Logical Assumptions:
> LA1. Questions of morality have real answers - that is, unique,
> observer-independent answers external from our opinions.
> Justification: If ~LA1, I can do whatever I want and there will be no true
> reason why I am wrong; and what I want is to behave as if LA1 - thus making my
> behavior rational regardless of the probability assigned to LA1.
I disagree. If there are multiple, observer-dependent answers, then
there's still a morality system that affects you and you could still be in
the wrong, and that situation would fall under ~LA1.
And even if LA1, I don't see how BEHAVING as if LA1 is more rational than
behaving as if ~LA1. As in John Clark's email about Pascal's wager,
the rational way to behave if LA1 might be to behave as if ~LA1, depending
on the nature of the real moral answers.
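A toy payoff table makes the point (the numbers are invented purely for
illustration): the wager-style "act as if LA1, it can't hurt" argument
only goes through for some payoff structures, not all of them.

# Rows = how you behave, columns = what's actually true.
payoffs = {
    ("act_as_if_LA1",  "LA1"):  1,
    ("act_as_if_LA1", "~LA1"):  0,
    ("act_as_if_~LA1", "LA1"): -1,   # your assumption: you lose if LA1 holds
    ("act_as_if_~LA1", "~LA1"): 0,
}

# But suppose the real moral answers happen to reward the ~LA1 behaviour
# (the "correct" morality values exactly what you'd have done anyway):
payoffs_alt = dict(payoffs)
payoffs_alt[("act_as_if_~LA1", "LA1")] = 2

def best_action(table, p_la1):
    def ev(action):
        return (p_la1 * table[(action, "LA1")]
                + (1 - p_la1) * table[(action, "~LA1")])
    return max(("act_as_if_LA1", "act_as_if_~LA1"), key=ev)

print(best_action(payoffs, 0.5))      # -> act_as_if_LA1
print(best_action(payoffs_alt, 0.5))  # -> act_as_if_~LA1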
[expansions snipped - definitions in agreement with mine]
My rejection of LA1 makes me unfit (by your conclusion, with which I
mostly agree) to argue rationally about morality, and I suppose my
claim (like many others') is that you cannot argue rationally about
morality, since LA1 seems very weak.
I'm not sure if I'm being clear, or even if I understand your arguments
correctly, so explain/criticise/dismiss at your leisure. I don't get easily
offended either, so flame if need be.
chau
Alejandro Dubrovsky