Re: Singularity: Individual, Borg, Death?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Dec 03 1998 - 09:01:50 MST


Alejandro Dubrovsky wrote:
>
> On Wed, 2 Dec 1998, Eliezer S. Yudkowsky wrote:
>
> [snipped lots]
> > functioning cohesively to produce correct answers. I don't allow myself to be
> > directed by wishful logic. Even my root supergoal, the Singularity, is
> > produced by a tight logic chain of minimal length, and I defy anyone to
> > produce a shorter chain of logic without assuming what they're trying to prove.
>
> so, what is this chain of logic that leads you to seek the Singularity?
> I'm sorry if you've stated it before, i may have missed it.
> chau

Okay. First, the actual Interim logic:

Step 1: Start with no supergoals, no initial values.
  Result: Blank slate.
Step 2: Establish a goal with nonzero value.
  Either all goals have zero value (~P) or at least one goal has nonzero value
(P). Assign a probability of Unknown to P and 1-Unknown to ~P. Observe that
~P (all goals have zero value) cancels out of any choice in which it is
present; regardless of the value of Unknown, the total contribution of ~P to
any choice is zero. All choices can therefore be made as if Unknown=1, that
is, as if it were certain that "at least one goal has nonzero value". (This
doesn't _prove_ P, only that we can _act_ as if P. A small worked sketch of
this cancellation appears after Step 3.)
  Result: A goal with nonzero renormalized value and Unknown content.
Step 3: Link Unknown supergoal to specified subgoal.
  This uses two statements about the physical world which are too complex to
be justified here, but are nevertheless very probable: First, that
superintelligence is the best way to determine Unknown values; and second,
that superintelligence will attempt to assign correct goal values and choose
using assigned values.
  Result: "Superintelligence" is a subgoal with positive value.
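
To illustrate the cancellation in Step 2, here is a minimal sketch (not part
of the original argument; the choice names and values are hypothetical
assumptions) showing that the ranking of choices comes out the same no matter
what probability is assigned to Unknown:

  # Step 2 cancellation, sketched in Python. ~P ("all goals have zero value")
  # contributes zero to every choice, so the best choice never depends on the
  # probability Unknown, and one can act as if Unknown = 1.

  def expected_value(value_if_P, unknown):
      """Unknown * (value if P) + (1 - Unknown) * 0 -- the ~P branch is zero."""
      return unknown * value_if_P + (1.0 - unknown) * 0.0

  # Hypothetical values each choice would have *if* some goal has nonzero value.
  choices = {"pursue superintelligence": 5.0,
             "do nothing": 0.0,
             "something else": 2.0}

  for unknown in (0.9, 0.5, 0.01):
      best = max(choices, key=lambda c: expected_value(choices[c], unknown))
      print("Unknown =", unknown, "-> best choice:", best)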

Or to summarize: "Either life has meaning, or it doesn't. I can act as if it
does - or at least, the alternative doesn't influence choices. Now I'm not
dumb enough to think I have the vaguest idea what it's all for, but I think
that a superintelligence could figure it out - or at least, I don't see any
way to figure it out without superintelligence. Likewise, I think that a
superintelligence would do what's right - or at least, I don't see anything
else for a superintelligence to do."

--
There are some unspoken grounding assumptions here about the nature of goals,
but they are not part of what I think of as the "Singularity logic".
Logical Assumptions:
LA1.  Questions of morality have real answers - that is, unique,
observer-independent answers external to our opinions.
Justification:  If ~LA1, I can do whatever I want and there will be no true
reason why I am wrong; and what I want is to behave as if LA1 - thus making my
behavior rational regardless of the probability assigned to LA1.
Expansion:
By "questions of morality", I mean the cognitive structures known as "goals"
or "purposes", whose function is to choose between actions by assigning
relative values to different states of the Universe.
By "real answers", I mean that there are logically provable theorems,
experimentally provable laws, or physically real objects that correspond to
these functions sufficiently that it would be perverse not to regard our
cognitive models as imperfect approximations of reality.
By "unique, observer-independent" I mean that the objective morality is not
different from each intelligent perspective, although of course different
intelligences may have different pictures.
By "external from our opinions" I mean that the objective morality is not
influenced by our own opinions about it, and is likely to be as strange and
alien as quantum physics or general relativity.
Note:  This does not say anything about whether the correct value is nonzero.
Rational Assumptions:
RA1.  I don't know what the objective morality is and neither do you.
This distinguishes it from past philosophies which have attempted to "prove"
their arguments using elaborate and spurious logic.  One does not "prove" a
morality; one assigns probabilities.
LA1 and RA1, called "Externalism", are not part of the Singularity logic per
se; these are simply the assumptions required to rationally debate
morality.(1)  The rules for debating morality do not differ fundamentally from
the rules for debating reality.  If you substitute "reality" for "morality",
"statements" and "opinions" for "goals" and "purposes", and "models" for
"actions", in the assumptions above, you will have true statements with the
same interpretation.
(1)  This is not to say that the assumptions are necessarily true, simply that
I do not see a plausible or useful alternative.
-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

