Re: Yudkowsky's AI (again)

From: den Otter (neosapient@geocities.com)
Date: Thu Mar 25 1999 - 07:32:22 MST


----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>

> The whole altruistic argument is intended as a supplement to the basic
> and very practical theory of the Singularity: If we don't get some kind
> of transhuman intelligence around *real soon*, we're dead meat.

Not necessarily. Not all of us anyway.
 
> My current estimate, as of right now, is that humanity has no more than
> a 30% chance of making it, probably less. The most realistic estimate
> for a seed AI transcendence is 2020; nanowar, before 2015. The most
> optimistic estimate for project Elisson would be 2006; the earliest
> nanowar, 2003.

Conclusion: we need a (space) vehicle that can move us out of harm's
way when the trouble starts. Of course, it must also be able to
sustain us for at least 10 years or so. A basic colonization of
Mars immediately comes to mind, perhaps a scaled-up version
of Zubrin's Mars Direct plan. Research aimed at uploading must,
of course, continue at full speed while going to, and living on, Mars
(or another extraterrestrial location).

Btw, you tend to overestimate the dangers of nanotech and
conventional warfare (fairly dumb tech in the hands of fairly dumb
people), while underestimating the threat of Powers (intelligence
beyond our wildest dreams). God vs. monkeys with fancy toys.

> So we have a chance, but do you see why I'm not being picky about what
> kind of Singularity I'll accept?

No. Only a very specific kind of Singularity (the kind where you personally
transcend) is acceptable. I'd rather have no Singularity than one where
I'm placed at the mercy of posthuman Gods (I think all Libertarians,
anarchists, individualists and other freedom-loving individuals will have
to agree here).
 
> The point is - are you so utterly, absolutely, unflinchingly certain
> that (1) morality is subjective

Probably, but who cares? Whether it's objective or subjective, seeking
to live (indefinitely) and prosper is *always* a good decision (if only
because it buys you time to consider philosophical issues such as the
one above). If "objective morality" tells me to die, it can go and kiss
my ass.

> (2) your morality is correct

Maybe(?) not perfect, but certainly good enough.

> (3) AI-based Powers would kill you and (4) human Powers would be your
> friends - that you would try to deliberately avoid an AI-based Singularity?

Any kind of Power which isn't you is an unacceptable threat,
because it is completely unpredictable from the human point of view.
You are 100% at its mercy, as you would be if God existed.
So, both versions are undesirable.

> It will take *incredibly* sophisticated nanotechnology before a human
> can become the first Power - *far* beyond that needed for one guy to
> destroy the world.

Hence, to space, ASAP, please.

> (Earliest estimate: 2025. Most realistic: 2040.)
> We're running close enough to the edge as it is. It is by no means
> certain that the AI Powers will be any more hostile or less friendly
> than the human ones. I really don't think we can afford to be choosy.

We _must_ be choosy. IMHO, a rational person will delay the Singularity
at (almost?) any cost until he can transcend himself.


