Re: Singularity, Breaker of Dreams

From: den Otter (neosapient@geocities.com)
Date: Wed Sep 09 1998 - 04:09:10 MDT


----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>

> den Otter wrote:
> >
> > ----------
> > > From: Eliezer S. Yudkowsky <sentience@pobox.com>
> >
> > > Sooner or later, some generation will face the choice
> > > between Singularity and extinction. Why push it off, even if we
> > > could? And besides, we might not die at all.
> >
> > But we can't rely on that, can we?
>
> Depends on what you mean by "rely". If you mean, "Can we assume that the
> probability is 90%?", the answer is "No". If you mean, "Can we behave as if
> the probability is 90%?", the answer is "Yes". Our world is dynamically
> unstable, and is being acted on by powerful forces and positive feedbacks
> which serve to destabilize it further. Under the circumstances, the only
> decision we can make effectively is whether we'll die in nuclear war or
> nanowar, or whether a Singularity will occur. These are the two stable
> states, and the Universe is filled with stable things.

The question is: how likely is a world-wide disaster during the few
extra years/months that it presumably takes to create SI from
upgraded/uploaded humans (as opposed to creating SI from AI)?
This must then be weighed against the possibility of a (super) AI
terminating our existence. If you include the possibility of moving to
space (surely quite feasible in a decade or two), then the scales tip
towards holding off on SI until humans can be uplifted, IMO. Of course,
this assumes that AI will be easier than uplifting, which is very likely
but not completely certain.
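To make that weighing concrete, here is a minimal back-of-the-envelope
sketch (Python, with made-up placeholder probabilities, not actual
estimates from anyone in this thread) of how the two routes compare:

# Hypothetical comparison of the two strategies discussed above.
# All probabilities are placeholders; only the structure of the
# trade-off matters, not the numbers.

def p_survive_ai_route(p_ai_kills_us):
    # Build SI from AI asap: the dominant risk is that the resulting
    # SI terminates our existence.
    return 1.0 - p_ai_kills_us

def p_survive_uplift_route(p_disaster_per_year, extra_years):
    # Wait for upgraded/uploaded humans: the dominant risk is a
    # world-wide disaster during the extra development time.
    return (1.0 - p_disaster_per_year) ** extra_years

p_ai = 0.5       # placeholder: chance AI-based SI wipes us out
p_year = 0.02    # placeholder: yearly chance of nuclear/nano disaster
delay = 15       # placeholder: extra years the human route takes

print("AI route:    ", p_survive_ai_route(p_ai))
print("Uplift route:", p_survive_uplift_route(p_year, delay))

Whatever numbers you plug in, the per-year disaster risk compounds over
the delay, while the AI risk is taken only once; that is the trade-off
in a nutshell.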

> The forces inside a Singularity are powerful, complex, and far more dependent
> on external factors than the initial conditions. To the extent that initial
> conditions do have effect, they must use unstable forms of insanity to shield
> the AI from the external truth. In short, trying to control the Singularity
> would result in a world scoured bare and THEN Transcension. While I might
> find this outcome acceptable, I don't think anyone else would - and even from
> my perspective it's too dangerous; what if the Blight scours the Earth bare
> and then commits suicide?

So, if I understand you correctly, you would be at peace with your own demise
as long as your "brainchild", the Blight, lives on? That's...traditional. Personally,
I wouldn't even be satisfied with an uploaded copy of myself continuing
"my" existence, let alone some completely alien form of life.

> I couldn't command the future even if I had the complete source code of a
> Macintosh-compatible seed AI in front of me right now. Choose between
> pre-existing possibilities, perhaps, but not add possibilities that weren't
> there before.

Of course, if you use AI to cause a Singularity, you have effectively
handed over control. Enhancing yourself (with chip implants, for
example) to the point of SI (or at least significantly increased
intelligence) would, on the other hand, give you a fair amount of
control over things to come. No guarantees, but certainly a
fighting chance.
 
> > The facts are simple:
> >
> > 1) at this time, only a handful of people really grasp the enormity
> > of the coming changes, and (almost certainly) most of them are
> > transhumanists (unfortunately, even in this select group many
> > can't/won't understand the consequences of a Singularity, but
> > that aside).
>
> Sounds a bit elitist to me.

The pot and the kettle? Yes, it's elitist in the sense that only a small
group of people (will) see the Singularity coming, and even fewer will
try to "surf its wave" (instead of trying to run away and getting washed
over). On the other hand, it isn't elitist at all, since everyone is free to
join this group of survivalists. I'm not excluding anyone; on the contrary,
I've told hundreds of people (via the web) about the things to come,
but (almost) no one would listen.

> Are you sure *you're* one of the Chosen?

I have chosen myself, and so can you or anyone else. I
know that Max for example has chosen himself too, so
at least I'm in good company. ;-) At the moment it may
mean little, but at least it's a beginning. I accept that
the most probable outcome is failure (and death), but
since there is nothing to lose, I might as well go for it
(the same reason why I've signed up for suspension).

> Seriously, my perspective doesn't really allow for dividing humanity into
> groups like that.

I'm quite sure most of humanity will never know what hit them
(some will still be in the stone age when it happens). Most
people even seem to be in denial with regard to Y2K, which is
*nothing* compared to the Singularity. A few people see it coming,
some might even try to influence the event, but most won't do
anything. Humanity is already divided. That is not a personal
preference, but a fact.

> Where thinking is concerned, you've got rocks, mortals, and
> Post-Singularity Entities. Sorting mortals by intelligence is as silly as
> separating rocks or PSEs.

As in nature, intelligence (in this case mainly foresight) will be the
great discriminator. The "best adapted" mortals will become their
own successors, the PSEs. The fate of the rest is very uncertain.
 
> > 2) This gives us a *huge* edge, further increased by the high
> > concentration of scientific/technological talent in the >H community.
>
> No, it doesn't. We ain't got no money. We ain't got no power.

Correct. A major flaw, which deserves more attention than any other.
Money=power, money=life. It is time everybody woke up and
realized that without money, all our future dreams are just hot air.

>_Zyvex_ might
> be said to have an edge because it's doing work in nanotechnology. The MIT
> labs might be said to have an edge. If you're really generous, I could be
> said to have an edge because of "Coding A Transhuman AI" or "Algernon's Law".
> What I'm trying to convey is that each individual has ver own "edge". Not
> only that, but I think that if all the Extropians worked together it would
> simply slow things down.

Individual edges are small, and useless if you don't see the big picture. To
successfully create SI *from humans*, broad co-operation is needed.
Besides, I don't trust Zyvex or MIT *with my life*, and neither should
you. I want to be there when SI is born, nothing less. Co-operation,
a "Singularity Club", would increase the personal chances of survival
of all people involved. It is the rational thing to do for the egoist and
altruist alike.
 
> > 3) If we start preparing now, by keeping a close eye on the
> > development of technologies that could play a major part
> > in the Singularity (nanotech, AI, implants, intelligence
> > augmentation, human-machine interfaces in general etc.)
> > and by acquiring wealth (by any effective means) to set up an
> > SI research facility, then we have a real chance of success.
>
> Go ahead. Don't let me stop you.

If I could do it all by myself, I wouldn't be wasting my time on this
list, would I? ;-)
 
> > Note: the goal should (obviously) be creating SI by "uplifting"
> > (gradually uploading and augmenting) humans, *not* from AIs.
>
> Too damn slow. Turnaround time on neurological enhancement is probably at
> least a decade and probably more - for real effectiveness, you have to start
> in infancy. I can't rely on the world surviving that long.

> Also, I trust humans even less than I trust AIs.

At least you know this devil (humans). The only (rational) reason
why we would want the Singularity asap is our innate mortality. If
science can find a way around that without the help of SI, then there is
much less of a rush. You can move to space to stay safe from WW3
and similar mishaps, and work on the slower but more reliable
SI-from-humans approach. Everyone keeps a close eye on the others,
and when the technology is ready, everyone uploads and gets
augmented at the same time. Then...all bets are off.

> Also, the first-stage neurological transhumans will just create AIs, since
> it's easier to design a new mind than untangle an evolved one.

The main question should not be "what's easier", but "what's safer". You
may trust AI more than humans (I can get into that), but do you trust AI
more than *yourself*?
 


