Re: Gattaca on TV this weekend

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Jun 23 2002 - 13:25:06 MDT


On Sun, 23 Jun 2002, Brian Atkins wrote:

> I have already stated in previous messages that I personally have such
> motivation now, and expect to continue that motivation if I became
> more intelligent. I can point you to other people who claim to have
> the same motivation. If you have some proof you can show that all of

You're repeating what I said. Most people have these motivations today. I
have these motivations. Jane and her dog have these motivations. This is a
good thing, since it allows us a window of empathy with unmodified people.
 
However, we're not talking about today, nor about people. For all
practical purposes my motivations 3 subjective MYears downstream are
completely inscrutable. Given evolutionary pressure and
Darwinian/Lamarckian drift my motivations become completely inscrutable
after a much shorter period. You're not only assuming that your
motivations survive unchanged, but that you're actually able to protect
humanity and the associated ecology against players who don't give a flying
fuck about the matter.

Oh, and let's not forget, the principal players in your scenario are nonhuman
to start with.

> us will lose this motivation upon becoming more intelligent, please
> lay it out. Otherwise concede the point please.

I'm not playing a game of points. You're not a Power, hence your current
motivations are irrelevant to this discussion.

> > There are two brands of Singularity: those in which we make it, and those
> > in which we're just another stratum in the fossil record. I'm partial to the
> > former, and thus tend to see matters somewhat in a boolean manner.
>
> You do or do not admit that there is inherent risk to yours? Yes or no?

Life is uncertain. Some futures are intrinsically more risky than others.
Any future trajectory involving rapid change driven by emergent nonhuman
players has DANGER written in LARGE RED letters ALL OVER IT. Your
specific flavour of Singularity is based on creating a runaway nonhuman
player to drive change as hard as possible. Hence your future involves
the maximum risk possible. It is really funny that you're asking me
questions about risk stated in the boolean domain.

Is there an inherent risk in crossing the road? Yes. Is there an inherent
risk in detonating a 20 MT nuclear device in close proximity? Yes.

> > You can safely exclude space colonization. It's a desideratum, not
> > something I truly expect. As to fantasies, it's a matter of perspective.
>
> Thanks for conceding that point. So will you also concede that the longer

Point? Which point? Who's got the point? Taco BOING!

> we draw out the pre-Singularity period, the higher the chances of hitting
> an existential risk? You know, asteroid collisions with Earth, all that
> good stuff...

Big killer impacts happen on geological time scales. They are completely
irrelevant to the developments concerning us. Distribution in hardened
compartments only addresses the threat of global ecophagy by biovorous
nanoreplicators. Barring unexpected advances, this is a difficult target in
design space.

> > I'm not buying "lower risk" for a second.
>
> (the audience is reminded Eugene hasn't even read our documentation yet)

I have read a few passages of it. Thank you for reminding me to take CFAI
apart. I will.
 
> > As I've said before, I promise to read and comment on the SIAI documentation.
> > However, I am far from trusting my own judgement in this matter. There is simply
> > too much at stake here for a single person to decide.
>
> Umm, aren't you the person advocating laws backed up by jack-booted thugs
> to enforce a near-relinquishment world? Or did I miss something?

If you insist on being a troll, there will be no discussion. As you wish.


