Re: Gattaca on TV this weekend

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Jun 23 2002 - 07:21:35 MDT


On Sat, 22 Jun 2002, Brian Atkins wrote:

> Eugen I know you are smarter than this. We ain't talking about a
> meteor disaster or ice age here- this is fast change driven by smarter
> than human intelligence. There is no reason to believe that anyone who

You will observe that the current extinction event (which is rapidly
mounting to become one of the Big Ones) is precipitated by intelligences
smarter than animal. And there's not a damn thing we
smarter-than-your-average primates can do about it, even if we tried hard.
(And we're not breaking our extremities in the attempt, which I guess
tells us a number of things about trans-species empathy).

> decides to stay behind as normal humans will have any negative issues
> to deal with. They likely will live in a world that is protected from
> the kinds of natural disasters you mention, a world of free basic
> resources where they can pursue what they wish freely.

You're certainly describing something desirable. However, I fail to see
how this can be achieved. There is no motivation for superior players to
keep us all cozy & warm. (Pets are strictly a primate thing).
 
> Eugene you advocate creating a Singularity-ish future where human
> uploads drive the change. There is inherent risk in that alone, not to

There are two brands of Singularity: those in which we make it, and those
in which we're just another stratum in the fossil record. I'm partial to
the former, and thus tend to see matters in a somewhat boolean manner.

> mention that while we wait for plain ole humans to slowly develop that

Life is frequently risky, and so far seems to end with death. All recent
efforts to the contrary notwithstanding, alas.

> uploading technology over a long period of time we are stuck here with
> various existential risks still hanging over our head (plus 150k
> deaths/day). Your answer to that appears to be various fantasies
> involving space colonization and relinquishment which have little
> actual chance of working or becoming possible in the next 50 years.

You can safely exclude space colonization. It's a desideratum, not
something I truly expect. As to fantasies, it's a matter of perspective.
 
> I present an alternate scenario involving an AI technology that can
> potentially be developed much earlier with the same or lower risk
> compared to uploading humans, and so far I am not getting any good

I'm not buying "lower risk" for a second.

> criticism to it. Most people prefer to bash it using irrelevant
> arguments rather than actually read the documentation involved in my
> proposed experiment protocol.

As I've said before, I promise to read and comment on the SIAI documents.
However, I do not trust my own judgement in this matter by a wide margin.
There is simply too much at stake here for a single person to decide.
 
> I'm still not getting it. Perhaps you can explain in better detail and
> less hand waving what exactly prevents me as a SI (or even less) from
> spending my time to develop an uploading system that can be produced
> using replicating nanotech and local materials available everywhere on
> Earth, and then dropping this on Earth as a gift for free.

Lately I'm hearing similar arguments from the archaea, which complain that
we don't make Earth a better place for them to live. Sure we can, in
theory. Will we? No f*cking way.
 
> I already stated I would be motivated to do such, so we can scratch that
> off the list. As an entity likely running faster than real time with the

Unfortunately, you're not an SI. Archaea don't speak for humans.

> likely ability to spawn off other instances of myself or semi intelligent
> design processes I have plenty of free time for designing and planning

Why am I not spending my time to make the world a better place for the
archaea? Because I prefer a different environment. Also, I have to pay my
bills, so excuse me if I have to follow requirements which allow me to
keep my habitat more or less luxuriously equipped. So sorry, archaea.

> so we scratch those off the list. The amount of energy and matter under
> my control at this point is likely way way more than needed for this
> project, so scratch that off. As for deploying and growing it, that shouldn't
> be a problem either. What am I missing?

I think you're taking a bit too much for granted. Then again, neither of
us is an SI, so what do we know?
 
> Believe it or not (if you can de-anthropomorphize for just one nanosec),
> some superintelligent entities may actually care about plain ole humans
> left on Earth more than any given human is capable of right now. Feel

Wonderful. I guess it's all in http://www.singinst.org/CFAI/index.html
somewhere. Perhaps it is. We'll see.

> free to give a detailed answer as to why your scenario is the One True
> SI Future.

Any detailed prediction must necessarily fail, if only for probabilistic
reasons. I'm just trying to extrapolate from the past, using a minimum of
basic assumptions. As such I try to minimize the number of ad hoc
assumptions in the models, and to define envelopes for scenarios instead
of focusing on anything specific.

I have no idea whether this makes them any more probable. Not that it
matters, I'll be dead (or comfortably drooling in the geriatrics ward)
before any of this happens.
 
> Non-sequitur? This whole discussion we're having is already assuming
> we can develop some kind of FTA technology. If you want to start a

What is FTA, please?

> discussion about whether uploading is even possible start another
> thread.

Sorry, I don't feel like doing that right now. I will bring it up on MURG
before long, though.
 
> Remind me not to be around after you upload unless I have a lot of
> powerful friends with me or a Sysop.

Excellent, now we have a mechanism for posthuman warfare. Isn't it fun?
 
> I don't think the timeframe for humans to conquer all disease and aging
> is anytime soon.

It doesn't matter as long as we're sure we can stash them away in the dewars,
with tolerable loss of information. I'm not putting a lot of betting money
on this, but it is a probability distinctly different from zero.
 
> Did something change or have you still not even read our work?

I sometimes read a passage or two. But you're right, and I'll reserve
judgment until I've read the whole thing.

(Not that my comments matter, given the issues at stake. This is not even
peer review).
 
> Last I looked you aren't a "serious AI researcher" yourself, so I'm not
> sure how valuable your opinions are in this area. The few AI people out

Ad hominem. My qualifications, or lack thereof, in this area are
orthogonal to the issues, which impact every living being on Earth. You
have to suffer their input on the matter.

> there who seem to be aware of the potential of seed AI and are Singularity-
> aware (Goertzel, Kurzweil, etc.) have made statements to the effect that
> this work is NOT premature. How could it /possibly/ be premature when we
> have people out there right now coding what they claim has seed AI potential?

We've had people claiming that for the last four decades. So far there are
no signs we're really tickling the dragon's tail yet. And Ain't That A
Good Thing.

> I'm very glad at least our organization is attempting to reduce this risk,
> although I wish there were more people working on the issue.

I'm glad you're working on it, and we definitely need more constructive
input on the issues from as many and as diverse parties as possible.

You might be right, you know.

> > Human competitive general AI is never safe. Especially, if it can profit
> > from positive feedback in self-enhancement. We humans cannot do this
> > currently, so we would be facing a new competitor with a key advantage.
>
> Again with the iron-clad statements with no backing basis. Last I checked
> you had no matching proof of the impossibility of Friendly AI.

Given the consequences, the burden of proof is firmly in your court.


