Re: Gattaca on TV this weekend

From: Brian Atkins (brian@posthuman.com)
Date: Sun Jun 23 2002 - 11:30:58 MDT


Eugen Leitl wrote:
>
> On Sat, 22 Jun 2002, Brian Atkins wrote:
>
> > Eugen I know you are smarter than this. We ain't talking about a
> > meteor disaster or ice age here- this is fast change driven by smarter
> > than human intelligence. There is no reason to believe that anyone who
> > decides to stay behind as normal humans will have any negative issues
> > to deal with. They likely will live in a world that is protected from
> > the kinds of natural disasters you mention, a world of free basic
> > resources where they can pursue what they wish freely.
>
> You're certainly describing something desirable. However, I fail to see
> how this can be achieved. There is no motivation for superior players to
> keep us all cozy & warm. (Pets are strictly a primate thing).

I have already stated in previous messages that I personally have such
motivation now, and expect to keep it if I become more intelligent. I can
point you to other people who claim to have the same motivation. If you
have some proof that all of us would lose this motivation upon becoming
more intelligent, please lay it out. Otherwise, please concede the point.

>
> > Eugen, you advocate creating a Singularity-ish future where human
> > uploads drive the change. There is inherent risk in that alone, not to
>
> There are two brands of Singularity: those in which we make it, and those
> in which we're just another stratum in the fossil record. I'm partial to
> the former, and thus tend to see matters somewhat in a boolean manner.

Do you or do you not admit that there is inherent risk in yours? Yes or no?

>
> > mention that while we wait for plain ole humans to slowly develop that
> > uploading technology over a long period of time we are stuck here with
> > various existential risks still hanging over our heads (plus 150k
> > deaths/day). Your answer to that appears to be various fantasies
> > involving space colonization and relinquishment which have little
> > actual chance of working or becoming possible in the next 50 years.
>
> You can safely exclude space colonization. It's a desideratum, not
> something I truly expect. As to fantasies, it's a matter of perspective.

Thanks for conceding that point. So will you also concede that the longer
we draw out the pre-Singularity period, the higher the chances of hitting
an existential risk? You know, asteroid collisions with Earth, all that
good stuff...

>
> > I present an alternate scenario involving an AI technology that can
> > potentially be developed much earlier with the same or lower risk
> > compared to uploading humans, and so far I am not getting any good
>
> I'm not buying "lower risk" for a second.

(The audience is reminded that Eugen hasn't even read our documentation yet.)

>
> > criticism of it. Most people prefer to bash it using irrelevant
> > arguments rather than actually read the documentation involved in my
> > proposed experiment protocol.
>
> As I've said before, I promise to read and comment on the SIAI. However, I
> am far from trusting my own judgement in this matter. There is simply too
> much at stake here for a single person to decide.

Umm, aren't you the person advocating laws backed up by jack-booted thugs
to enforce a near-relinquishment world? Or did I miss something?

>
> > I'm still not getting it. Perhaps you can explain, in better detail and
> > with less hand-waving, what exactly prevents me as an SI (or even less) from
> > spending my time to develop an uploading system that can be produced
> > using replicating nanotech and local materials available everywhere on
> > Earth, and then dropping this on Earth as a gift for free.
>
> Lately I'm hearing similar arguments from the archae, which complain that
> we don't make Earth a better place for them to live. Sure we can, in
> theory. Will we? No f*cking way.

I'll take that as "No, I have no explanation for what you asked."

>
> > I already stated I would be motivated to do so, so we can scratch that
> > off the list. As an entity likely running faster than real time, with the
> > likely ability to spawn off other instances of myself or semi-intelligent
> > design processes, I have plenty of free time for designing and planning
>
> Why am I not spending my time to make the world a better place for the
> archae? Because I prefer a different environment. Also, I have to pay my
> bills, so excuse me if I have to follow requirements which allow me to
> keep my habitat more or less luxuriously equipped. So sorry, archae.

We aren't talking about you; we're talking about me and others interested
in helping out.

>
> > so we scratch those off the list. The amount of energy and matter under
> > my control at this point is likely way way more than needed for this
> > project, so scratch that off. As for deploying and growing it, that shouldn't
> > be a problem either. What am I missing?
>
> I think you're taking a bit too much for granted. Then again, neither of
> us is an SI, so what do we know?

You and your jack-booted thugs seem to have a handle on it.

>
> > Believe it or not (if you can de-anthropomorphize for just one nanosec),
> > some superintelligent entities may actually care about plain ole humans
> > left on Earth more than any given human is capable of right now. Feel
> > free to give a detailed answer as to why your scenario is the One True
> > SI Future.
>
> Any detailed predictions must necessarily fail, if only for probabilistic
> reasons. I'm just trying to extrapolate from the past, using a minimum of
> basic assumptions. As such I try to minimize the number of ad hoc
> assumptions in the models, and try to define envelopes for scenarios,
> instead of focusing on anything specific.

Yes, but the point is that the past is not a guide to this future. A
monkey given absolute power is not a guide to how an AI will necessarily
behave.

> What is FTA, please?

Some term Anders made up... fast transformation assumption, I think.

> > I don't think humans are going to conquer all disease and aging anytime
> > soon.
>
> It doesn't matter as long as we're sure we can stash them away in the
> dewars, with tolerable loss of information. I'm not putting a lot of
> betting money on this, but it is a probability distinctly different from
> zero.

Don't forget the "for all humans on Earth" part. Otherwise it really doesn't
make much of a dent in the 150k deaths/day rate.

> > Last I looked, you aren't a "serious AI researcher" yourself, so I'm not
> > sure how valuable your opinions are in this area. The few AI people out
>
> Ad hominem. My qualifications or lack of qualifications in the area are
> orthogonal to the issues, which impact every living being on Earth. You
> have to suffer their input on the matter.

An ad hominem is based on bringing up an irrelevant fact about your person.
You are advocating introducing law enforcement for a technology (AI), so I
do believe this is a relevant fact. If you are unqualified to judge the
science of the issue, why shouldn't I treat your comments just as I would
if some ignorant Green Party person came up to me and started talking about
how the world was going to end because of global warming, so we have to ban
anything that emits CO2?

> > > Human-competitive general AI is never safe. Especially if it can profit
> > > from positive feedback in self-enhancement. We humans cannot do this
> > > currently, so we would be facing a new competitor with a key advantage.
> >
> > Again with the iron-clad statements with no backing basis. Last I checked
> > you had no matching proof of the impossibility of Friendly AI.
>
> Given the consequences, the burden of proof is firmly in your court.

More non-answers, but true. This will have to wait until we can get some
real results.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

