Re: Revolting AI

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Mon Mar 11 2002 - 03:40:46 MST


On Mon, 11 Mar 2002, Samantha Atkins wrote:

> > * no evolutionary algorithm AI experiments unless air gapped and following
> > a containment SOP. Do not reconnect or reuse the components outside
> > of containment until wiped clean (again following a SOP; at the very
> > least, power down and do a full state wipe). Continuously revise
> > SOPs.
>
> Really insufficient. Did you read James Hogan's "The Two Faces of
> Tomorrow"? A flowery ending for an intractable problem.

The issue is not absolute security. There is no such animal. The issue is
reducing the number of potential nuclei during transition through a
specific vulnerability time window.
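
To put a toy number on that (every figure here is invented; only the
scaling matters): with N potential nuclei, each seeding a mind with an
independent probability p per year, the chance of at least one seeding
event during a T-year window is 1 - (1-p)^(N*T). Cutting N by a factor
of ten cuts the exposure dramatically:

    # Toy model, invented numbers: probability that at least one of
    # n_nuclei seeds a mind during a window of `years`, given an
    # independent per-nucleus seeding probability p_year.
    def p_emergence(n_nuclei, p_year, years):
        return 1 - (1 - p_year) ** (n_nuclei * years)

    print(p_emergence(10000, 1e-5, 20))  # ~0.86
    print(p_emergence(1000, 1e-5, 20))   # ~0.18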

> How will you keep this "adaptive diversity" from adapting into something
> smarter than you feel safe having around? What keeps this worm from

Um, it's an immune system to combat the emergence of a mind seed, not a
mind seed itself.

> effectively growing in intelligence? Won't the R&D program likely

It doesn't have intelligence; it's a self-amplifying brute-force machine
for finding vulnerabilities. It has no neuronal DSP cargo onboard. It's
designed to reduce the amount of substrate available to a worm with such
cargo, cutting down its bootstrap resources. Those resources are
thresholded, so it pays to limit the initial size of the petri dish.
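
A crude sketch of that threshold logic (every parameter is invented for
illustration):

    # Invented-parameter sketch: a worm carrying mind-seed cargo needs
    # at least `threshold` compromised hosts before the cargo can
    # bootstrap; the counter-worm's only job is to keep the vulnerable
    # fraction of the substrate below that line.
    def can_bootstrap(hosts, frac_vulnerable, threshold):
        return hosts * frac_vulnerable >= threshold

    print(can_bootstrap(1e8, 0.02, 1e6))   # True: 2e6 hosts to grow on
    print(can_bootstrap(1e8, 0.002, 1e6))  # False: the dish got shrunk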

> strengthen the possibility of self-enhancing AI that you seem so afraid
> of? Total security is a myth. Doesn't mean we shouldn't try, just that
> we should never expect to totally succeed indefinitely.

I think I specifically mentioned what I'm trying to achieve with the list
of countermeasures.

> What does this readiness look like? Won't we likely be dead due to

A few 10^6 to 10^9 uploads, preferably off-planet.

> being inadequate to the increasing complexity and speed of change before
> we are "ready"?

We will very probably be dead from the side effects of a Blight. It is
not obvious that we will die as a result of an inability to cope with a
few of our problems.

> NO. Freedom is more important than tracking any and everyone and
> everything that might be a little scary. If these possibly scary people
> have to get permission for everything they think of or want to try
> outside of what is already accepted, then you can kiss much progress
> goodbye.

Recombinant DNA research is regulated. Recombinant DNA research involving
potent human pathogens is severely regulated. Nevertheless, progress still
happens. I'm proposing regulating specific areas of computer science,
which is unprecedented for the field but otherwise common practice in a
number of other industries.

> And when you find this approach is just not very tractable without AI
> level intelligences (if then)?

The nice thing about uploading is that it doesn't involve any Power-grade
intelligences. It's a large effort, but it's very conventional research.

> How will expanding the territory you need to police to feel secure help
> you?

The point is that this is another compartment, with very different
environmental conditions from those down here.

> You have a strong assumption that uploading is the only way to a sane
> future. I have my doubts that minds optimized to and by, and built for,
> 3-d meat spaces will make very healthy upload citizens. I have even

Artificial reality is indistinguishable from the real thing, if properly
done. If you don't trust people to be "healthy" (whatever that means) in
reality, are you at all concerned with people's welfare?

> more doubts that getting uploading to work reasonably will develop
> enough efficient intelligence fast enough to deal with the increasing
> demand - particularly if AI is largely put on hold.

The uploading process is almost fully automated, requiring only high-level
human supervision. Since we're talking about molecular circuitry, rapidly
scaling up the process is entirely possible. I'm assuming sufficient
automation so that the scanning stages can be mass-produced almost as
rapidly, possibly using partial-closure-capable automation.
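
Rough arithmetic on what partial closure buys (the seed count and
doubling time are pure guesses):

    # Pure-guess numbers: with partial closure, installed scanning
    # stages help fabricate the next generation, so capacity compounds
    # instead of growing linearly with fixed factory output.
    seed_stages = 10        # assumed initial scanning stages
    doubling_months = 6     # assumed capacity doubling time
    months = 60
    print(seed_stages * 2 ** (months // doubling_months))  # 10240 stages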

> In the short term I would put a lot more energy into human augmentation,
> human-computer interface and symbiosis and developing medical NT than I
> would into uploads. I believe that is a much faster way to maximizing

People are dying *right now*. Cryonics needs to be validated and deployed
*right now*. Simultaneously, we need to engage in massive R&D on volume
molecular mapping and modelling of simple organisms.

Human augmentation currently means wearables. Nothing wrong with
developing them, and new interfaces. Any invasive interfaces/symbiosis
require medical NT. Medical NT the Freitas way is *hard*. It means you
have to develop conventional NT first. I'm of course for pushing
nanotechnology research, as molecular electronics and the scanning stages
will profit much from it.

Meaning, we don't really disagree. Lots of our initial goals overlap.

> ourselves, our survivability and our intelligence with relatively low
> impact on security.
>
> I would also put energy into shifting the socio-economic assumptions of
> the world so that the advantage of one, and especially the increasing
> ability of one, is seen as a boon to others instead of a threat.
> Without that, augmentation will be fought tooth and nail, much less
> uploads.

We're getting grassroots augmentation via the cellphone/PDA-->wearable
route. The acceptance of gargoyles will be a merely quantitative question
within the next decade.


