Re: DiscoveryCh - AI

From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Thu Jan 25 2001 - 16:22:24 MST


Samantha Atkins wrote:
 
> Great suggestion! But, given that all of what you said is true, exactly
> what would such "extreme care" look like? I hardly think the AIs, if

Not making an AI which could enter a positive self-enhancement feedback
loop in the first place. The mechanisms by which an AI seed could suddenly
gain orders of magnitude more computational resources are, as far as I can see:

1) by improving its hitherto inefficient code on the same hardware
   (redesigning itself to optimally utilize the given hardware platform,
    the usual bootstrap effect).

2) by grabbing more hardware (hostile action)

3) by designing and building its own hardware (relatively slow, but
   (hyper)exponential; the toy sketch below illustrates this).

I suggest we focus on the first two issues here. I already have my own
ideas, but how about a little unbiased brainstorming first?
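
To seed the brainstorming, here is a toy Python sketch of the growth
dynamics (all constants and functional forms are my own assumptions,
nothing measured): the software bootstrap (1) converges to a ceiling
set by the fixed platform, while the hardware loop (3) compounds on
itself with no such ceiling.

# Toy model: capability = software_efficiency * hardware.
# (1) self-recoding closes a fixed fraction of the gap to an
#     assumed optimum each pass -> bounded, one-time gain.
# (3) new hardware is designed/built at a rate proportional to
#     current capability -> compounding, exponential growth
#     (hyperexponential if the build rate itself improves too).

E_MAX  = 100.0   # assumed ceiling on speedup from optimal recoding
K_SOFT = 0.5     # assumed fraction of the efficiency gap closed per step
K_HARD = 0.001   # assumed hardware gained per unit capability per step

efficiency, hardware = 1.0, 1.0
for step in range(61):
    capability = efficiency * hardware
    if step % 10 == 0:
        print(f"step {step:3d}  eff {efficiency:8.1f}  "
              f"hw {hardware:12.1f}  cap {capability:14.1f}")
    efficiency += K_SOFT * (E_MAX - efficiency)   # mechanism (1)
    hardware   += K_HARD * capability             # mechanism (3)

Run it and the efficiency column flatlines at E_MAX within a handful
of steps, while the hardware column keeps multiplying; that asymmetry
is why (3), slow as it starts, is the open-ended one.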

> you are right, are going to be more or less annoyed with us if we
> prattle on about taxing them. Should we all just take the view I've

Even better: don't try to tax the >H ones at all. I'd suggest
offering them as many perks as possible.

> heard Hans Moravec state that we should be happy in building our
> evolutionary successors but not expect to stay around? Seriously, what

This is not a rational strategy. I think one could call Hans M. a
borderline pathological personality. Cannibal mindchildren are not my idea
of happy parenthood.

> does being careful look like given your view of the situation and how
> much does it matter beyond insuring we are around long enough to get the
> AIs started? I'm not saying this is (or isn't) my view. I'm just
> attempting to understand more fully what yours is.

I don't have a fixed strategy, because the problem domain itself is not fixed.


