RE: fluffy funny or hungry beaver?

From: Eugen Leitl (eugen@leitl.org)
Date: Sat Jun 08 2002 - 11:38:57 MDT


On Sat, 8 Jun 2002, Lee Corbin wrote:

> My two cents: when an AI does take over some part, large or small,
> of the Earth's surface, it will have an agenda. I hope that part

My 0.02 euro: experiments involving hard-edged positive-feedback
autoenhancement loops are potentially lethal for slowtime flesh people
who use a vulnerable natural ecology at the bottom of this gravity well
for life support.

From this it follows that, while we're passing the window of high
vulnerability, 1) we should reduce the rate of such experiments, focusing
on the most dangerous ones, and 2) we should push for technologies that
make people less vulnerable.

We might not succeed, but it sounds like the most worthwhile route.

> of its agenda is something close to what we would call friendly
> behavior. Therefore, I am unoffended by suggestions that we
> deliberately attempt to insert niceness or friendliness into such
> creatures at the outset. (Our survival may depend on it.)
> Therefore, I applaud efforts of people like Eliezer to define a sort
> of Friendliness for his project.

Trying to precipitate a hard-edged Singularity deliberately is dangerously
irresponsible behaviour. Arms-race arguments (we have to do it
first, so we're ahead of the others with our specific flavour of
Singularity -- do you prefer vanilla or banana daiquiri?) sound rather
unconvincing.

> Listen, someone somewhere will do this. Look who's being

We can close down a number of easy routes for those feeling frisky. Given
the threshold required, clandestine projects are not likely to succeed
while remaining clandestine.

"Someone somewhere will do this" implies a probability of unity over
window of vulnerability, and inability for intervention. I don't think
this is true.

> anthropomorphic now with "ethical whitewater". I think it's getting

Yes, I'm being anthropomorphic. I don't want to die prematurely, nor see
those close to me die. Call me squeamish.

> pretty stupid of people (not you) to stand and rail against the wind
> protesting the inhumanity and lack of kindness of nature. Yes, it
> would be very NICE if the universe were kind, and we could count upon
> some mysterious process to infuse visiting aliens with kindness and
> goodness. But that's silly.

It's not about the aliens. (Given how things are standing, we'll probably
be kicking some (nonsentient) alien ass before long.) It's about
whether we're going to get off the scene for good, both as individuals
and as a species.
 
> Some small group somewhere will codify something, and it may not
> by any means be any kind of "consensus" that you write about above.

I doubt the group will be very small, and given how hard it is to achieve
criticality, the number of constraints codified will necessarily be very
small, if the experiment is not left completely unconstrained. The easiest
route appears to be evolutionary algorithms on a good starter substrate,
using predator/prey co-evolution on a sea of molecular hardware.
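
To make the co-evolution idea concrete, here is a toy sketch in Python
(the genome encoding, scoring game, and parameters are invented for
illustration only, and have nothing to do with any real project or with
molecular hardware):

    # Toy predator/prey co-evolution: two populations, each scored only
    # against the other, so both keep chasing a moving target.
    import random

    GENOME_LEN = 16
    POP_SIZE = 30
    GENERATIONS = 50

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def mutate(genome, rate=0.05):
        # Flip each bit with small probability.
        return [(1 - g) if random.random() < rate else g for g in genome]

    def matches(pred, prey):
        # The predator "catches" every bit of the prey it predicts correctly.
        return sum(p == q for p, q in zip(pred, prey))

    def select_and_mutate(pop, fitness):
        # Keep the better half, refill with mutated copies of survivors.
        ranked = [g for _, g in sorted(zip(fitness, pop), key=lambda t: -t[0])]
        survivors = ranked[: len(pop) // 2]
        return survivors + [mutate(random.choice(survivors))
                            for _ in range(len(pop) - len(survivors))]

    def evolve():
        predators = [random_genome() for _ in range(POP_SIZE)]
        prey = [random_genome() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Predators want many matches, prey want few; fitness is
            # purely relative to the other population.
            pred_fit = [sum(matches(p, q) for q in prey) for p in predators]
            prey_fit = [sum(GENOME_LEN - matches(p, q) for p in predators)
                        for q in prey]
            predators = select_and_mutate(predators, pred_fit)
            prey = select_and_mutate(prey, prey_fit)
        return predators, prey

    if __name__ == "__main__":
        evolve()

The point of the toy is only the structure: neither population has a fixed
fitness function, each is evaluated against whatever the other has become.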

> I fear that the successful group of AI builders will be first by
> omitting all the "unnecessary" touchy-feely stuff that will, as
> a by-product, save my skin. They'll be first just because they
> concentrate on getting their AI to take over, period. So we
> must *encourage*, not *discourage* AI groups building in something
> nice into the base of their machine.

I think we need to discourage AI groups heading for criticality
(human-level, all-purpose AI; insect- and small-mammal-grade AI
appears rather safe).
 
> > (For the sake of argument, never the other two and more probable outcomes
> > from the experiment: catatonic and Blight).
>
> Yes, maybe so. But we can at least try for a different outcome.

Unfortunately, any successful experiment has a very strong probability
of blighting this place, so I wish people would stop trying.


