Re: making microsingularities

From: Samantha Atkins (samantha@objectent.com)
Date: Mon May 28 2001 - 23:12:33 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > "Eliezer S. Yudkowsky" wrote:
> > >
> > > For the record, for those of you who aren't aware, there's a
> > > significant faction that regards the whole Slow Singularity scenario as
> > > being, well, silly.
> >
> > But the fast path assumes a lot of "friggin magic". It assumes
> > that we can build not only a true AI but an SI and instill it
> > with or have it develop wisdom and Friendliness.
>
> Not at all. The fast path says that, after you build a certain threshold
> level of AI, it flashes off to superintelligence in relatively short
> order. The slow path says that you achieve human-equivalent AI and then
> it basically hangs around in the same place for the next fifty years. So
> the word "silly" is being used advisedly, as shorthand for "screamingly
> anthropomorphic" or "as seen on Star Wars". As for Friendliness, I don't
> see the connection.
>

You toss the word "anthropomorphic" around pretty often. But this
misses the point that it would most likely be a lot better for
humankind, or at least a lot less cataclysmic, if the full
Singularity, replete with a SysOp, did not happen within a decade
or two. That might be wishful thinking, or less likely than the
catastrophe that you believe is the nearly inevitable
alternative. But the idea deserves a bit more consideration,
imho, than simply being labeled "anthropomorphic" or dismissed
with "as seen on Star Wars".

You might be right in your cogitations. But a hell of a lot of
real-world people, extropians among them, are not comfortable
with the idea of the SysOp coming and saving us all in the short
term, or with the SysOp as "The Answer". It is a little too pat
and fraught with danger.

I don't believe we can build an AI seed that "flashes to
superintelligence" and also have it turn out Friendly and wise
in the timeframe you envision. I would be happy to be wrong.
If I am wrong, I still don't believe that the majority of the
human race is going to be happy with the arrangement, or that it
will necessarily be good for us.

- samantha



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:07:49 MST