Re: Eliezer S. Yudkowsky's whopper

From: Andrew Lias (anrwlias@hotmail.com)
Date: Thu Oct 05 2000 - 09:08:05 MDT


Eugene Leitl <eugene.leitl@lrz.uni-muenchen.de> writes:
>Andrew Lias writes:

> > Making predictions is a risky business at best. The
> > strong AI community has repeatedly embarrassed itself by making
> > over-confident estimates about when strong AI would show up.
[...]
>I have a (somewhat miffed-sounding, since written from the view of
>the Lisp community) view on that from:
> http://www.naggum.no/worse-is-better.html
[...]
Great example!

> > I think that we all agree that we are approaching a point where an
> > explosion of intelligence will become a viable possibility. I think,
> > however, that it's rather foolish to assert that it will happen before
> > a given time. It *might* happen by 2020. It also *might* happen in
> > 2520. It also *might* happen tomorrow, if some obscure line of research
> > that nobody is aware of hits the jackpot. We simply don't know what
> > factors may advance or retard the date.
>
>This view is too harsh. We can surely assign probabilities. Assuming
>nothing happens to the global civilization (major war, global climate
>flip-flopping (just read a paper on correlation of species extinctions
>with volcanic activity; anthropogenic gases may also act as a
>precipitant), asteroidal impact, the triffids, terminal boredom), it's
>extremely unlikely to happen tomorrow, with the probability rising
>rapidly after that and peaking somewhere slightly after 2020, but
>*distinctly* before 2520.

Oh, I quite agree. I deliberately bracketed the 2020 guess with a
hyper-pessimistic and a hyper-optimistic set of guesses. I would be just as
surprised as you would be if it took that long (or if it happened tomorrow).

On the other hand, we really don't know what factors are going to affect
the date. As that Grand Old Man of Extropianism, Vernor Vinge, stated, if
the complexity of the brain is five or six orders of magnitude beyond our
current estimates, it may be such a complicated task to duplicate
intelligence in other substrates that, even though it's feasible *in
principle*, it may be impossible in practice. Now, I want to emphasize that
(just like Vinge) I doubt that we will run into such a roadblock, but we
shouldn't dismiss the possibility out of hand.

My major concern with predictions is one of credibility. As noted, I
consider the Singularity to be a real possibility and one that is (whatever
else you think of it) consequential to us Homo saps. It is definitely
something that we don't want to just blunder into. Foresight is our most
precious commodity when it comes to this issue. But if we, who have been
giving it the most serious thought, hinge our warnings on a particular date,
we run the same risk that evangelical rapturites run when they make their
predictions -- the date comes and goes and we have egg on our faces. Any
predictions we make after that are going to be viewed with suspicion, as
will all of our other ideas. Let's face it, we're far enough on the fringe
as it is. Given that I *do* think that we have important things to say, we
really should give due consideration to our reputations, else we're going to
be ignored at a point in time when it may be critically important to be
heard. Even vague guesses (e.g., "Sometime in the 21st Century") may come
back to bite us. At best, I think that we should simply say that we think
that we are approaching a crucial event in human history and that, although
no one knows when it will happen, it may well happen within our lifetimes.

>I would indeed be genuinely surprised if it
>didn't happen before 2050 (you can call me at my home for the elderly
>if this turns out wrong, or drop the notice by the dewar).

To be honest, I would share your surprise. Don't think that I don't have my
own private guesstimates. I'm just not going to publicly stake my
reputation on them. :-)

> > My primary concern is that the only thing that we can control, when it
> > comes to such a singularity, is the initial conditions. I can think
>
>The question is whether initial conditions have an impact on early growth
>kinetics. I think they do. If they don't, the whole question is moot
>anyway, and all we can do is lean back and watch the pretty pyrotechnics.

Oh, sure. I would be a fool to deny this. Perhaps a more precise statement
would be that *if* there's anything we can control with respect to the final
manifestation of the Singularity, it's going to be in setting the initial
conditions. It's one of those cases where we may as well try because we
don't have anything to lose for the effort and potentially much to gain.

> > It is my hope that we will be able to see *just* far enough ahead that
> > we don't just blunder into the damned thing (ask me about my monkey
> > scenario! ;-). One thing that seems certain to me is that there is a
> > lot of unfounded speculation regarding the morality and rationality of
> > post-organic hyperintelligence. It seems that the only sane position to
>
>Guilty as charged. However, evolutionary biology is equally applicable
>to sentients and nonsentients.

I would agree. Of course, as J.B.S. Haldane (roughly) said (using "Nature"
to mean evolutionary forces), Nature is more clever than you. Evolutionary
biology has great explanatory value when it comes to accounting for the
current state of affairs, but its predictive capacity (while undeniably
existent) is far more limited in scope. This is certainly true when dealing
with hyperintelligent entities who will have much more direct control over
their upgrade paths than any other creature in the history of biology has
ever had (including ourselves). As such, I expect them to follow rational
rules of development (which is NOT to be confused with them being rational
beings -- as noted, I think that this is an unfounded assumption), but I
suspect that the predictive scope of those rules (or our capacity to derive
all the relevant rules) may be limited. Which isn't to say that we shouldn't
try. Indeed, I think that making a concerted effort to develop predictive
models is the most responsible thing that we can do with respect to the
whole thing.

> > hold in an arena that harbors such beings is to *be* such a being (and
> > even that might be presumptive -- we are assuming that amplified
> > intelligence is a good thing to have; it's possible that
> > hyperintelligences are prone to fatal or adverse mental states that only
> > manifest beyond a certain level of complexity; after all, we do know
> > that certain pathological mental states, such as a desire for
> > self-martyrdom, only show up at human levels of intelligence).
>
>Yeah, but we have to decide today, in the face of limited data. We have
>to work with current models, however faulty. The longer we wait, the
>harder it will be to change course.

Absolutely! I am not advocating that we should throw our hands up in the
air out of a fatalistic sense of frustration. However, in order to make a
good start, we really do need to examine our assumptions as closely as
possible. In my thoroughly unhumble opinion, too many in this group are
still working from positions based on unfounded assumptions. I realize that
dynamic optimism goes hand in hand with Extropianism (which is one reason I
hesitate to call myself a party Extropian), but there's a point where
optimism becomes Pollyannaish. To use a mythological metaphor, Epimetheus
was certainly more optimistic than Prometheus, but Prometheus was more often
right. When it comes to foresight, an overly optimistic view can blind us.

> > Frankly, the notion that we are approaching a Singularity scares the
> > hell out of me. If I thought that there were some viable way to prevent
> > it or even to safely delay it, I'd probably lobby to do so. I'm not
> > convinced that there are any such options. As such, my personal goal is
> > to be an early adopter and hope that self-amplification isn't the mental
> > equivalent of jumping off of a cliff.
>
>Sure, but what would be the alternative?

Well, that's my point! There isn't one. At least, not one that I can see.
As such, I'd rather deliberately move towards the Singularity, however
cautiously, than stumble into it blindly. I'm certainly not advocating an
ostrich attitude. It's coming. It has the potential to be a very, very,
very bad thing. It also has the potential to be a very, very, very good
thing. We must do what we can to influence events such that they favor the
latter case and not the former. Among the things we need to consider is
whether or not it is possible, or even advisable, to allow (non-upload) AIs
to precede us and, if they do precede us, whether it's possible, or even
advisable, to try to place controls on them. If not, what can we do to
predispose them to treat the rest of us well? Similar questions arise
regarding intelligence amplification and uploads.

Nor am I suggesting that you folks have been lax -- I've subscribed because
there are important and interesting points being debated here. My major
concern is that a lot of the current discussion does seem to hinge on some
basic assumptions that seem a bit dubious to me. One in particular that
sticks out is that a lot of folks around here seem to equate increased
intelligence with increased rationality. At best, I think that it can be
argued that more intelligence gives one the potential to be more rational,
but I think that it can be argued, just as reasonably, that more
intelligence also grants the capacity for greater irrationality. As such,
simply trusting an SI with the keys to the future, under the presumption
that it will take the most rational course of action, seems naive from
where I'm standing.

>(Assuming it happens at all) we
>can't prevent it; we can (at best) delay it. My statistical (assuming
>current numbers) life expectancy will be exceeded in about 60
>years. If the Singularity turns out malignant, I can only die once.

Unless you get a really malevolent entity out of the process. The converse
of a technological Heaven is a technological Hell.

>It can be a bit premature, it can be quite nasty, but it will be brief.
>If it's softer, it gives me a genuine chance to achieve immortality
>the Woody Allen way.

I'm with you there. I do *hope* that the optimistic projections will
obtain. When people ask me what my life's ambition is, I am tempted to say
that I want to become a god. I don't, because I know how such a statement
would shock most people -- but it is nevertheless what I am hoping will be
the case.

>Dunno, sounds good to me.

All I can say, for certain, about the Singularity is that it is looming
before us. I don't think that any of us, as of yet, have a clear idea of
what lies beyond it. Doing what we can to develop some plausible idea of
how it may fall out (plausible meaning, here, well-modeled) and how we can
affect the fallout must be, I think, our first duty.

We have the unenviable task of being children who must decide how we become
adults (lest the decision be wrenched from our hands by circumstance). It
is an exciting thing, but we cannot afford to underestimate the dangers
involved.



