From: Evan Reese (evansreese@directvinternet.com)
Date: Tue Apr 09 2002 - 01:15:19 MDT
----- Original Message -----
From: "Ben Goertzel" <ben@goertzel.org>
To: <sl4@sysopmind.com>
Sent: Sunday, April 07, 2002 11:34 AM
Subject: RE: Why bother (was Re: Introducing myself)
>
>
> > I'm not expecting a "next" world war. I do not believe we are always
> > on the verge of not making it, or at the edge of a precipice, or any
> > other doom cliché - eve of destruction, gotta get that one in. I
> > recognize problems, but my outlook isn't the bleak one you seem to
> > have. If I were really that down on the future, I would have packed
> > it in decades ago.
>
> It seems to me that we do not have an adequate knowledge base to make
> sound rational judgments about such things.
>
> We do not have a knowledge base about past technological civilizations,
> from which we can induce the probability of the human race ruining itself
> via nukes, biological warfare, etc.
>
> Thus, I think, anyone's opinion on such matters is bound to be more
> subjective and philosophical than empirical & objective.
That's fine. I can buy that. But when I asked Eliezer why he believed we
were toast if his project didn't come off, he sent me to Friendly AI, which
doesn't contain an answer to the question. If he just believes it because
his gut tells him, then why doesn't he just say so?
I *don't* believe it. But I freely admit my biases. I don't believe it
partly because I see evidence that modern societies are more dynamic,
stable, and resourceful than past ones. True, we have more deadly weapons,
and individuals are relatively more powerful than they've ever been; and
they will become increasingly so as time goes on. But society as a whole
is also better able to handle such threats than past societies were. I
think the history of computer viruses is a good example: they are easier
to write than ever before, more people are writing them, and there are far
more opportunities for them to spread. But the damage they do,
proportional to the size of the Internet, is slight compared with the
*relative* damage of, say, the Internet worm that Robert Morris released
back in '88. The knowledge to protect and defend against such things has
grown faster than the damage a small group or individual can do.
I think the same case can be made for nanocomputers, among other things.
The more people who have knowledge of these things and how they work, the
harder it will be for some rogue(s) to create something that can wreak
real havoc.
But aside from that, I simply believe WITHOUT PROOF that we are unlikely
to wipe ourselves out. Perhaps I simply do not want to believe that we
would be so unfortunate, or stupid, or apathetic as to cause all of
humanity to be wiped out. I admit that.
But my viewpoint is at least *partly* based on what I consider evidence. I
recognize that others come to different conclusions. But it is not ALL
just gut feeling. I certainly see the *possibility* of any number of
cataclysms, but possibility and likelihood are two very different things.
If Eliezer has evidence for his view that death is the alternative to his
seed AI project, and it isn't JUST gut feeling, he hasn't been forthcoming
with it so far.
Hey, I was just asking.