From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 24 2000 - 14:28:10 MDT
How can you debate something (AI) where it is impossible to know what
the outcome will be? No one can say for sure whether the first AI will
kill everyone, upload everyone, or do something else entirely. I suppose
that leaves you with two possible discussions: how much of the risk can
be managed (and roughly what chance you are left with that the AI kills
everyone), and how that compares with the risks of _not_ developing such
an AI. You could also discuss who you would prefer to develop the first
AI, if you had to choose.
Anders Sandberg wrote:
>
> "Bryan Moss" <bryan.moss@btinternet.com> writes:
>
> > Cue sensationalism. We see the Bomb, we see human skulls,
> > we see a post apocalyptic world, we see de Garis looking
> > serious next to his Brain Machine. All the while de Garis
> > explains how he thinks AI has more destructive potential
> > than nuclear weapons. (Personally I find the idea of
> > comparing intelligence to explosives rather disturbing.)
>
> I think de Garis is indeed exaggerating, and possibly worsening the
> very problem he says he wants to avoid by setting up a conflict
> beforehand in a very cartoonish manner, but one should give him credit
> that he at least tries to think ahead about the drastic philosophical
> and ideological implications AI would have. Now if we could produce
> some reasoned debate, things would get better.
>
> --
> -----------------------------------------------------------------------
> Anders Sandberg Towards Ascension!
> asa@nada.kth.se http://www.nada.kth.se/~asa/
> GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y