Anders Sandberg wrote:
> I think de Garis is indeed exaggerating and possibly
> worsening the same problem he says he wants to avoid by
> setting up a conflict beforehand in a very cartoonish
> manner, but one should give it to him that he at least
> tries to think ahead on the drastic philosophical and
> ideological implications AI would have. Now if we could
> produce some reasoned debate, things would get better.
I think you give de Garis too much credit; his vision of a
"gigadeath" war does nothing to address the philosophical
and ideological implications of AI. I think the "threat" of
AI can be broken down into two components: that this
intelligence is artificial (which, I think you'll agree, is
needless discrimination), and that this intelligence has the
potential to be far more capable than us. It is the idea
that a more capable intelligence should be considered in the
same light as nuclear weapons that really disturbs me. Is
there a point where feeling threatened by another person's
capabilities becomes a legitimate concern? (Perhaps if
resources were scarce, but even then who is more likely to
use them effectively?)
BM
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:14:17 MDT