From: Ben Goertzel (ben@goertzel.org)
Date: Wed Dec 31 2003 - 10:48:52 MST
> >a Friendly AI must be the first being to develop strong nanotech
> on Earth,
> >or one of the first, or we are all going to die in a mass of grey goo
>
> Something about this strikes me as wrong-headed. Since
> grey goo is considerably closer in time than Friendly AI,
> an imperative like this will lead people to focus on stopping
> "strong nanotech", rather than creating Friendly AI.
Hmmm...
1)
I don't see any strong reason to believe that strong nanotech is closer in
time than Friendly AI.
Nanohype is currently at a peak, but that doesn't mean strong nanotech is
just around the corner.
Viewed dispassionately, the scientific jury is surely still out on whether
strong AI or strong nanotech will arrive first.
My own perspective is that strong AI is closer, but arguing for this
perspective is not my goal in this message.
2)
It's just not true that humans developing strong nanotech will *necessarily*
lead to destruction.
Whether strong nanotech in human hands leads to destruction depends on a
great many "details," both scientific and political.
Similarly, a lot of people thought the advent of nukes would lead to world
destruction. Well, it still could. But for a combination of technological
and political reasons, destruction certainly isn't a necessary consequence
of the existence of nuclear weapons...
What I see in these various overconfident statements is a fear of the
unknown: an attempt to impose order on the unknown future by making
unwarranted assumptions. We can't rationally assume so much about what's
coming, however much we might, emotionally, like to.
So far as preparing for the future goes, the most important thing is to get
ourselves, individually and collectively, with whatever human and/or AI
intelligence is available, into a mental position capable of dealing
adequately with the advent of wild surprises. This is much more important
than planning for any particular conjectural contingency.
-- Ben G