From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Mon Feb 13 2006 - 16:15:02 MST
Hi Philip,
What makes AI unique in this context is that it could cause our extinction independent of anyone's desire for world domination or extinction. It need only be poorly programmed. So, in that sense, it poses a greater risk. On the other hand, I think our only real chance to avoid massive worldwide deaths lies with a very-near-term friendly AI in responsible hands. Such an AI could disable, or destroy if necessary, these psychopathic "world-reformers".
Jeff
Philip Goetz <philgoetz@gmail.com> wrote:
Numerous people on this list are truly alarmed at the prospect of AIs
wiping out humanity. It seems to me that it would be easier to
develop existential-threat diseases than to develop AI. For instance,
it is already known how to make a 100%-lethal smallpox strain
resistant to vaccines, antibiotics, and natural immunity. If
someone were to modify it further to have a latent period - say,
three years after infection - we would be looking at something that
could kill pretty much everyone on the planet.
Humans are prone to imagine utopian social orders, and to believe that
they know the one true way to order society so as to cure all its
ills. Such schemes have always failed. I think it is inevitable that
some new world-reformer will decide that their favorite social order
will work this time, if they can only start from a clean slate, with
no other competing social orders. This person, not the military
invader or the religious zealot, is the most dangerous; unlike our
other homicidal nutcases, they have an incentive to kill EVERYONE on
the planet except a selected few.