From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:38:42 MST
Social issues
* Humans wouldn't accept being ruled by machines.
o Rebuttal synopsis: Our society has gradually come to
accept things that would have sounded preposterous a thousand years
ago (racial equality, capitalism, democracy, etc.).
* An AI would just end up being a tool of whichever group builds
or controls it.
o Rebuttal synopsis: A sufficiently talented group could
build a superintelligent AI that serves only that group, but this
outcome is not inevitable. It is a possible failure mode, and we need
to take steps to prevent it.
* Power-hungry organizations will race to develop AI technology
and use it to dominate others before there's time to create truly
Friendly AI.
o Rebuttal synopsis: This would be a very bad thing, so if
there's a significant possibility of it happening, we need to get
cracking on our own project.
* An FAI would only help the rich, the First World, uploads, or
some other privileged class of elites.
o Rebuttal synopsis: Human social status would mean very
little to any kind of AI. The AI has no real reason to care how many
green pieces of paper we have.
* We need AI too urgently to let our research efforts be delayed
by the demand for guaranteed Friendliness.
o Rebuttal synopsis: However bad the world may be without
AI, a failed project that yields an Unfriendly AI (UFAI) could
*destroy* the world and everything in it, which is much worse.
- Tom