From: Kate Riley (kate_riley7@hotmail.com)
Date: Sun Dec 12 1999 - 17:28:54 MST
>The goal of AIs is to create something substantially smarter than a
>human in all domains, from science to philosophy, so that there's no
>question of who judges the AI to be intelligent; the AI is a better
>judge than we are.
>
>The purpose of AI is to create something substantially smarter than
>human, bringing about the next step into the future - the first truly
>substantial step since the rise of the Cro-Magnons - and ending human
>history before it gets ugly (nanotechnological warfare, et cetera). We
>don't really know what comes after that, because the AIs are smarter
>than we are; if we knew what they'd do, we'd be that smart ourselves.
>But it's probably better than sticking with human minds until we manage
>to blow ourselves up.
I must admit that this puzzles me. If we create such a thing and always
assume that it is the best judge in all situations, how do we know when it
is mistaken? What happens if the AI decides, in its expansive wisdom (or
perhaps in one of its inevitable flaws), that the human race should not
exist, and decides to pull the plug? Would you fight it? Or decide that
since the AI is smarter than you, it must be right, and willingly lay down
your life for the "greatest good"?
Kathryn Riley
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:06:03 MST