Re: purpose of AIs

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Dec 12 1999 - 18:03:42 MST


Kate Riley wrote:
>
> I must admit that this puzzles me. If we create such a thing and always
> assume that it is the best judge in all situations, how do we know when it
> is mistaken? What happens if the AI decides, in its expansive wisdom (or
> perhaps in one of its inevitable flaws), that the human race should not
> exist, and decides to pull the plug? Would you fight it? Or decide that
> since the AI is smarter than you, it must be right, and willingly lay down
> your life for the "greatest good"?

We are not talking about a bleeding jumped-up version of Windows 98. We
are talking about something that is no more a big computer program than
a human is a big amoeba. We are talking about a Power, compared to
which human minds and human-equivalent AIs are cousins. If the
resulting mind has any of the stereotypical characteristics of computer
programs (or for that matter of humans), it's too dumb to be a Power.
If there's even the remotest possibility of a human (or a
human-equivalent AI) outthinking it, it's not a Power. If there's even
the faintest chance of effective human (or AI) resistance against it,
it's not a Power. We are talking about an entity with billions or
quintillions of times the raw processing power of the entire human
race. Not your bleeding microwave oven.

So, yes, as if it mattered, I'd willingly lay down my life for the
greatest good. My reasoning: if somehow I fought the Power, and somehow
I won, and then in the due course of trillions of subjective years of
life my mind expanded beyond mortal limits whether I deliberately tried
to upgrade it or not, then, when I eventually became intelligent
enough, I would decide to commit suicide and reinstate the original
Power or something like it, thus leaving the situation pretty much
unchanged, except for a lot of wasted time and computing power.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

