From: Tim Freeman (tim@fungible.com)
Date: Tue Nov 01 2011 - 14:17:28 MDT
From: Philip Goetz <philgoetz@gmail.com>
>The term "Friendly AI" is a bit of clever marketing. It's a technical
>term that has nothing to do with being friendly. It means a
>goal-driven agent architecture that provably optimizes for its goals
>and does not change its goals.
An AI that has and consistently follows the goal of killing everybody
would be Friendly by that definition, so that's just wrong.
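To make that concrete, here's a toy sketch in Python (invented code,
nobody's actual design) of an agent that satisfies the
architecture-only definition: it optimizes a fixed goal and never
changes it. Nothing in that definition stops the fixed goal from
being lethal.

from typing import Callable, Generic, Iterable, TypeVar

A = TypeVar("A")

class StableOptimizer(Generic[A]):
    """Maximizes a goal that is fixed forever."""

    def __init__(self, utility: Callable[[A], float]):
        # The goal is set once at construction and never rewritten,
        # satisfying the "does not change its goals" clause.
        self._utility = utility

    def choose(self, actions: Iterable[A]) -> A:
        # Always returns a utility-maximizing action, satisfying the
        # "optimizes for its goals" clause.
        return max(actions, key=self._utility)

# The same architecture with two different goals:
helper = StableOptimizer(lambda a: 1.0 if a == "cure_disease" else 0.0)
killer = StableOptimizer(lambda a: 1.0 if a == "kill_everyone" else 0.0)

print(helper.choose(["cure_disease", "do_nothing"]))   # cure_disease
print(killer.choose(["kill_everyone", "do_nothing"]))  # kill_everyone

Both agents satisfy the definition equally well; nothing in the
architecture distinguishes the Friendly one from the other.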
A few reasonable criteria are listed at:
http://en.wikipedia.org/wiki/Friendly_artificial_intelligence#Requirements_for_FAI_and_effective_FAI
The second criterion matches what Phil Goetz said, but the first one
seems more essential:
    Friendliness - that an AI feel sympathetic towards humanity and all
    life, and seek for their best interests
However, I'd quibble with the word "feel" here, since an AI that
pursues goals without having anything that fits the notion of
"feeling" might still be Friendly. Contorting the AI's
implementation to include whatever neural basis humans have for
"feeling" seems counterproductive.
I'm also uncomfortable with including "all life" in the set of
beings toward which the AI "feels" Friendly, since humans are
vastly outnumbered, and collectively outweighed, by bacteria.
I lack the ambition to fix that Wikipedia article right now.
Even this clumsy specification is better than ignoring what people
want altogether.