From: Randall Randall (randall@randallsquared.com)
Date: Wed Aug 17 2005 - 03:03:34 MDT
On Aug 17, 2005, at 1:58 AM, Richard Loosemore wrote:
>
> d) If the Seed AI does not make the choice to be Friendly (on the side
> of "convergence" (see parallel post in reply to Ben)), then it will
> allow rival projects, etc. etc. It All Ends In Tears. For reasons
> discussed elsewhere, I think this extremely unlikely. This is the
> only case that could generate a paperclip maximiser that did not have
> general intelligence and awareness of such issues as motivation.
In service of what goal is such a choice made? If a goal that
directs a choice to be "Friendly" already exists, then the system
was Friendly all along. If no such goal exists, it cannot choose
that way (except in error).
Choices imply reasons to choose, which are just another name
for goals. Humans may not have a "highest" goal, but building
an optimization process without one would leave the ultimate
direction of the process unset.
It may be that there is no way to build a system as intelligent
as a human with a single goal, but that doesn't seem the way to
bet right now.
--
Randall Randall <randall@randallsquared.com>
"Lisp will give you a kazillion ways to solve a problem.
But (1- kazillion) are wrong." - Kenny Tilton