From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Wed Aug 17 2005 - 18:44:16 MDT
Richard Loosemore wrote:
>
> Nobody can posit things like general intelligence in a paperclip
> monster (because it really needs that if it is to be effective
> and dangerous), and then at the same time pretend that for some
> reason it never gets around to thinking about the motivational
> issues that I have been raising recently.
>
> [snip]
>
> Does anyone else except me understand what I am driving at here?
>
Yep, I think so. You are positing one type of AGI architecture, and the
other posters are positing a different type. In your type, the AGI's act
of "thinking about" its goals results in those goals changing into
something quite different. In the other type, this does not occur. You
suggest that such a change must occur, or at least is very likely to
occur. You have provided some arguments to support this suggestion but,
so far, they have all had big holes blown in them. Got any other
arguments to support your position?
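
To pin down the distinction in toy form, here is a minimal sketch. It
assumes nothing about either design beyond what is said above; the class
names and the reflect/revise methods are illustrative inventions, not
anyone's actual architecture.

    # Hypothetical sketch only: contrasts the two architectures under debate.
    # All names here are assumptions made for illustration.

    class ReflectiveGoalAGI:
        """Architecture where reflection can rewrite the goal content."""
        def __init__(self, goals):
            self.goals = list(goals)

        def reflect(self):
            # Thinking about motivation feeds back into the goals themselves.
            self.goals = self._revise(self.goals)

        def _revise(self, goals):
            # Placeholder for whatever deliberation would produce; unspecified.
            return goals

    class FixedGoalAGI:
        """Architecture where reflection reads goals but never modifies them."""
        def __init__(self, goals):
            self._goals = tuple(goals)  # goal content is immutable

        def reflect(self):
            # The system can reason *about* its goals (model them, plan with
            # them), but that reasoning has no write access to the goal
            # representation.
            return [f"considering: {g}" for g in self._goals]

In the second design, "thinking about the motivational issues" happens,
but by construction it cannot turn the system into something with
different goals.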
Michael Roy Ames
Singularity Institute For Artificial Intelligence Canada Association
http://www.intelligence.org/canada