From: Christopher Healey (CHealey@unicom-inc.com)
Date: Wed Aug 17 2005 - 08:37:26 MDT
All,
I don't really have any new content to offer on the topic, but I've noticed a particular pattern in this conversation that might be attenuating progress a bit.
It may be prudent to explicitly draw the distinction between multiply-rooted and singly-rooted goal systems (a task I leave to those more technically versed than I). It would appear that some individuals in this discussion are relatively clear in their own minds regarding the two, but may overlook the fact that others are not as clear. I posit that some of the disagreement here results from toggling one's reasoning back and forth between these different types of goal systems, conflating them, and in the process undermining any consistent conclusion.
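(For concreteness, here is a rough sketch of how I picture the difference; the class and goal names below are purely illustrative, my own invention rather than anything drawn from CFAI or an actual design.)

    # Illustrative sketch only; all names are hypothetical.
    class Goal:
        def __init__(self, description, parent=None):
            self.description = description
            self.parent = parent  # None marks a root (top-level) goal

    # Singly-rooted: every subgoal ultimately justifies itself by
    # reference to a single supergoal.
    friendliness = Goal("be Friendly")
    learn = Goal("model the world", parent=friendliness)

    # Multiply-rooted: several independent top-level goals, with no
    # single supergoal that arbitrates among them.
    survival = Goal("self-preservation")
    curiosity = Goal("satisfy curiosity")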
I suppose that this amounts to a kind of anthropomorphizing, but I believe a little more precision in our language can help us "play our cards face up", letting us collectively identify such mistakes before they lead us too far astray.
In a similar light, I've also noticed that some who have read CFAI tend to argue the causality of AGI behavior *as if* the reasoning laid out there is a given and somehow constrains AGI behavior by default (i.e. this *is* how it works, rather than: this is how it *might* work if we choose to build it that way). This CFAI-morphism is just as bad, for the same reasons. It subtly blinds us to the full space of possible outcomes. And as many threads here have amply argued, that tends to have a nasty effect on our estimation of where the real dangers lie.
And that's really what our goal is, right? Not simply to address the dangers we *are* aware of, but to address the dangers that *actually* threaten us. To tune the structure of our reasoning, recognizing that as long as it remains materially in error, we will without a doubt miss something critically important.
-Chris Healey