From: Christopher Healey (CHealey@unicom-inc.com)
Date: Wed May 19 2004 - 10:56:34 MDT
Well, FAI seems more a discipline of risk mitigation than anything else.
What Eliezer seems to be saying is that any black-box emergent-complexity solution is to be avoided almost without exception, because if you don't understand the mechanism, you've ceded precious design influence you could have brought to bear on the task. You could say that, when it comes to self-learning emergent complexity, we understand a little about how to implement something whose behavior we cannot predictably model through generalization. At least that is my impression, based on my limited experience.
So to minimize the chance of an undesirable outcome, we should use techniques we CAN predict wherever possible. Even where we lack any theoretical basis to quantify the assurance of predictability, we can take actions that trend toward predictability, rather than use techniques that are known to reduce it. In other words, by maximizing the surface area of our predictable influences against the rock-hard problem of FAI, we're more likely to push it in the direction we seek. Using a primarily emergent approach is more akin to randomly jamming an explosive charge under that rock and hoping it lands exactly 1.257m from the origin along a specific heading.
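To put a rough number on that metaphor, here is a toy Monte Carlo sketch in Python (my own illustration, and every constant in it is an arbitrary assumption): a hundred small pushes whose individual error we can bound land within 5cm of the 1.257m mark essentially every time, while a single high-variance jolt that is merely correct on average almost never does.

    # Toy sketch, illustrative only; all constants are arbitrary assumptions.
    # Contrast many small, well-characterized pushes with one explosive jolt.
    import random

    TARGET = 1.257   # the desired displacement from the metaphor above
    TRIALS = 10_000

    def incremental_pushes():
        # 100 small pushes, each with a tiny, bounded error we understand.
        pos = 0.0
        for _ in range(100):
            pos += TARGET / 100 + random.gauss(0, 0.001)
        return pos

    def explosive_charge():
        # One poorly-understood jolt: right on average, huge variance.
        return random.gauss(TARGET, 0.5)

    def hit_rate(method, tol=0.05):
        return sum(abs(method() - TARGET) < tol for _ in range(TRIALS)) / TRIALS

    print("incremental pushes within 5cm:", hit_rate(incremental_pushes))
    print("explosive charge within 5cm:  ", hit_rate(explosive_charge))

The numbers themselves don't matter, only the shape of the argument: predictable influence compounds, while variance you can't characterize swamps good intentions.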
The problem is still there, of course. Warren Buffett has a first rule for getting rich: Don't lose money! In the absence of statistically assured predictability, a similar path seems prudent here. Don't knowingly cede predictability!
Chris Healey
________________________________
From: owner-sl4@sl4.org on behalf of Aubrey de Grey
Sent: Wed 5/19/2004 11:26 AM
To: sl4@sl4.org
Cc: ag24@gen.cam.ac.uk
Subject: Re: ethics
This is just to say that I hope this discussion continues, and especially
that Eliezer finds time to set out his refutation of John's point in a
fair bit of detail, because it is the key problem that I have always had
with FAI of whatever form. I have never had time to delve thoroughly
enough into the field to discover a cogent refutation (or lack of one!).
By "refutation" I only mean a minimal one: I can't see how the problem
of unpredictability of complex self-learning systems can be avoided even
in principle.
Aubrey de Grey