From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jun 03 2004 - 17:38:19 MDT
> But, in any case, building a very clever system to reach
> a goal (Friendliness) seems to me to be more in line with
> what Eliezer is doing than building a generalized,
> humanlike person. Since it seems easier to build that
> than a humanlike person, it would be reasonable to worry
> about the attractors that other projects might fall into.
I'm not sure why you think it's easier to build this kind of
single-goaled, super-powerful optimization process than to build a
human-level self-improving general intelligence.
One important point is that we have an example of a human-level general
intelligence -- billions of examples, in point of fact. But we have no
examples of the kind of optimization process Eliezer is now proposing,
so to construct one we must proceed entirely on the basis of theory and
experimentation. And IMO current mathematical and computing theory does
not bring us very far toward knowing how to create this kind of
optimization process within reasonable computational space and time
constraints.
-- Ben G