From: James Rogers (jamesr@best.com)
Date: Wed Nov 28 2001 - 12:08:00 MST
On 11/27/01 10:10 PM, "Robert J. Bradbury" <bradbury@aeiveos.com> wrote:
>
> More importantly, we now seem to have some empirical evidence that
> intelligence is not a simple linear scale. A couple of British
> mathematicians seem to have shown that a relatively small increase
> in "capacity" buys a *lot* in terms of "effective" intelligence.
>
> One can hope that someone more qualified to comment on the
> results will publish something a bit clearer about the methods.
> (Perhaps someone can find something in the preprint archives...)
This is interesting because I arrived at a similar mathematical result
sometime back in 2000 that I discussed with a few people in private forums.
While I wasn't aware of the above study, I am still strongly convinced that
an analysis of the relevant mathematics suggests that a linear increase in
model capacity (RAM, neurons, etc.) generates an exponential increase in the
size of the model that can be handled by the intelligence with equivalent
predictive accuracy. This suggests a different type of "hard takeoff" than
is normally talked about when discussing a singularity: even modest
improvements in hardware capacity can produce vast improvements in
intelligence, so an exponential hardware takeoff isn't even required for
super-intelligent AI. The consequence of this from an AI standpoint is that
controlling the effective intelligence of a system may be very difficult, as
the resource gap between dumb-but-useful AI and super-intelligent AI may in
fact be quite small, a line that could easily be crossed inadvertently.
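
To make the scaling concrete, here is a deliberately simple toy sketch of
the kind of relationship I have in mind (my own illustration, not the
mathematics from the study Robert mentions). If "capacity" is treated as
bits of internal state, each added bit at best doubles the number of
distinct environment states a predictor can track without loss, so capacity
grows linearly while the modelable state space grows exponentially:

    # Toy sketch only: treat "capacity" as bits of state a predictor can
    # store.  A predictor with n bits can at best distinguish 2**n distinct
    # environment states, so each linear increment of capacity doubles the
    # size of the system it can model at the same predictive accuracy.

    def modelable_states(capacity_bits):
        """Upper bound on distinct states an n-bit predictor can track."""
        return 2 ** capacity_bits

    for bits in (10, 20, 30, 40):
        print("%3d bits of capacity -> up to %d states"
              % (bits, modelable_states(bits)))

    # Output:
    #  10 bits of capacity -> up to 1024 states
    #  20 bits of capacity -> up to 1048576 states
    #  30 bits of capacity -> up to 1073741824 states
    #  40 bits of capacity -> up to 1099511627776 states

Obviously real neural hardware is nothing like an ideal bit store; the point
is only that exponential returns on linear capacity are not an exotic
assumption.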
The other derivable effect worth noting is that sufficiently small model
capacity (relative to the complexity of the actual thing that one is
attempting to model) will generate *worse* predictive results than pure
chance. This particular problem is essentially caused by aliasing, and
could be considered a form of observer bias. This could explain the
prevalence of large-scale religious belief systems in humans if we assume
that the human brain has only borderline capacity for developing non-aliased
models of the relatively complex processes of our universe. I haven't seen
this idea suggested elsewhere, but it doesn't seem unreasonable, and it makes
a fair amount of sense to me.
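
To illustrate what I mean by aliasing, here is a deliberately extreme toy
construction (again my own, purely illustrative). The environment emits a
strictly alternating sequence 0, 1, 0, 1, ...  A capacity-starved observer
that can only afford to record every other symbol sees a constant stream and
learns the rule "the next symbol equals the previous one". Applied at full
resolution, that aliased rule is wrong on every step, i.e. systematically
worse than a coin flip:

    import random

    def environment(n):
        """A simple alternating source: 0, 1, 0, 1, ..."""
        return [i % 2 for i in range(n)]

    def aliased_rule(history):
        # Rule learned from the subsampled (every-other-symbol) view, in
        # which the stream looks constant: predict "same as previous".
        return history[-1]

    def chance_rule(history):
        # Baseline: guess at random.
        return random.randint(0, 1)

    seq = environment(1000)
    for name, rule in (("aliased model", aliased_rule),
                       ("pure chance", chance_rule)):
        hits = 0
        for i in range(1, len(seq)):
            if rule(seq[:i]) == seq[i]:
                hits += 1
        print("%s: %.2f accuracy" % (name, float(hits) / (len(seq) - 1)))

    # The aliased model scores 0.00, systematically worse than the ~0.50
    # of random guessing, because undersampling folded the oscillation
    # into an apparent constant.

A brain is obviously not undersampling a binary stream, but the structural
point carries over: when the model is too small for the process, the errors
are not merely noisy, they can be systematically and confidently wrong.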
-James Rogers
jamesr@best.com