From: J. R. Molloy (jr@shasta.com)
Date: Thu Oct 05 2000 - 12:08:13 MDT
Eugene Leitl clarifies,
> Of course it's an autocatalytic, positive-feedback loop, but
> the growth function would clearly saturate without a substrate
> change. However long and hard I study, I won't be able to instantly
> factor 2 kBit integers in my head, or even leap tall buildings in a
> single bound.
Right, so time matters when it comes to increasing intelligence.
Homo sapiens has increased its intelligence over the millennia, but AI could do
as well or better over the course of minutes or seconds.
> Sure we will change substrate (or create successors in the new
> substrate and grow extinct) and the growth will continue, but it would
> require extremely clever timing to keep the growth function
> continuously smooth.
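The timing problem Eugene describes can be sketched numerically. Here is a purely illustrative toy model (every parameter invented for this sketch, not anyone's actual claim): growth is logistic toward a substrate-specific ceiling, and a "substrate change" raises that ceiling. Switch early, while growth is still fast, and the curve stays lively; switch late, after saturation, and the growth rate collapses toward zero before picking up again — exactly the kink that would require "extremely clever timing" to avoid.

```python
# Toy model (all numbers invented): logistic growth toward a
# substrate-specific carrying capacity K, with one substrate switch
# that raises K mid-run. Euler integration, step size dt.

def simulate(switch_step, steps=200, dt=0.1, r=1.0):
    """Return the per-step growth increments; at `switch_step`,
    raise the carrying capacity ('substrate change') from 1 to 10."""
    x, K = 0.01, 1.0
    rates = []
    for t in range(steps):
        if t == switch_step:
            K = 10.0  # new substrate: higher ceiling
        dx = r * x * (1.0 - x / K) * dt
        rates.append(dx)
        x += dx
    return rates

early = simulate(switch_step=60)   # switch while growth is still brisk
late = simulate(switch_step=150)   # switch only after saturation
# A late switch means the growth rate first dwindles to nearly nothing:
print(min(late[100:150]) < 0.1 * min(early[50:100]))
```

The point of the sketch is only qualitative: a positive-feedback loop on a fixed substrate flattens out, so the overall curve is smooth only if each jump to a new substrate lands before the previous curve has gone flat.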
Yes, I've noticed from reading about increases in cleverness that humans tend to
use new knowledge to extend their power base. This is also known as war.
Punctuated equilibrium in the growth of human intelligence does not look smooth.
> (If I see this happening, I'll start believing in weird shit like
> fairies, the best of all possible worlds, spacetime singularities as
> computers and the Omega point.
If you start believing in such weirdness, some will say that confirms this is
the best of all possible worlds.
>blanch<
> Sure, Moravec claims the log plot stays linear despite repeated
> substrate changes, but I think his metric is rigged. We'll
> see where it goes in the next decades; perhaps I'll become an acolyte
> of the Church of Singularity yet).
Make sure it's the *First* Church of Singularity, because you know how those
denominational splits tend to dilute the true and authoritative word of Bob
Sing.
--J. R.
The obscurest epoch is today. STEVENSON
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:25 MST