Xiaoguang Li, <xli03@emory.edu>, writes:
> first of all, some seem to believe that current technological
> growth trends fit a hyperbolic curve and will therefore reach a true
> mathematical singularity some time in the near future. however, my
> calculus reference calls the hyperbolic functions "a class of exponential
> functions," and a cursory examination seems to indicate that hyperbolic
> functions do not grow to infinity in finite time. what gives?
I think you are referring to hyperbolic sine and cosine, which are built from exponential functions and stay finite for any finite argument. Looking at, say, human population growth, the curve is something like k / (2040 - y), which is a true hyperbola and goes to infinity in the year 2040.
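A quick numerical sketch (in Python; the constant k, the dates, and the 2040 blow-up year are just placeholders from the example above) shows the difference:

    # sinh grows only exponentially (finite for any finite argument),
    # while k / (2040 - y) is a true hyperbola that blows up as y -> 2040.
    # The constant k and the dates are arbitrary.
    import math

    k = 1e9
    for year in (2000.0, 2030.0, 2039.0, 2039.9, 2039.999):
        hyperbola = k / (2040.0 - year)        # diverges as year -> 2040
        exp_class = math.sinh(year - 2000.0)   # "hyperbolic function", merely exponential
        print(year, hyperbola, exp_class)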
> in fact, a very brief review of the most elementary functions
> gives one the impression that almost all functions that reach a
> singularity involve division of two distinct forces or what is essentially
> the same, a logarithm. examples include rational functions
> (f(x) = -1 / (x - 5)), logarithmic functions (f(x) = -ln(5-x)), and
> trigonometric functions (f(x) = tan(x)).
>
> thus a speculative leap: at least two interacting forces are
> necessary to reach a true mathematical singularity, and no single force
> can "grow" to infinity in finite time. that is, faster-than-exponential
> growth (x!, e^(x^x), etc.) does not imply singularity per se.
It may be true that most functions which go to infinity in finite time involve a division operation, but I don't think you can leap from that to say that two interacting "forces" are involved. If you write a differential equation like f'(x) = f(x)*f(x), there are no obvious candidates for two forces, but you will get f(x) = 1/(C - x) as a solution, which goes to infinity as x approaches C.
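To see this concretely, here is a minimal Python sketch (forward Euler with an arbitrary step size; 1e9 stands in for "infinity"):

    # Forward-Euler integration of f'(x) = f(x)^2 with f(0) = 1.
    # The exact solution is f(x) = 1/(1 - x), which blows up at x = 1,
    # even though only a single "force" appears in the equation.
    f, x, dx = 1.0, 0.0, 1e-4    # dx is an arbitrary step size
    while f < 1e9:               # treat 1e9 as "effectively infinite"
        f += f * f * dx          # Euler step for f' = f^2
        x += dx
    print("blow-up near x =", x) # prints a value close to 1.0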
> the above leads me back to vinge's interpretation of the
> _technological singularity_: that the singularity is a dialectic between
> accelerating growth of technology on the one hand and static human nature
> on the other. what the singularity delineates is the ever-shrinking gap
> between the rate of change that a human being can accommodate and the
> amount of change that transpires in unit time. it describes a progression
> of rapidly shrinking "prediction horizons" at the end of which is a
> darkness in which the human being is essentially blind -- unable to guess
> what is to come in the very next moment.
Yes, this is the conventional interpretation, but if we take it literally I think it is a problematic definition. The problem is that the rate of change a human being can accommodate is likely to increase as technology improves. People today are comfortable with rates of change far higher than any in history. Toffler wrote "Future Shock" in the 1970s, but here we have the business world practically reinventing itself in a ten-year period and nobody bats an eye.
With exponential growth, the absolute rate of change becomes ever higher, but the rate of change relative to current levels is constant. Is this supposed to be fast enough to cause a Vingean singularity? It isn't clear to me.
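In Python terms (a trivial sketch; the growth constant is arbitrary):

    # With f(t) = exp(k*t), the absolute rate f'(t) = k*exp(k*t) grows
    # without bound, but the relative rate f'(t)/f(t) is the constant k.
    import math

    k = 0.05                       # arbitrary growth constant
    for t in (0, 10, 50, 100):
        f = math.exp(k * t)
        print(t, k * f, k * f / f) # absolute rate climbs; relative rate stays 0.05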
Perhaps, by analogy with black holes, it would be better to define a "prediction horizon" as the point beyond which events are hard to understand from some given perspective. The term Singularity would then be reserved for changes which are infinitely powerful. In that case, yes, we have passed through a prediction horizon from the point of view of the cave man, or arguably even from the point of view of someone living 200 years ago. And we may well pass through another prediction horizon in 20-30 years with AI and nanotech, from our point of view. But that may not be a Singularity.
> however, in order for the preceding to be true, the force behind
> technological growth must be independent of the human being (or else it
> will most likely level off as a bacterial population exhausting its
> nutrient supply). that is, to obtain two truly interacting forces, the
> human being must create her nemesis, the self-modifying intelligence.
Here I think your speculations are going off track. This two-force idea can't be taken so literally as to imply that AI is necessary or sufficient for a Singularity.
> all of the above suggests that the _technological singularity_ is
> in some sense a trick of perspective -- technology never really grows
> infinitely fast, but we won't be able to tell the difference if we grow
> too slowly or not at all.
This is more like what I was proposing to call a horizon.
> ergo, the implications can be summarized as follows: with AI, the
> singularity is possible; with IA, the singularity is avoidable; and with
> both, the singularity is uncertain. if this is true, then the race between
> AI and IA may be the driving conflict on the road to the singularity.
With IA, there is no reason to expect the biological limitations of humans to cap growth rates the way bacteria exhaust their nutrient supply. I don't see a clear reason why AI would be more likely to lead to a singularity than IA.
Hal