From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 28 2001 - 02:22:48 MDT
Zero Powers wrote:
>
> At some point in time, whether it's a hundred, a thousand, or a trillion
> years into the future, I am bound to run out of new experiences, interesting
> questions, new frontiers. I don't imagine any of us will ever be
> omniscient, so I suppose there will always be *something* new to learn or
> some ultimate theory to disprove. But once it starts getting to the point
> where it takes 99.9% of the effort to make a 0.1% advance, no doubt about
> it...I'm bored.
That is the pessimistic theory.
The optimistic theory states that the amount of Fun Space to explore is an
exponentially increasing function of processing power.
The optimistic theory appears to be more in accordance with observed
anthropological history. See "What is Seed AI?" on the SIAI website for a
general exposition of why intelligence has so far been observed to
increase exponentially with linear increases in computational power,
rather than the reverse (linearly with exponential increases in
hardware).
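To make the contrast concrete, here is a toy model in Python (every
functional form in it is an assumption chosen for illustration, not
anything derived): under the pessimistic theory, Fun Space grows only
polynomially with capacity, so a mind consuming experience at a rate
proportional to its speed exhausts it quickly; under the optimistic
theory, Fun Space grows exponentially with capacity, and the frontier
recedes far faster than it can be consumed.

    import math

    def fun_space_pessimistic(capacity):
        # Assumed form: Fun Space grows polynomially with capacity.
        return capacity ** 2

    def fun_space_optimistic(capacity):
        # Assumed form: Fun Space grows exponentially with capacity.
        return math.exp(capacity)

    def years_until_bored(fun_space, capacity, speedup):
        # Experience is consumed in proportion to subjective speed, so
        # boredom arrives when speedup * t catches up to Fun Space.
        return fun_space(capacity) / speedup

    # A mind of 60 (arbitrary) capacity units, sped up a trillionfold:
    print(years_until_bored(fun_space_pessimistic, 60, 1e12))
    # ~4e-9 years: bored in about a tenth of a second
    print(years_until_bored(fun_space_optimistic, 60, 1e12))
    # ~1e14 years: roughly ten thousand times the age of the Universe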
The conclusion of the optimistic theory of Fun Space is threefold: a
finitely sized brain, probably a small multiple of current human
computational capacity, is capable of living through the current age of
the Universe at a trillionfold faster processing speed without running
out of Fun Space to explore; a superintelligence will never run out of
Fun Space for hyperexponential multiples of the current age of the
Universe; and an intelligence that can grow at *logarithmic* speeds
(never mind linear speeds!) will *never* run out of Fun Space.
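The logarithmic case can be checked directly (again a sketch, with Fun
Space assumed exponential in capacity and consumption assumed linear in
elapsed subjective time): if capacity grows as c(t) = c0 + k*ln(t), then
Fun Space grows as exp(c0) * t**k, a polynomial in time, while cumulative
experience consumed grows only linearly, so for any k > 1 the unexplored
frontier widens forever.

    import math

    C0, K = 10.0, 2.0  # assumed starting capacity and log growth rate

    def capacity(t):
        # Capacity that grows only logarithmically in time.
        return C0 + K * math.log(t)

    def fun_space(t):
        # Exponential in capacity means polynomial in time:
        # exp(C0 + K*ln(t)) == exp(C0) * t**K.
        return math.exp(capacity(t))

    def fun_consumed(t, speedup=1e12):
        # Cumulative experience grows linearly with (sped-up) time.
        return speedup * t

    for t in (1e6, 1e12, 1e18):
        print(t, fun_space(t) / fun_consumed(t))
    # The ratio grows without bound: the frontier outruns consumption
    # no matter how large the constant speedup.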
However, this optimistic theory may need to be adjusted for a minimum
*percentage* rate of growth per subjective span, if maintaining an
optimal life requires continued personal growth. Even if this percentage
rate is quite slow, e.g., 1% per subjective century, it would still soon
require exotic resource acquisition architectures (paired manufacture of
positive and negative matter bound up in a fractal Van Den Broeck
micro-warp, etc.) for human space to acquire new computational substrate
at the exponential rate needed to meet natural demand, even if the
population remained constant.
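A back-of-the-envelope calculation shows why even 1% per subjective
century escalates so fast (assuming the trillionfold speedup above): a
subjective century elapses in about three objective milliseconds, so
substrate demand compounds at 1% every few milliseconds and doubles in
under a quarter of an objective second.

    import math

    SPEEDUP = 1e12                    # trillionfold speedup, from above
    CENTURY_S = 100 * 365.25 * 86400  # one subjective century, in seconds

    # Objective time for one subjective century to elapse:
    period_s = CENTURY_S / SPEEDUP    # ~0.0032 seconds

    # Compounding periods for 1%-per-century growth to double demand:
    doublings = math.log(2) / math.log(1.01)  # ~69.7

    print(period_s, period_s * doublings)
    # Demand for new substrate doubles every ~0.22 objective seconds.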
But that's not our problem. Our problem is getting to the point where we
have this problem.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence