Why?
Assuming a symmetric cryptosystem, the algorithm has to iteratively map
a large seed consisting of many binary/word sites. This is clearly a
high-entropy Hamiltonian (a rugged energy function), causing the system
to execute a random walk in a (very) high-dimensional space. The larger
the space and the more dimensions it has, the lower the probability of
the walk stepping on its own track (cycling). Moreover, the mapping
should be strongly divergent, causing neighbouring states (voxels),
wherever the current sample is located, to diverge as quickly as
possible (without overdoing it, though).
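
To make the divergence point concrete, here is a toy Python sketch (not a
cipher; the splitmix64-style mixing function and the 64-bit state width are
just illustrative choices of mine). It iterates the same map on two seeds
that differ in a single bit and prints the Hamming distance after each step;
a sufficiently divergent map should push that distance toward roughly 32 of
64 bits within a few steps.

# Toy avalanche/divergence test for an iterated mapping (illustration only).
MASK = (1 << 64) - 1

def mix(x):
    """splitmix64-style 64-bit finalizer (well-known constants)."""
    x = (x + 0x9E3779B97F4A7C15) & MASK
    x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK
    x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK
    return x ^ (x >> 31)

def hamming(a, b):
    return bin(a ^ b).count("1")

seed = 0x0123456789ABCDEF
a, b = seed, seed ^ 1          # two neighbouring seeds, one bit apart
for step in range(1, 9):
    a, b = mix(a), mix(b)
    print(f"step {step}: Hamming distance = {hamming(a, b)} of 64 bits")
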
Clearly, a one-time pad would amount to an explicit write-up of the
Hamiltonian -- everything else is just a shorthand for it (and the pad
itself is obviously incompressible). Here lies the crux: the code/data
conglomerate is equivalent to an index, selecting one particular
Hamiltonian out of the multitude of all possible ones. So I conjecture
that a good algorithm, while containing very simple code, must also
contain a number of opaque, high-entropy tables in order to generate
good randomness at all.

Another conjecture: high-connectivity integer automata networks defined
by a high-entropy lookup table (the larger, the better), especially
those that pseudorandomly change the network's connectivity in the
course of the evolution, should produce excellent, cryptographically
suitable pseudorandomness -- better yet, a whole family of good
pseudorandom number generators.
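
For what it's worth, here is a toy Python sketch of the kind of network I
mean (illustration only, with no claim of cryptographic strength: the cell
count, in-degree, table size, XOR-of-inputs update rule, rewiring schedule
and XOR output are all arbitrary choices of mine, not a worked-out design).

# Toy integer automata network driven by a high-entropy lookup table,
# with connectivity reshuffled during the evolution (NOT a real CSPRNG).
import random

N_CELLS = 64           # number of integer sites
DEGREE = 4             # inputs per site ("connectivity")
TABLE_BITS = 16        # table indexed by 16-bit values

rng = random.Random(0xC0FFEE)                 # fixed seed, reproducible toy
table = [rng.getrandbits(16) for _ in range(1 << TABLE_BITS)]
state = [rng.getrandbits(16) for _ in range(N_CELLS)]
wiring = [[rng.randrange(N_CELLS) for _ in range(DEGREE)]
          for _ in range(N_CELLS)]

def step(state, wiring):
    """Each site XORs its DEGREE inputs and looks the result up in the table."""
    new = []
    for i in range(N_CELLS):
        x = 0
        for j in wiring[i]:
            x ^= state[j]
        new.append(table[x])
    return new

for t in range(1, 33):
    state = step(state, wiring)
    if t % 8 == 0:                            # occasionally rewire the network,
        rewire = random.Random(sum(state))    # driven by the state itself
        wiring = [[rewire.randrange(N_CELLS) for _ in range(DEGREE)]
                  for _ in range(N_CELLS)]
    out = 0                                   # emit one 16-bit word per step
    for s in state:
        out ^= s
    print(f"{out:04x}", end=" ")
print()

Swapping in a different lookup table (or a different rewiring schedule) would
give the "whole family" of generators mentioned above.
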
ciao,
'gene
P.S. I can't express myself properly, and one would have to prove several
things about the above, but I feel strongly that there is some truth to it.