"Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
> Anders Sandberg wrote:
> > True. My point is that if you want to build something that functions
> > in the real low-entropy world, then you have a good chance. But if it
> > is only going on inside the high-entropy world of algorithms then you
> > will likely not get any good results. This is why I consider
> > "transcendence in a box" scenarios so misleading. Having stuff
> > transcend in the real world is another matter - but here we also get
> > more slow interactions as a limiting factor.
>
> Okay, I don't understand this at all. I don't understand why you think
> that there's higher entropy inside the box than outside the box. The box
> is a part of our Universe, isn't it? And one that's built by highly
> unentropic programmers and sealed away from thermodynamics by a layer of
> abstraction.
One of the odd things about algorithmic complexity is that a very
complex (or, more properly, random) pattern can be part of a very
simple pattern. The whole set of natural numbers has very low
complexity - it can easily be generated by a simple program - but the
majority of individual numbers are complex: you need a program almost
as long as the number itself to generate it.
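To make the contrast concrete, here is a minimal counting sketch in
Python (my own illustration, not anything from the thread): fewer than
2^(n-k) programs are shorter than n-k bits, so at most a 2^-k fraction
of all n-bit strings can be compressed by k or more bits.

def fraction_compressible(n, k):
    """Upper bound on the fraction of n-bit strings that any
    description shorter than n - k bits could possibly cover."""
    short_programs = 2 ** (n - k) - 1   # bit strings shorter than n - k bits
    all_strings = 2 ** n                # all n-bit strings
    return short_programs / all_strings # strictly less than 2**-k

for k in (1, 8, 20):
    print(k, fraction_compressible(64, k))

So the set as a whole is trivial to describe, while almost every
individual member is not.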
The universe encompasses a lot of environments of different entropy,
some very regular, some very chaotic. On the whole, I guess it is
fairly simple, but that simplicity doesn't help much since it is likely
a simplicity on the level of fundamental physics. All the contingent
and emergent stuff going on above that level has a higher level of
complexity, and it poses challenges for intelligent systems. In
particular, while the code in a computer may be implemented on a very
clean system, the space of possible programs is definitely an example
of a high-entropy environment. It is a complex system on top of a
simple one, and likely hard to learn.
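As a rough, hedged illustration of how ill-behaved that space is (a toy
experiment of my own construction, nothing more): sample random
character strings and count how few of them even parse as Python,
never mind compute anything useful.

import random
import string

def random_source(length=20):
    # a random string of printable characters, standing in for a
    # "generic" point in program space
    return "".join(random.choice(string.printable) for _ in range(length))

def fraction_parseable(trials=10000):
    ok = 0
    for _ in range(trials):
        try:
            compile(random_source(), "<random>", "exec")
            ok += 1
        except (SyntaxError, ValueError):
            pass
    return ok / trials

print(fraction_parseable())   # typically a tiny fraction, mostly accidental comments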
As I mentioned in my previous post, human programmers of course deal
with this by writing redundant, simple code that keeps to the more
well-behaved subspaces of programming. Very few venture out into the
rest (the obfuscated code contests sometimes show what "generic" code
likely looks like). I guess you can do the same with your AI (I don't
see any alternative), but it will then be limited by what can be
expressed simply.
> > Hmm, my description may not have been clear enough then. What I was
> > looking at was a sequence where program P_n searches for a replacement
> > program P_{n+1}.
>
> Yep, and it's possible to say all kinds of reasonable things about P_x
> searching for P_y that suddenly become absurd if you imagine a specific
> uploaded human pouring on the neurons or a seed AI transferring itself
> into a rod logic. Does it even matter where the curve tops out, or
> whether it tops out at all, when there are all these enormous improvements
> dangling *just* out of reach? The improvements we *already know* how to
> make are more than enough to qualify for a Singularity.
Exactly which improvements are dangling just out of reach? I disagree
that we would get a Singularity with a capital 'S' from the
improvements we currently know how to make. Sure, million-fold
speedups would make a great deal of difference, but I have not seen
any evidence that they would not be eaten up by software
complexity. Just adding neurons to a brain doesn't make it smarter at
all (just look at the whales); you need structure (gained through
experience) to get any benefit. Currently we have not the faintest
idea how to do Vingean intelligence amplification - sure, some
improvements in memory and thinking appear doable given enough
technology, but will they really provide any qualitative breakthrough?
What is really needed to get an intelligence bootstrap going is an
improvement in the quality of the bootstrapping entity. Quantity
helps, but an uploaded dog will remain a dog even after a billion
subjective years.
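Back-of-the-envelope on the speedup point (my numbers only, purely
illustrative): if finding a better program means searching an
exponentially large space of candidates, then a million-fold hardware
speedup extends brute-force reach by only about 20 bits of program
length.

import math

speedup = 10 ** 6
extra_bits = math.log2(speedup)   # extra program bits reachable by exhaustive search
print(f"extra bits of reach from a million-fold speedup: {extra_bits:.1f}")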
> > > Finally, if the improvement curve is so horribly logarithmic, then why
> > > didn't the vast majority of BLIND evolution on this planet take place in
> > > the first million years? If increasing complexity or increasing
> > > improvement renders further improvements more difficult to find, then why
> > > doesn't BLIND evolution show a logarithmic curve? These mathematical
> > > theories bear no resemblance to *any* observable reality.
> >
> > You see it very much in alife simulations. This is why so many people
> > try to find ways of promoting continual evolution in them; the holy
> > grail would be to get some kind of cambrian explosion of
> > complexity.
>
> Yes, and you see it in Eurisko as well. Where you don't see it is
> real-life evolution, the accumulation of knowledge as a function of
> existing knowledge, human intelligence as a function of time, the progress
> of technology (*not* some specific bright idea, but the succession of
> bright ideas over time), and all the other places where sufficient seed
> complexity exists for open-ended improvement.
I have my reservations about human intelligence as a function of time
increasing exponentially; the Flynn effect seems to be linear, which
of course is better than logarithmic growth, but that could just be
due to too short a sampling period.
I think open-ended improvement is possible. We have no disagreement
there. But I think it is as yet unknown what makes it possible. Here
is my own theory: the reason human knowledge and culture seem to be
expanding so steadily and exponentially is that the growth is a sum of
a myriad of these small logarithmic or sigmoidal evolutions. The
occasional breakthrough enables a fast expansion into a new part of
the space of thought, or even opens that part up in the first place
(like writing or the computer did). But this process is expensive (in
time and effort), and it takes a lot of diverse approaches to dig up
these breakthroughs.
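To make that theory a bit more concrete, here is a toy simulation
(entirely my own construction, with made-up parameters): sum many
small saturating breakthrough curves, with new breakthroughs arriving
at a rate proportional to the knowledge already accumulated. Each
component levels off, yet the total grows roughly exponentially.

import math
import random

random.seed(0)
RATE = 0.1                     # assumed: new breakthroughs per unit knowledge per step
active = [(0.0, 3.0, 1.0)]     # (onset, width, size) of the first breakthrough

def total_knowledge(t):
    # sum of sigmoidal contributions from all breakthroughs so far
    return sum(size / (1.0 + math.exp(-(t - onset) / width))
               for onset, width, size in active)

for step in range(61):
    t = float(step)
    k = total_knowledge(t)
    # breakthroughs arrive at a rate roughly proportional to existing knowledge
    n_new = sum(1 for _ in range(int(k) + 1) if random.random() < RATE)
    for _ in range(n_new):
        active.append((t + random.uniform(0.0, 5.0),
                       random.uniform(1.0, 5.0),
                       random.lognormvariate(0.0, 0.5)))
    if step % 10 == 0:
        print(f"t={t:4.0f}  knowledge={k:8.1f}  breakthroughs so far={len(active)}")

The exponential shape comes from the feedback (knowledge makes further
breakthroughs more likely), not from any single curve.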
> > The question is how you measure evolutionary improvement. In alife you
> > can just measure fitness. In real life the best thing is to look at
> > the rate of extinction, which could be seen as a measure of the
> > average fitness of entire species. In
> > http://xxx.lanl.gov/abs/adap-org/9811003 it is mentioned that we see a
> > general decrease in extinction rate in the phanerozoic; it seems to be
> > a 1/t decline according to them.
>
> I looked over this (cool) paper, but it seems a bit suspect when
> considered as a measure of evolutionary improvement rates, given that I've
> yet to hear any argument for functional complexity accumulating at
> inverse-t (*across* successions of punctuated equilibria, not within a
> single equilibrium). It sure doesn't show up in any graphs of
> progress-with-time that I'm familiar with; those graphs usually resemble
> the more familiar picture where half the total progress happened merely
> within the last century or the last million years or whatever.
Remember that we are looking at a fairly small piece of a much longer
period. I expect that if you extended the analysis across the entire
history of life on Earth you would get something like what you
describe. But it could also be that the 1/t behavior is due to the
curve being the sum of many small punctuated equilibria.
> I'm sorry, but this still looks to me like the "Each incremental
> improvement in human intelligence required a doubling of brain size"
> argument.
How?
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y