Re: Challenge of Design Complexity

From: Michael Lorrey (mike@lorrey.com)
Date: Tue Dec 15 1998 - 10:49:04 MST


Robin Hanson wrote:

> Eliezer S. Yudkowsky wrote:
> >> At the world level, average IQ scores have increased
> >> dramatically over the last century (the Flynn effect), as the
> >> world has learned better ways to think and to teach.
> >> Nevertheless, IQs have improved steadily, instead of
> >> accelerating. Similarly, for decades computer and
> >> communication aids have made engineers much "smarter,"
> >> without accelerating Moore's law. While engineers got
> >> smarter, their design tasks got harder.
> >
> >And, to summarize briefly my reply from the Singularity debate:
> >
> >All your examples, and moreover all your assumptions, deal with (1) roughly
> >constant intelligence and (2) a total lack of positive feedback. That is, the
> >model treats with the curve of a constant optimizing ability, which quite
> >naturally peters out. The improvements are not to the optimizing ability, but
> >to something else. The Flynn effect improves brains, not evolution. Moore's
> >law improves hardware, and even VLSI design aids, but not brains. There's no
> >positive feedback into anything, much less intelligence. With intelligence
> >enhancement, each increment of intelligence results in a new prioritized list,
> >since improving intelligence changes which improvements can be contemplated or perceived.
>
> To me these are theories of a "magic something else." Sure, you admit,
> IQ has increased, knowledge has grown, communication has improved, and
> we have better decision & design aids. But to you none of this counts
> as "feedback" because "intelligence" is "something else," something not
> captured in all the usual concepts of and measures of intelligence, and
> something which has not improved in eons. But when we learn to improve
> that something else, you say, watch out! Maybe, I say, but maybe there
> is no magic frozen now but oh so powerful something else.

I think what he's trying to get at is that the intelligence increases recorded to date are
more a matter of good education and proper nutrition during childhood allowing humanity to
reach its existing potential. The positive feedback happens when we become able to raise
that potential intelligence limit itself. Likewise, Moore's Law does not currently increase
the intelligence of the user, unless constant use is counted as part of 'proper education'.
Intelligence increases from brain/circuit interfaces would then bring Moore's Law directly
to bear on human potential intelligence.
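As a toy illustration of the distinction (not anyone's actual model, and with entirely
made-up parameters), compare a fixed optimizer drawing down a shrinking pool of easy
improvements against a loop where each gain feeds back into the optimizer itself:

    # Illustrative sketch only: growth with and without feedback into
    # the optimizer. All numbers are arbitrary stand-ins.

    def no_feedback(steps, gain=0.05):
        """Fixed optimizing ability: each increment is a constant
        fraction of a shrinking pool of easy improvements, so the
        curve flattens out (diminishing returns)."""
        capability, pool = 1.0, 10.0
        for _ in range(steps):
            delta = gain * pool   # improvement drawn from the easy pool
            pool -= delta         # the easy leaps get used up
            capability += delta
        return capability

    def with_feedback(steps, gain=0.05):
        """Each increment improves the optimizer, so the next
        increment is larger (compounding growth)."""
        capability = 1.0
        for _ in range(steps):
            capability += gain * capability   # gains feed back in
        return capability

    print(no_feedback(100))    # saturates near 11 (start + pool)
    print(with_feedback(100))  # ~131, and still accelerating

The first curve is what steady Flynn-effect-style gains look like; the second is what
improving the potential limit itself would look like.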

However, even here there will be diminishing returns, as the easy leaps are taken first
and the costlier, more drastic, lower-benefit ones come later. At some point the only
things hindering increases in intelligence will be:
a) the speed of light
b) quantum tunneling
c) the thermal conductivity of materials

So the singularity as an infinite increase in intelligence within this universe is
unlikely. There will be some limit somewhere, unless quantum computing can be used on a
scale that allows effectively instantaneous thinking.
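Put another way (again just a sketch with invented numbers): even with full positive
feedback, a hard physical ceiling turns the exponential into an S-curve.

    # Illustrative only: feedback growth capped by a hard physical
    # ceiling (speed of light, tunneling, heat dissipation). The
    # ceiling value and gain are arbitrary.

    CEILING = 1000.0   # stand-in for whatever physics ultimately allows

    def feedback_with_ceiling(steps, gain=0.2):
        capability = 1.0
        history = []
        for _ in range(steps):
            # logistic update: growth slows as capability nears the ceiling
            capability += gain * capability * (1.0 - capability / CEILING)
            history.append(capability)
        return history

    curve = feedback_with_ceiling(100)
    print(curve[10], curve[50], curve[99])  # fast rise, then flattening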

Mike Lorrey


