Re: Darwinian Extropy

From: Dan Clemmensen (dgc@shirenet.com)
Date: Mon Sep 23 1996 - 19:46:32 MDT


Robin Hanson wrote:
>
> Dan Clemmensen writes:

>
> >Basically, I don't believe that we understand the basics of human
> >cognition. Therefore our attempts at self-augmentation have no firm
> >basis. We do, however, understand the basics of machine computation:
> >we can design and build more powerful computer hardware and software.
> >Since we understand this basis already, I believe that an SI can also
> >understand it. I believe that an SI with a computer component will be
> >able to design and build ever more powerful hardware and software,
> >thus increasing its own capabilities. I think that this is likely to
> >lead not just to an improvement, but to a rapid feedback process.
>
> Consider an analogy with the world economy. We understand the basics
> of this, and we can change it for the better, but this doesn't imply
> an explosive improvement. Good changes are hard to find, and each one
> usually makes only a minor improvement. It seems that, in contrast,
> you imagine that there is a long series of relatively easy-to-find
> "big wins". If it turns out that our minds are rather badly
> designed, you may be right. But our minds may be better designed than
> you think.
>

Now we're getting somewhere. I really think your analogy is
inappropriate. Our understanding of computer hardware and software
is considerably more complete than our understanding of the world
economy, and we have demonstrated the ability to keep increasing the
capabilities of both hardware and software enormously since the
development of the computer. Furthermore, computers are already being
used to assist in the further development of computing. Good changes
are not hard to find, and 18 months' worth of development results in
a doubling of capability. Yes, I do "imagine" that there is a long
series of "big wins". I base this on the recent 30-year trend. Yes,
I'm very aware that Moore's law is purely empirical and that there
are arguments that the rate must slow down for various physical
reasons. I'm also aware that similar arguments have been advanced
every year since the promulgation of Moore's law, but the rate hasn't
slowed. All of this has occurred before the advent of an intelligence
with a computer component.
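
To put rough numbers on that trend, here's a quick back-of-the-envelope
sketch in Python (the 18-month doubling is the empirical Moore's-law
figure; the rest is just arithmetic):

months = 30 * 12           # the recent 30-year trend, in months
doublings = months / 18.0  # one doubling per 18 months of development
growth = 2 ** doublings    # total capability multiplier
print(doublings)           # 20 doublings
print(growth)              # ~1048576: about a factor of a million

Twenty doublings in thirty years, or roughly a million-fold increase
in capability.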

I'm not arguing that our human minds are poorly designed. First, I
believe that the human mind is evolved, not designed. For the complex
systems I'm aware of (computer software systems, mostly), an evolved
system can generally be replaced by a designed system that uses the
behaviour of the evolved system as a functional specification, and
this generally yields a dramatic performance improvement. By analogy,
it would be possible to design an improved human brain, IF we had a
complete understanding of how the brain works. Since we don't, we
can't design a new one on the same principles. I argue that we don't
have to. Instead, we will develop an intelligent entity that has a
computer as one component. This entity will be smart enough to
develop new hardware and software to augment itself. This is the
fundamentally new factor.
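
To make the evolved-vs-designed point concrete, here's a deliberately
toy sketch in Python (entirely my own construction, not a claim about
any real system): treat the observed behaviour of an "evolved" routine
as the functional specification, then replace it with a designed
routine that must match that behaviour exactly.

def evolved_max(xs):
    # imagine this accreted over years: redundant passes, odd tests
    best = xs[0]
    for x in xs:
        for y in xs:
            if x >= y and x > best:
                best = x
    return best

def designed_max(xs):
    # clean single-pass reimplementation against the same spec
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

# the evolved system's behaviour serves as the functional spec
for case in ([3, 1, 4, 1, 5], [7], [-2, -9, -1]):
    assert designed_max(case) == evolved_max(case)

The designed version does the same job in one pass instead of the
evolved version's n-squared comparisons; that's the sort of dramatic
performance improvement I mean.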

We've actually already developed the primitive precursors of this
entity. A computer development organization that uses its own
computers to develop the next generation of computers, or a software
tools shop that uses its own tools to develop its next-generation
tools, is such an entity. However, these primitive examples are not
yet focused on self-augmentation, and are not tightly integrated
enough to precipitate a runaway fast-feedback loop of
self-augmentation.
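
For what it's worth, the ordinary-versus-runaway distinction can be
sketched as a toy model in Python (the rates are invented, purely for
illustration): in the ordinary regime, improvement comes from a fixed
level of designer skill; in the self-augmenting regime, the designer's
skill is the current capability, so each improvement feeds the next.

def ordinary(capability, steps, skill=1.0, rate=0.1):
    # improvement driven by a fixed (human) level of design skill
    for _ in range(steps):
        capability += rate * skill
    return capability

def self_augmenting(capability, steps, rate=0.1):
    # the designer's skill IS the current capability: feedback
    for _ in range(steps):
        capability += rate * capability
    return capability

print(ordinary(1.0, 50))         # linear growth: 6.0
print(self_augmenting(1.0, 50))  # compounding growth: ~117.4

The first grows linearly; the second compounds. That compounding is
what I mean by a rapid feedback process.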


