Re: Darwinian Extropy

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Tue Sep 24 1996 - 06:13:31 MDT


On Mon, 23 Sep 1996, Robin Hanson wrote:

> Dan Clemmensen writes:
> > [snip]
>
> Not all information can be computed, if one doesn't have the right

Information? Computed? Obviously, you are not referring to the
information-theoretic definition of information...

> inputs. Furthermore, even for stuff that can be computed, it's not

A generic computation can be defined as a mapping from an input vector
to an output vector. (This is so abstract that it encompasses just about
everything.) If the computation is a simulation, the "rightness" metric
would be its congruency with that part of reality we set out to model
(of course, not the real reality, but our measurement of the reality by
means of senses/gadgets, a fingerprint delta).

Concerning "right" inputs: there must be a mapping function defined over
the entire space of input vectors. For such a computation there are no
"wrong" inputs.

A computer implementation of this must be bug-free, or else some vectors
are not mapped: the computer crashes, and the mapping is not defined.
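
To make this concrete, a toy sketch (purely illustrative Python of my
own; the function names are made up):

    # A total mapping: defined over the entire input space (here, any
    # sequence of numbers), so there are no "wrong" inputs.
    def total_map(vector):
        return [x * x for x in vector]        # every input has an output

    # A partial (buggy) mapping: undefined for some input vectors --
    # instead of producing an output, the program crashes.
    def partial_map(vector):
        return [1.0 / x for x in vector]      # blows up when some x == 0

    print(total_map([0, 1, 2]))               # [0, 1, 4] -- always defined
    try:
        print(partial_map([0, 1, 2]))
    except ZeroDivisionError:
        print("crash: mapping undefined for this input vector")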

> clear there is some maximum computational "depth" (compute cycles over
> output + input length) It would be very interesting if you could prove
> that a computer has some universal minimum discount rate. That would

I don't know whether you are referring to minimum discount in the
complexity-theoretic sense.

> be well worth publishing. However, it seems you are a long way from
> showing this.

In terms of computing steps, the minimum cost to sort N random numbers
with a purely sequential process is at least O(N). (Obviously, we need
to look at each number at least once, if only to make sure they're
already sorted.)
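
A minimal sketch of that intuition (my own toy code): merely verifying
sortedness already forces a pass over all N numbers.

    # Even just checking sortedness costs up to N-1 comparisons: each
    # number must be inspected at least once, so no sequential sorting
    # procedure can do better than O(N).
    def is_sorted(numbers):
        return all(numbers[i] <= numbers[i + 1]
                   for i in range(len(numbers) - 1))

    print(is_sorted([1, 2, 3, 5]))   # True, after N-1 comparisons
    print(is_sorted([1, 5, 3, 2]))   # False, stops early at the mismatch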

That a minimum complexity threshold exists is not open to doubt. Every
clearly defined task has such a threshold. Some of these are easy to
prove, some terribly hard. Just think of NP-complete and NP-hard
problems.
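
For a concrete instance (a toy example of mine, not a claim about the
best-known algorithms): subset-sum is NP-complete, and the obvious
exact solver inspects up to 2^N subsets. Nobody knows how to push that
threshold down to polynomial time in general.

    # Subset-sum by brute force: try all 2^N subsets. The problem is
    # NP-complete, so the exponential threshold is believed inherent.
    from itertools import combinations

    def subset_sum(numbers, target):
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # e.g. (4, 5)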

I think this can intuitively be called a "no free lunch" conjecture. (I
know there is a lock on that string already, but this is a local
environment definition with limited visibility scope ;)

> > Why do you think an SI will understand itself any more than we
> > understand ourselves? And even if it could, that doesn't mean such
> > understanding will lead to much improvement.

To improve itself, the system need not understand itself entirely; that
isn't even possible. Improvement, probably a drastic one, is possible
with comparatively piffling investment. Using digital evolution for
optimization is one such instance (see the sketch below). I don't care
about the method, just about the result I wish to obtain. The method
might be just too boring (automagical) or plain too complicated.
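
What I mean by digital evolution, in miniature (a (1+1) evolutionary
loop on a made-up toy fitness; everything here is illustrative):

    # Evolution as optimizer: mutate, keep any non-worse child, repeat.
    # The loop "understands" nothing about the problem, yet climbs to
    # the optimum anyway.
    import random

    TARGET = [1] * 32                                  # toy fitness landscape

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=1.0 / 32):
        return [1 - g if random.random() < rate else g for g in genome]

    genome = [random.randint(0, 1) for _ in range(32)]
    while fitness(genome) < len(TARGET):
        child = mutate(genome)
        if fitness(child) >= fitness(genome):
            genome = child
    print(genome)                                      # all ones, eventually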

> >
>
> >Basically, I don't believe that we understand the basics of human
> >cognition. Therefore our attempts at self-augmentation have no firm

We've made excellent progress quite recently, and it shows no signs of
slackening.

> >basis. We do, however, understand the basics of machine computation:
> >we can design and build more powerful computer hardware and software.

I think we might learn a lot from biological cognition.

> >Since we understand this basis already, I believe that an SI can also
> >understand it. I believe that an SI with a computer component will be
> >able to design and build ever more powerful hardware and software,
> >thus increasing its own capabilities. I think that this is likely to
> >lead not just to an improvement, but to a rapid feedback process.

Agree.
 
> Consider an analogy with the world economy. We understand the basics
> of this, and we can change it for the better, but this doesn't imply

This must be the overestimation of the century. We may understand the
basics of the economy, but we don't understand the gestalt (the
top-level phenomena) at all. Some of the world economy is ergodic (at
least at times it is). Meaning: it is fundamentally unpredictable, as
is weather (unless spacetime is deterministic, and God kindly gives you
an account with root rights).
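
To see why determinism buys you nothing without perfect measurement (a
standard textbook illustration, not a model of the economy): the
logistic map is fully deterministic, yet a 1e-10 error in the initial
state swamps the forecast within a few dozen steps.

    # Deterministic chaos: two trajectories starting 1e-10 apart
    # decorrelate completely after ~50 iterations of the logistic map.
    def logistic(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    print(logistic(0.3, 50))             # "true" trajectory
    print(logistic(0.3 + 1e-10, 50))     # perturbed -- wildly different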

If we had that understanding, we would be able to engineer the future
trajectory of development. I am not sure I will live to see such an
accomplishment.

> an explosive improvement. Good changes are hard to find, and each one

Robin, positive intelligence autofeedback loops are unprecedented. It
does no good looking for comparisons, because there are none. The
Cambrian explosion is often used as an (imperfect) comparison; I think
it will be far more drastic.

> usually makes only a minor improvement. It seems that, in contrast,
> you imagine that there are a long series of relatively easy to find
> "big wins". If it turns out that our minds are rather badly

I think he is right. A computer/human hybrid performs significantly
better than the isolated human. And that's just the beginning.

> designed, you may be right. But our minds may be better designed than
> you think.

Human IQ follows an (asymmetric) bell-shaped distribution. It spans the
entire spectrum, from moron to Einstein. Assume everybody were an
Einstein equivalent: surely this would have some impact upon the world?

We don't know whether there is a limit to cognitive capacity. At a
guess, there is none. The only limits are those of computational
physics.

Assuming we've already hit them, or are even approaching them, is not
realistic, imo.

'gene

> Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/
>
>

_________________________________________________________________________________
| mailto: ui22204@sunmail.lrz-muenchen.de | transhumanism >H, cryonics, |
| mailto: Eugene.Leitl@uni-muenchen.de | nanotechnology, etc. etc. |
| mailto: c438@org.chemie.uni-muenchen.de | "deus ex machina, v.0.0.alpha" |
| icbmto: N 48 10'07'' E 011 33'53'' | http://www.lrz-muenchen.de/~ui22204 |


