Re: The Singularity

From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Mon Jul 20 1998 - 20:42:09 MDT


Robin Hanson wrote:
> [Dan and 'gene,]
> I interpreted both of your initial statements as rejecting inquiry into
> things post-singularity. It seems as if you both think we *can't*
> possibly know anything, so we shouldn't try. Then in response to my
> queries for clarification, you ask me to show you prediction textbooks
> with formulas derived from first principles.
>
> 0) I submit my paper http://hanson.berkeley.edu/filluniv.pdf, as giving
> mathematical predictions derived from first principles regarding
> an important aspect of post-singularity behavior.

I enjoyed your well-written and well-reasoned paper, but it starts from a
bunch of reasonable but unsupported assumptions about the structure and
motivations of the SI. The principal assumptions are that the SI wants to
expand and that the SI is not singular. As a simple hypothetical counter,
assume that your conclusions are found to follow from your assumptions even
after careful scrutiny by an SI. Then it's possible that the SI can infer
from introspection that every other SI will reason the same way, and so all
SIs will be motivated to merge instead of competing.

> 1) Humans have a vast amount of knowledge and insight, only the tiniest
> fraction of which can be expressed as equations derived from first
> principles. It's a big mistake to say we know nothing about a subject
> if no such equations are presented.

"First principles" may be too strong a requirement, but weakening it
is dangerous. In the past, humans' vast amount of knowledge and insight
has led to consensus belief systems such as Christianity, communism,
and Freudian psychology.

> 2) Even if we knew nothing about a subject, that wouldn't mean
> we couldn't learn something if we put our minds to it. You need a
> much stronger argument to reject inquiry other than we don't know
> anything now.

Indeed, humans can construct enormous, intricate philosophical structures.
There is a very large body of such work relating to the nature of
heaven.

> 3) Human insight isn't indexed much by year of applicability. The best
> experts in banking now know things relevant for forecasting post
> singularity banking. People who understand art well know things
> relevant for forecasting post-singularity art. Just because there
> isn't a book called "Post-singularity banking and art" doesn't mean
> people don't know things relevant for this.

Off-hand, I'd guess that banking is irrelevant to a singular SI. It's not
clear that it's relevant to a plural SI. Art as understood by chimpanzees
is IMO qualitatively different from art as understood by humans. There is
no reason to assume that SI art will be comprehensible to humans, or that
it will resemble human art in any way.

> 4) For a subject as broad as "post-singularity", insight just isn't very
> discrete, since there is so much relevant knowledge. We know more
> about the year 2098 now than we did in 1898, and likely will know
> more ten years from now. Our insight improves incrementally, so
> there is no cliff beyond which we know *nothing*. I see no
> "horizons," analogous to where the curve of the Earth makes human
> visual resolution suddenly fall to uselessness.
>

I assert that the only relevant thing we know that was not known in
1898 is that a singularity has a high probability of occurring before
2098.


