On your web site, you say
> The ultimate object, remember, is for runaway positive feedback
> to take over and give birth to something transhuman

In general I agree with that, except
(1) It doesn't have to be "runaway" -- it will proceed at its own pace, and
(2) I think in terms of IA instead of AI. The seed is within us. I am
creating an environment around myself that fosters a positive feedback
mechanism in my own mind (and body), which will lead to something --
"transhuman," maybe, although I'm increasingly uncomfortable with such
words. I just think of it as clear thought and perfect health.
I also have other aims, such as wiping hip-hop off the face of the earth, but I don't imagine Extropians could relate to that.
You and I have such different vocabularies that it would take a very long time to establish communication. I don't get my philosophical vocabulary from Hofstadter and Vinge. I thought you would understand my reference to the ur-meme immediately. Do you understand how Judaism works as a meme? Have you read Everett Fox's translation of the Torah? or Aryeh Kaplan's translation of Sefer Yetzirah? or Isaiah, in any translation? or "The Art of Biblical Poetry" by Robert Alter? I guess you don't think that sort of thing is worth bothering with.

Not to mention Frege, Wittgenstein, Austin, Goedel (his own papers, not filtered through GEB), Rene Thom, and all the other stuff I read. I read a lot of math -- not Hofstadter's self-reflective stuff, but plain old math. And plain old science -- I subscribe to Nature and read it every week; it would never occur to me to subscribe to a science fiction magazine, or to make science fiction the center of my thought. And plain old history, and plain old fiction and poetry (Homer, Virgil, Shakespeare, Keats, Goethe, Rilke, etc).

We have both read Dawkins, but the "meme" meme by itself is not enough to establish a common ground for discussion of ultimate goals and how to get there.
Nevertheless there is a deep resonance here. As I read your web site, I get an eerie sense of deja vu. I feel like I am reading my own notebooks from a decade ago. At that time I still believed in AI. I wanted to create a new kind of entity, not exactly a religion, not exactly a business, not exactly a school, but a combination of all three -- a network of schools and businesses that would make money not for its own sake but with the aim of creating the Singularity. (I actually used that word for a while, after going to a Terence McKenna seminar at Esalen in 1988). The whole thing was going to be organized as a corporation called Recursive Systems. For various reasons nothing ever came of this. I guess the main obstacle was that I was uncomfortable with the messianic pretensions involved.
There are three ways to get people to write checks:
as NSF and DOD (or their equivalents in some other country).
I think path #2 is the wisest choice.
I'm not going to be writing checks for the Elisson Project, because, as I explained in geniebusters, I think the whole thing is based on a fallacy. To say that computing power is doubling every n years is at most a half-truth. The number of transistors on a chip is doubling, and the clock speed is doubling, but that doesn't imply that intelligence is doubling. It doesn't imply that there is going to be a Singularity. It wouldn't surprise me if, a decade from now, you write something like geniebusters, in which you describe how it gradually (or perhaps suddenly) dawned on you that you plus the software you create will always understand philosophy better than the software by itself. It should be an interesting paper -- maybe better than geniebusters, which is certainly not the last word on the subject.
Lyle