From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 05 1999 - 01:38:35 MDT
Lyle Burkhead wrote:
>
> On your web site, you say
>
> > The ultimate object, remember, is for runaway positive feedback
> > to take over and give birth to something transhuman
>
> In general I agree with that, except
> (1) It doesn't have to be "runaway" -- it will proceed at its own pace,
> whatever that is.
Actually, it does have to be runaway, because that's the only way we'll
get even human equivalence, much less transhumanity, without eight
centuries of programming. But see below...
> (2) I think in terms of IA instead of AI.
Well, I'm in a bigger hurry. IA just takes too damn long, and - short
of genies - you basically have to start in infancy to get any sort of
major enhancement.
> You and I have such different vocabularies that it would take a very long
> time to establish communication.
I'm willing to believe that. No offense.
> And plain old science -- I
> subscribe to Nature and read it every week; it would never occur to me to
> subscribe to a science fiction magazine, or to make science fiction the
> center of my thought. And plain old history, and plain old fiction and
> poetry (Homer, Virgil, Shakespeare, Keats, Goethe, Rilke, etc). We have
> both read Dawkins, but the "meme" meme by itself is not enough to establish
> a common ground for discussion of ultimate goals and how to get there.
Well, I read Dante's _Inferno_, but I also read Niven and Pournelle's
_Inferno_. I own a few back issues of _Nature_, but I also have some
used issues of _F&SF_. I read David Chalmers, but I also read Greg
Egan. Science without speculation is blind (to paraphrase Einstein);
specialization is the defining characteristic of civilization (to quote
my father); the specialists in speculation are SF writers; Q.E.D.
(For those of you with no classical education, Q.E.D. stands for "quod
erat demonstrandum", Latin for "So there!")
> Nevertheless there is a deep resonance here. As I read your web site, I get
> an eerie sense of deja vu. I feel like I am reading my own notebooks from a
> decade ago.
Well, everyone feels that - the trouble is that everyone seems to have
completely different ideas of what I'll grow into.
I'm a Specialist, so every generalist is going to see a piece of
themselves in me. That's what a Specialist *is*.
> At that time I still believed in AI. I wanted to create a new
> kind of entity, not exactly a religion, not exactly a business, not exactly
> a school, but a combination of all three -- a network of schools and
> businesses that would make money not for its own sake but with the aim of
> creating the Singularity. (I actually used that word for a while, after
> going to a Terence McKenna seminar at Esalen in 1988). The whole thing was
> going to be organized as a corporation called Recursive Systems. For
> various reasons nothing ever came of this. I guess the main obstacle was
> that I was uncomfortable with the messianic pretensions involved.
Well, looking back on my own ideas of the Phoenix Academy, I'd say the
main obstacle was that it was a blatantly impractical, philosophically
grounded fantasy with no connection whatsoever to reality.
> There are three ways to get people to write checks:
>
> 1. Define your project as research in computer science, with potential
> military applications. Explain it in terms that make sense to agencies such
> as NSF and DOD (or their equivalents in some other country).
Heh. Any sort of computer war in the next few decades is probably going
to be a war of Specialists - if I wanted to specialize in cracking, I
could basically just walk through anything that hadn't been designed by
my equivalent. And Elisson in its infancy could toast any human,
including me. Despite this, my Web pages have been up for two years and
the Men in Black haven't come for me yet.
Framing ultratechnology as military research sounds very risky. It's
exactly the sort of attention we ought to avoid.
> 2. Define your project as a business. Break it down into steps, in such a
> way that each step is profitable in its own right. Do the same thing with
> your AI system that Stephen Wolfram did with Mathematica, or John Walker
> did with AutoCad, or Bill Gates did with MS-DOS. It is still possible to
> start from scratch and make billions of dollars in the software industry.
> Someone will have the same dominant position in robotic software that Bill
> Gates has in PC software. (This is what I meant the other day when I said
> you wouldn't have to worry about money if you spent your time writing
> software instead of reading the list.)
If I were going into the software business, I wouldn't try to write a
limited AI, because that sort of project would inevitably fail, unless I
abandoned the Elisson paradigm entirely and wrote a Webmind-like
"crystal intelligence". I can beat the best AI on the market, but not
with crippleware that I'll spend the rest of my life supporting.
> 3. Define your project in religious terms, in such a way that people care
> about it and want to see it happen. Call it Singularitarianism, or some
> such ism. Or just say you are going to create the Messiah, or be the
> Messiah. A lot of people will believe this and write checks, if you have
> the stomach for it. It is still possible to make money with cults. I see
> living proof of this every day -- I live across the street (diagonally)
> from the Scientology Celebrity Center. The money pours in. There are also a
> lot of preachers making a ton of money off the coming Apocalypse. But I
> don't think anybody has pursued this from a specifically Jewish angle. A
> lot of Jews here in Los Angeles (and elsewhere) expect the Messiah to
> appear any day. The opportunity is there, for whoever wants it. I don't. If
> the first one didn't come back to life after they crucified him, I don't
> imagine my prospects would be any better.
I'm sorry, but at this present time - my ethics, being based on
knowledge instead of oaths, are always subject to change - I view the
world as being composed of equals, people who have as much right as I do
to make decisions based on knowledge. I do not view the world as being
composed of manipulable components. At this present time, I do not
intend to lie to people about the Singularity. I simply don't know
enough to lie.
And if I did, it wouldn't work, so I really don't know why I'm talking
about ethics. Go down this path and Singularitarianism will become
another cult. The Singularity has to be the spearhead of the scientific
and computer community, not a religious cult hiding in the corner.
Besides, every religious person with three ounces of brains knows damn
well what the Singularity is, in their philosophy, about five seconds
after they hear about it. They don't need it pounded into them with a
sledgehammer and they'd be rightfully suspicious if we tried.
When people ask me "Where does God fit into all this?" I tell them that
it's bloody well obvious, but if I came out and said so, I'd be claiming
God's support, which I don't intend to do unless God comes out and says
so. I think this is a perfectly true thing to say, from everyone's perspective.
> I think path #2 is the wisest choice.
No kidding.
> I'm not going to be writing checks for the Elisson Project, because, as I
> explained in geniebusters, I think the whole thing is based on a fallacy.
Well, I can understand that. If it weren't for my belief that *I* can
do it, I'd say that Hofstadter and Lenat have another twenty years of
the kind of work they did in their prime, and almost everyone else is
hopeless. Take a look at _Coding_. Eight fundamental principles, not
one; no philosophical grounding; and qualities that are recognizably
unique to the human brain, not just mammalian brains in general.
> To say that computing power is doubling every n years is at most a
> half-truth. The number of transistors on a chip is doubling, and the clock
> speed is doubling, but that doesn't imply that intelligence is doubling. It
> doesn't imply that there is going to be a Singularity.
"Probably the most intelligent question I get asked about the
Singularity is 'Just because we have all this computing power doesn't
mean we know how to use it. Can we really program an Artificial
Intelligence that's smarter than human?' This page explains how."
- Description of "Coding a Transhuman AI", in "The Low Beyond".
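And to put a number on the half-truth - a minimal Python sketch; the
18-month doubling period is just the conventional Moore's Law figure,
assumed here for illustration, not a commitment by either of us:

    # Hardware capacity compounds, but nothing in this arithmetic
    # mentions intelligence. The 18-month doubling is an assumption.
    DOUBLING_PERIOD_YEARS = 1.5

    def transistor_multiplier(years):
        """Factor by which transistor counts grow over a given span."""
        return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

    for span in (5, 10, 20):
        print(f"{span:2d} years -> "
              f"{transistor_multiplier(span):9.0f}x transistors")
    # A decade buys ~100x the transistors; whether that buys any
    # intelligence at all is a software question the curve says
    # nothing about.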
> It wouldn't surprise
> me if, a decade from now, you write something like geniebusters, in which
> you describe how it gradually (or perhaps suddenly) dawned on you that you
> plus the software you create will always understand philosophy better than
> the software by itself.
Not if the software learns to enhance itself, in which case I no longer
necessarily understand the software. Besides which, philosophy has been
the bane of AI since its inception. Paradigm Numero Uno in Elisson is "Pragmatism".
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/singul_arity.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.