From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Nov 08 2002 - 04:19:57 MST
Anders Sandberg wrote:
> On Fri, Nov 08, 2002 at 01:11:53PM +1100, Brett Paatsch wrote:
>
>>Eliezer wrote:
>>
>>>The
>>>strength of seed AI is the theory. You read "Levels of Organization in
>>>General Intelligence" and either you get it or you don't.
>>
>>I think this is true.
>
> I think this approach is wrong. First, the above presupposes that the
> theory is so clear, so easily understandable that anybody who actually
> gets the signal can "get it" for the whole theory.
No, it doesn't. It presumes that enough people will understand the theory
to help out on building a seed AI. You are - no offense, of course -
thinking like an academic. My goal is not to convince a majority of the
field, or even to convince a specific percentage minority. That kind of
convincing would require experimental evidence. Of course getting
experimental evidence isn't my purpose either. It might happen, but only as
a way station, if there are interesting interim results on the way to the
Singularity and we need to attract more attention.
> That requires
> *brilliant* writing, something very rarely seen in any subject. There
> are just a few such classic publications in any field. Not to disparage
> Eliezer's writing style, but I would be surprised if he managed to pull
> that one off when e.g. Einstein didn't (his papers were good, but they
> weren't clear at all). Trying to reach levels of clarity like this is a
> long quest for perfection, one that gets harder the more complex the
> subject. And the "either you get it, or you won't" approach tends to
> repel people. If I didn't get it the first time, why should I ever waste
> my time trying to get it a second time, even when the problem was a
> slight misunderstanding of a term?
Why is it absolutely necessary to me that you agree with me? I agree that
it's absolutely necessary that the theory be correct. If you find a
genuine bug, *then* I care. But if, for your own private mistaken
reasons, you just don't care, isn't that your business?
> The second point is that papers are judged in a context, and this
> context aids in understanding them. If I know a paper is part of a
> certain research issue I can judge it by looking at how it fits in - who
> is cited, what terminology is used, what kinds of experiments and models
> are used, etc. It doesn't have to agree with anybody else, but it can
> draw on the context to help the reader understand its meaning and
> significance. A paper entirely on its own has a far harder job
> convincing a reader that it has something important to say.
You can certainly get that kind of context by looking at "Levels of
Organization". Do please take a look if you haven't already:
http://singinst.org/LOGI/
Incidentally, here's more information about the book:
http://www.goertzel.org/realaibook/RealAIProspectus.htm
(Naturally, everything is way behind schedule on the book, but by Ifni I
turned in *my* paper on time. I think all the chapters finally did
arrive, though.)
> What I worry about, Eliezer, is that you will end up like Stephen
> Wolfram. You spend the next decades working on your own project in
> relative isolation, publishing papers that are ignored by the mainstream
> since they don't link to anything that is being done. One day you will
> read in the morning paper that somebody else has had your idea, written
> about it inside the main context, developed it into a research
> sub-discipline that gets funding and lots of helpful smart postdocs, and
> eventually has succeeded. At best you might get a footnote when somebody
> writes up the history of the research issue.
This wholly fails to fill me with trepidation and dread. Write me out of
the history books; fine. "Failing to build an AI" is something I care
about. "Someone builds an unFriendly AI first" is something I care about.
I did write "Levels of Organization". And I've been tentatively
considering trying to write a paper called "Toward an Evolutionary
Psychology of General Intelligence" and submitting that to Behavioral and
Brain Sciences. But it would cost time that would have to be taken
away from developing the Singularity Institute, thinking about and writing
up Friendly AI, and so on. I have a very realistic picture of what to
expect from publishing a few papers. I do not expect fame and fortune. I
do not expect to convince the field. I do not expect that the immense
effort needed to put forth the idea academically could or would bear any
fruit in the absence of a realized AI as experimental evidence. Endlessly
arguing about the basics doesn't just waste time and mental energy; it
prevents you from thinking about advanced topics. What might happen, if I
write that paper, is that a few people would be interested, I would
acquire some small amount of cachet from having published in BBS, and I
would have given something back to science. If that risk-adjusted benefit
appears to be worth more than any of the other things I could do with a
couple of months, I'll write the paper. I am absolutely not going to
spend the rest of my life writing papers. Academia is not the center of
the world.
Seed AI theory is not my brilliant idea that will win me a place in the
history books if I can only convince the academic field to listen to me.
Seed AI theory is something I need to build an AI. As long as the theory
is actually *right*, forcing Marvin Minsky to admit that it's right can
wait until after the Singularity, if then. If for some strange reason
it's necessary for "seed AI" to be argued academically, then at some point
some Singularity-sympathetic fellow with a doctorate will take over
the job of arguing academically - and collect all the citations, of course.
> There is a tremendous power in being part of a community of thinkers
> that actually *work* together. Publishing papers that are read means
> that you get helpful criticism and that others may try to extend your
> ideas in unexpected directions you do not have the time for or didn't
> think of. Academia may be a silly place, but it does produce a lot of
> research. If you are serious about getting results rather than getting
> 100% of the cred then it makes sense to join it.
It's a nice ideal but I don't see it happening in practice. I would be
happy to see a small handful of people who *agreed* for correct reasons,
much less disagreed for correct reasons. I spent a number of months
writing "Levels of Organization" because I've drawn on a tremendous amount
of science to get where I am today, and I understand that there's an
obligation to give something back. I do acknowledge my responsibility to
my readers; I wrote the clearest, most accessible paper I could. But
having done so, I just don't see myself as having a responsibility to
spend the rest of my life trying to convince academia I'm right. Is it
really that necessary to score one last triumph on a battleground that's
about to become obsolete?
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence