The nature of academia (was: Finding new support for SIAI Research Fellowship)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Nov 07 2002 - 17:02:52 MST


Hal Finney wrote:
> Eliezer writes:
>
>>As there is now a specific subproject that needs immediate funding, there
>>is a better chance than previously of receiving funding from foundations.
>
> What is this subproject?

The Research Fellowship - clearly defined budget, can start immediately, etc.

> I know you have rejected this advice in the past, but you should consider
> going to school and getting a degree. This will give you more credibility
> and ultimately make it easier to get funding. Drexler had to do the
> same thing. Sadly, credentials are often more important than quality
> of ideas when you are trying to get taken seriously.

Actually, Drexler is in my view one of the primary reasons *not* to take
this detour. He played by all the rules and as far as I can tell it
helped him not one bit. The people who made fun of nanotechnology without
providing any numbers went on making fun of nanotechnology without
providing any numbers. The people who were interested remained
interested. The Foresight Institute stayed marginalized. One or two
people who said they wanted to see a degree, and really meant it, may have
switched. The other skeptics found something else to complain about.

Science as a social process is rational enough to advance a correct new
theory, despite all opposition and controversy, no matter whose feathers
it ruffles or what conventional wisdom it contradicts, if that theory has
evidence. Science is not rational enough to advance a correct theory just
because that theory clearly is better supported by existing evidence than
any of its alternatives - even in cases of extreme asymmetry, such as
_Nanosystems_ versus Smalley's purely intuitive arguments - if the theory
ruffles people's feathers and does not yet have experimental proof.
Science works because it is capable of listening to the voice of
experimental evidence over any amount of academic reputation, but before
that evidence arrives, a Nobel Prize and a verbal dismissal will go on
trumping any amount of math. Science has not silenced the voice of tribal
status, it's just managed to listen to experimental evidence as well. If
that evidence is lacking, science goes back to tribal word-fights.

And matters in Artificial Intelligence are a lot worse than in physics and
chemistry. If Drexler can't win against Smalley when Drexler has a
doctorate and Smalley is simply and clearly wrong if anyone bothers to do
a little math, am I really supposed to succeed? The culture in AI has
been subsisting on almost no evidence; the major figures in the field are
not those who once successfully listened to the voice of Nature but those
who won past word-fights. It is just barely plausible that I could
advance an evolutionary psychology of general intelligence in advance of
the experimental proof of working AI, but if so, that theory would be sold
to existing evolutionary psychologists and neuroscientists, who are still
operating in an experimentally driven culture. The chance of selling the
theory to the academic AI field is effectively zero. Eventually - if
indeed this event happens at all before the Singularity alters the current
regime of human-driven scientific progress - I might stand up with a
working general AI and say: "I hereby claim this scientific territory,
formerly believed to be a part of computer science, in the name of
evolutionary psychology." You can face down an entire field if you have
the experimental evidence and they have nothing, and AI as it stands has
nothing. But it's foolish to expect to do it without the experimental
evidence, and in this case the only way to get the evidence is to build a
working AI.

I expect that if at the end of four years or eight years or whatever I got
a doctorate, skeptics would simply switch to saying "Where's the
experimental proof?" or "But you're just another guy with a Ph.D." This
is simply not a war that can be won on credentials or other social
grounds, because there will always be people with more credentials. The
strength of seed AI is the theory. You read "Levels of Organization in
General Intelligence" and either you get it or you don't. If you get it,
it doesn't matter who the proponent of the theory happens to be - it's
about the idea, not the person. If someone doesn't get the idea, there's
no way I can win on personal grounds, with or without a degree.

Yes, a doctorate would be more convenient, all else being equal. But it's
not worth four or eight years. If a doctorate is useful, SIAI can recruit
someone who already has a doctorate.

> I realize that this may seem like a detour if you are still holding
> to a goal of 2008 for the Singularity, but that date has never seemed
> credible to me.

And if, in 2008, new developments suddenly make it seem likely that
tremendous computing power will be available in 2010, and a complete
working theory of Friendly AI and preferably a tested prototype is needed
RIGHT AWAY, can I have those six years back?

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:58:00 MST