From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Nov 08 2002 - 07:35:47 MST
On Fri, Nov 08, 2002 at 06:19:57AM -0500, Eliezer S. Yudkowsky wrote:
> Anders Sandberg wrote:
> >
> >I think this approach is wrong. First, the above presupposes that the
> >theory is so clear, so easily understandable that anybody who actually
> >gets the signal can "get it" for the whole theory.
>
> No, it doesn't. It presumes that enough people will understand the theory
> to help out on building a seed AI.
At the core here is our standard disagreement over how hard it is to make
a seed AI and what kind of intelligence is needed. If your intuition is
right, then what matters is just getting a sufficiently skilled group
together to implement the seed, and things will take care of themselves.
If my intuition is right, then making a seed is a messy problem that would
benefit from broad analysis and the efforts of many groups.
> You are - no offense, of course -
> thinking like an academic. My goal is not to convince a majority of the
> field, or even to convince a specific percentage minority. That kind of
> convincing would require experimental evidence. Of course getting
> experimental evidence isn't my purpose either. It might happen, but it
> would be a way station if there are interesting interim results on the way
> to the Singularity and we need to attract more attention.
Actually, quite a lot of convincing and interest-raising is done without a
shred of experimental evidence. Even if we leave out theoretical physics
(where mathematical elegance or the feeling that an approach is
promising is paramount) and look at neuroscience, we see this a lot (for
example cell assemblies, or the idea that 40 Hz oscillations matter). If
something doesn't pan out after a while, it is abandoned.
I *like* being an academic. It is a nice subculture. But this spiel is not
about me trying to get you to join the gang, but rather about you
getting the resources and recognition you need to get stuff done.
> >The second point is that papers are judged in a context, and this
> >context aids in understanding them. If I know a paper is part of a
> >certain research issue I can judge it by looking at how it fits in - who
> >is cited, what terminology is used, what kinds of experiments and models
> >are used etc. It doesn't have to agree with anybody else, but it can
> >draw on the context to provide help for the reader to understand its
> >meaning and significance. A paper entirely on its own has far harder
> >work to do in convincing a reader that it has something important to
> >say.
>
> You can certainly get that kind of context by looking at "Levels of
> Organization". Do please take a look if you haven't already:
> http://singinst.org/LOGI/
I liked it a lot precisely because you are getting academic here - you
show how it fits in with other approaches to AI and with other things
people have written. This is something people can start to put into a
context and discuss, unlike many of your previous major essays.
One doesn't have to sit at a university to participate in the broader
debate.
> >There is a tremendous power in being part of a community of thinkers
> >that actually *work* together. Publishing papers that are read means
> >that you get helpful criticism and that others may try to extend your
> >ideas in unexpected directions you do not have the time for or didn't
> >think of. Academia may be a silly place, but it does produce a lot of
> >research. If you are serious about getting results rather than getting
> >100% of the cred then it makes sense to join it.
>
> It's a nice ideal but I don't see it happening in practice. I would be
> happy to see a small handful of people who *agreed* for correct reasons,
> much less disagreed for correct reasons. I spent a number of months
> writing "Levels of Organization" because I've drawn on a tremendous amount
> of science to get where I am today, and I understand that there's an
> obligation to give something back. I do acknowledge my responsibility to
> my readers; I wrote the clearest, most accessible paper I could. But
> having done so, I just don't see myself as having a responsibility to
> spend the rest of my life trying to convince academia I'm right. Is it
> really that necessary to score one last triumph on a battleground that's
> about to become obsolete?
It is not a matter of obligation, it is IMHO a matter of efficiency: you
want to get the maximum amount of brainpower working on an important
problem. Even if you had the best idea since analytic geometry, written
up in crystal-clear prose that might win you a Nobel prize in literature,
most academics would ignore you, but that doesn't matter. The scientific
community is a huge resource of brainpower, and convincing even a small
part of it that you are doing something interesting would be leveraged
into a lot of brainpower being spent on your project in various forms. It
would mean potential for real funding. If you are right about the minimal
needs of a seed, that would be enough.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y