Re: Singularity?

From: Matt Gingell (mjg223@nyu.edu)
Date: Fri Sep 03 1999 - 01:15:08 MDT


----- Original Message -----
From: Eliezer S. Yudkowsky <sentience@pobox.com>

>Okay. First of all, that's not right. There's a number of modules
>containing domain-specific *codelets* - not knowledge - and the
>"domains" are things like "causality" or "similarity" or "reflexivity",
>not things like "knowledge about the French Revolution" or "knowledge
>about cars". Furthermore, the world-model is not centralized. There is
>nothing you can point to and call a "world-model". The world-model is a
>way of viewing the sum of the interrelated contents of all domdules.

When I say 'knowledge,' I mean procedural knowledge as well as declarative
knowledge.

The idea of trying to identify a number of independent primitive domains that
can act as fundamental, orthogonal bases - if I'm reading you correctly - is
interesting. The coordination issue is fundamental, though, and there's a lot
of hand-waving in your discussion of 'interfaces,' 'translators,' and the
implicit world model.
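To make the coordination worry concrete, here's a deliberately toy Python
sketch - everything in it is my own hypothetical illustration, not anything
from your design. Suppose each domain keeps its own private representation and
every pair of domains needs a translator. Then with N domains you either write
N*(N-1) directed translators, or you route everything through a shared
interlingua - and a shared interlingua is a centralized world-model by another
name:

# Toy sketch of the coordination problem between domain modules.
# All names here are hypothetical illustrations, not your design.

class Domain:
    """A domain module: some codelets plus its own private vocabulary."""
    def __init__(self, name):
        self.name = name

    def encode(self, percept):
        # Each domain describes the same percept in its own terms.
        return {"domain": self.name, "content": f"{self.name}({percept})"}

def translate(item, source, target):
    # One directed pairwise translator. This trivial one just relabels;
    # a real one is where all the hard work (the hand-waving) hides.
    return {"domain": target.name,
            "content": item["content"].replace(source.name, target.name)}

if __name__ == "__main__":
    domains = [Domain("causality"), Domain("similarity"), Domain("reflexivity")]
    n = len(domains)
    print(f"{n} domains need {n * (n - 1)} directed pairwise translators")
    item = domains[0].encode("ball-breaks-window")
    for target in domains[1:]:
        print(translate(item, domains[0], target))

The quadratic blow-up is the tame part; the untamed part is what 'translate'
actually has to preserve between representations.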

You've made some very bold claims and, of course, I'm interested when anyone
as obviously intelligent as you are thinks they have something to say. I'm
even willing to overlook the 'mutant-supergenius' remarks, which I certainly
hope are tongue-in-cheek, and not dismiss you as a crackpot out of hand. You
really would be well served, though, by an effort to make your ideas more
accessible to others working in the field. In subsequent revisions of your
manifesto, which I do understand is a long way from finished, I'd consider
beginning with a concise, detached overview of exactly what your theory is and
how it differs from what's already out there. It's difficult to wade through
the mixture of theory, philosophy, and implementation ideas to discover what
you're actually proposing. An exposition of your intuitions is one thing, but
it's not science and it's not a theory. I honestly don't mean any disrespect
by this - I'm just trying to offer some constructive criticism, which you're
free to consider or dismiss.

>In my mind, I conceptualize "AI" and "transhuman AI" as being two
>entirely different fields. AI is stuck in the cardboard box of
>insisting on generalized processing, simplified models, and implementing
>only a piece of one problem at a time instead of entire cognitive
>architectures; they're stuck on systems that a small team of researchers
>can implement in a year.

Well, we can give running a try after we've got crawling under control. If
you've got a theory, then the orthodox thing to do is perform an experiment
and see what happens. If a small team of researchers can't produce some kind
of positive result in a year or so, then it's likely time to reconsider the
approach. I know this from first-hand experience - the devil truly is in the
details. I've had plenty of 'great' ideas that didn't scale, didn't converge,
got stuck on local maxima, turned out to be uncomputable, or simply turned out
to be obviously wrong once I thought about them with enough precision to turn
them into a program. You don't know, however sure you are that you know, until
you try.
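To be concrete about one of those failure modes, here's a toy sketch (mine,
purely illustrative, not any particular project of mine) of an idea 'getting
stuck on a local maximum': greedy hill climbing on a two-peaked function
converges, confidently, to the wrong peak - and you only find that out by
running it.

import math

def f(x):
    # Two peaks: a lesser one near x = -1, the global maximum near x = 2.
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.01):
    # Greedily move uphill until neither neighbor is better.
    while True:
        left, right = f(x - step), f(x + step)
        if left <= f(x) >= right:
            return x
        x = x - step if left > right else x + step

if __name__ == "__main__":
    for start in (-1.5, 1.0):
        x = hill_climb(start)
        # Starting at -1.5 halts on the lesser peak near x = -1;
        # starting at 1.0 finds the true maximum near x = 2.
        print(f"start={start:+.1f} -> x={x:+.2f}, f(x)={f(x):.3f}")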

>I like to think that I actually appreciate the humongous complexity of
>the human mind - in terms of "lines" of neurocode, not content, but code
>- and that I've acknowledged the difficulty of the problem of creating a
>complete cognitive architecture. I suffer from absolutely no delusion
>that a transhuman AI will be small. In the end, I think the difference
>is that I've faced up to the problem.

I'm not sure this fits with your previous assertion that, given your druthers,
you'd be done by 2005. You're certainly not alone in thinking the problem is
profoundly difficult.

-matt


