Re: Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 02 1999 - 17:36:36 MDT


Matt Gingell wrote:
>
> >See http://pobox.com/~sentience/AI_design.temp.html [343K]
> >
> I skimmed your document and, with all due respect, I do not see that your model,
> as I understand it, differs significantly from classical AI. You have a number
> of modules containing domain-specific knowledge mediated by a centralized world
> model. This is a traditional paradigm.

Okay. First of all, that's not right. There are a number of modules
containing domain-specific *codelets* - not knowledge - and the
"domains" are things like "causality" or "similarity" or "reflexivity",
not things like "knowledge about the French Revolution" or "knowledge
about cars". Furthermore, the world-model is not centralized. There is
nothing you can point to and call a "world-model". The world-model is a
way of viewing the sum of the interrelated contents of all the domdules
(domain modules).
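
To make that concrete, here is a minimal Python sketch - my own
illustration, not code from the design document, with every identifier
hypothetical - of modules that hold codelets rather than knowledge, and
a "world-model" that is a computed view over all the domdules rather
than a stored structure:

    from typing import Callable, Dict, List

    # A codelet is a small procedure for reasoning within one domain;
    # it transforms partial structures rather than storing facts.
    Codelet = Callable[[dict], dict]

    class Domdule:
        """A domain module: a bag of codelets for one cognitive domain."""
        def __init__(self, name: str):
            self.name = name
            self.codelets: List[Codelet] = []
            self.contents: dict = {}    # structures built so far

        def add_codelet(self, codelet: Codelet) -> None:
            self.codelets.append(codelet)

    # Domains are cognitive primitives, not subject matter.
    domdules: Dict[str, Domdule] = {
        name: Domdule(name)
        for name in ("causality", "similarity", "reflexivity")
    }

    def world_model() -> dict:
        # Not a centralized structure: just a view, computed on demand,
        # over the interrelated contents of every domdule.
        return {name: dm.contents for name, dm in domdules.items()}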

> The macro-scale self improvement you
> envision is not compelling to me - if you've written a program that can
> understand and improve upon itself in a novel and open-ended way then you've
> solved the interesting part of the problem already.

Precisely. That is, in a nutshell, the entire problem of seed AI.
"Write a program that can represent, notice, understand and improve on
its local component code and global design paradigms in an open-ended way."
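
A toy illustration (mine; nothing this trivial appears in the design
document): in a reflective language, the "represent" step is nearly
free, and everything after it is the actual problem.

    import inspect

    def improve(fn):
        # A program can trivially *represent* its own component code...
        source = inspect.getsource(fn)
        # ...but to notice, understand, and improve that source in an
        # open-ended way is the entire unsolved problem of seed AI.
        return fn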

> Could you identify the cardboard box you think AI research is stuck in, and what
> you'd change if you were in charge. (You have 5 years...)

I conceptualize "AI" and "transhuman AI" as two entirely different
fields. AI is stuck in the cardboard box of insisting on generalized
processing and simplified models, and of implementing only a piece of
one problem at a time instead of an entire cognitive architecture; the
field is stuck on systems that a small team of researchers can
implement in a year.

I like to think that I actually appreciate the humongous complexity of
the human mind - in terms of "lines" of neurocode, code rather than
content - and that I've acknowledged the difficulty of creating a
complete cognitive architecture. I suffer from absolutely no delusion
that a transhuman AI will be small. In the end, I think the difference
is that I've faced up to the problem.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

