From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Mar 27 1999 - 19:46:28 MST
"Eliezer S. Yudkowsky" wrote:
>
> And how does the Elisson architecture handle the problem? I'd answer
> that, but I have to go make dinner. More on this later, and I ought to
> read the whole book before I go on. Nice book, though.
Well, I didn't read the book, but here's the rest:
In Elisson, high-level thoughts are spread out across many domdules;
they can use all the Notice-level functions of any domdule, and benefit
from all of them. The key thoughts, as with humans, take place on
*this* level, and not the domdule level. I believe that symbols, with
their power to symbolize across domdules ("cat" implies fur and meow),
are the key to the problem of interfacing. This is what I have called
"the research problem of symbolics", which has its own table of contents
in Coding a Transhuman AI.
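(To make that concrete, here is a rough sketch in Python of what a
cross-domdule symbol might look like. The class and method names are my
own illustration, not anything from the Elisson spec; all it is meant to
show is one symbol activating referents in several domdules at once.)

class Domdule:
    """A domain module: a store of domain-specific features it can notice."""
    def __init__(self, name):
        self.name = name
        self.activations = {}        # feature name -> activation strength

    def activate(self, feature, strength=1.0):
        self.activations[feature] = strength

class Symbol:
    """A symbol binds one referent in each of several domdules."""
    def __init__(self, name, referents):
        self.name = name
        self.referents = referents   # Domdule -> feature name

    def invoke(self):
        # Invoking the symbol touches every bound domdule, so a single
        # high-level thought can draw on all of them at once.
        for domdule, feature in self.referents.items():
            domdule.activate(feature)

tactile = Domdule("tactile")
auditory = Domdule("auditory")
cat = Symbol("cat", {tactile: "fur", auditory: "meow"})
cat.invoke()
print(tactile.activations, auditory.activations)
# -> {'fur': 1.0} {'meow': 1.0}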
Webmind integrates the Notice level through a common Represent format,
but that's not enough. High-level thoughts, Understanding, must be
built on the Notice level. Webmind contains no designed way (that I've
read yet) to build on the Notice level. Because the Notice functions
are all expressed using the same basic format, it is conceivable that
they will combine to form higher-level patterns - perhaps even across
formats, where agents exist to bind them. But such flashes of
Understanding will be tentative, because the system has not been
designed around them. The Notice level can *only* interact through the
Represent level, and only heuristics - Notice-agents that notice Notice
interactions - would allow Understanding to appear at all. Even then it
would be awkward, since the meta-Notice heuristic-agents can only notice
the Represent-level trails left by Notice agents, and must deduce
patterns from those trails.
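(Here is a rough sketch of the indirection I mean, again with made-up
names: Notice-level agents can only leave trails in a shared Represent
store, and a meta-Notice heuristic can only read those trails and guess
at co-occurrence patterns after the fact.)

from collections import Counter
from itertools import combinations

represent_store = []   # the common Represent format: a flat trail of records

def notice(agent_name, observation):
    # A Notice-level agent records what it noticed in the shared format.
    # Other agents never see its internals, only this Represent-level trail.
    represent_store.append({"agent": agent_name, "noticed": observation})

def meta_notice(store):
    # A heuristic-agent that "notices Notice interactions": all it can do
    # is count which observations keep showing up together in the trail
    # and deduce a tentative higher-level pattern from that.
    pairs = Counter()
    noticed = [record["noticed"] for record in store]
    for a, b in combinations(noticed, 2):
        if a != b:
            pairs[tuple(sorted((a, b)))] += 1
    return pairs.most_common(1)

notice("texture-agent", "fur")
notice("sound-agent", "meow")
notice("texture-agent", "fur")
notice("sound-agent", "meow")
print(meta_notice(represent_store))
# -> [(('fur', 'meow'), 4)]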
In Elisson, the goal of symbolics is to integrate the Notice level
across domdules into an Understanding level. I have proposed several
ways to do this and broken the problem into a dozen facets; I have
separated the problem of symbolic-code domdule description from the
problem of symbolic-code domdule manipulation, and pointed out how this
is another
case of the reciprocal relation between reflection and will, between
data and choice. But I confess that I really don't know how symbolics work.
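(The description/manipulation split itself is easy enough to sketch,
even though the hard part - where the symbols come from - is exactly
what I just said I don't know. The interface and method names below are
hypothetical.)

class DomduleSymbolInterface:
    """The two reciprocal faces a domdule would expose to the symbol level."""
    def describe(self):
        # Reflection / data: report the domdule's contents in symbolic form.
        raise NotImplementedError
    def manipulate(self, instruction):
        # Will / choice: accept a symbolic instruction and act on it.
        raise NotImplementedError

class TactileDomdule(DomduleSymbolInterface):
    def __init__(self):
        self.sensations = ["fur"]
    def describe(self):
        return {"domdule": "tactile", "contents": list(self.sensations)}
    def manipulate(self, instruction):
        if instruction.get("op") == "imagine":
            self.sensations.append(instruction["what"])

d = TactileDomdule()
print(d.describe())                               # reflection: data out
d.manipulate({"op": "imagine", "what": "claws"})  # will: choice in
print(d.describe())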
Webmind's answer to the problem of symbolics is that the Notice level
interacts through the Represent level; the Notice level uses no special
data formats and cannot be manipulated directly. It is a legitimate
answer; {solving the problem of interaction through reduction to a lower
level} is a very useful design principle. In this case, I do not think
it suffices. The Notice level requires its own data formats and its own
manipulative choices, the interface to symbols.
I don't know what programs these symbols in humans, and for Elisson I
have suggested heuristics, pattern-catchers, programmer intervention,
and even Elisson's conscious design; I have suggested nonsymbolic
solutions such as analogic thought and fractional heuristic soup. But
it is a problem that has to be addressed.
Anyway, that's my initial take on Webmind.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/singul_arity.html
Disclaimer: Unless otherwise specified, I'm not telling you everything
I think I know.