From: hal@finney.org
Date: Thu Sep 02 1999 - 18:25:36 MDT
Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:
> In my mind, I conceptualize "AI" and "transhuman AI" as being two
> entirely different fields. AI is stuck in the cardboard box of
> insisting on generalized processing, simplified models, and implementing
> only a piece of one problem at a time instead of entire cognitive
> architectures; they're stuck on systems that a small team of researchers
> can implement in a year.
>
> I like to think that I actually appreciate the humongous complexity of
> the human mind - in terms of "lines" of neurocode, not content, but code
> - and that I've acknowledged the difficulty of the problem of creating a
> complete cognitive architecture. I suffer from absolutely no delusion
> that a transhuman AI will be small. In the end, I think the difference
> is that I've faced up to the problem.
This reminds me of the joke about the guy who says that he and his
wife have an agreement to divide family responsibilities. "She makes
the little decisions, and I make the big decisions." When asked for
examples, he says, "She gets to decide where we'll go out on weekends,
what we'll watch on TV, and whose family we'll spend holidays with.
I get to decide who should be elected to the city council, what the
President ought to be doing about the economy, and whether the Yankees
should trade their best player for a bunch of rookies."
It sounds like you're saying that the "box" of conventional AI is simply
that its researchers are working on tractable problems where progress is possible,
while you would rather build up complex theoretical designs. But designs
without grounding in practical testing are very risky. The farther you
go, the more likely you are to have made some fundamental mistake that brings
the whole beautiful edifice to the ground.
Hal