From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Tue Nov 16 1999 - 10:38:55 MST
Most people who have commented on Lyle's site have pointed out that he
critiques the "Santa Claus" view of nanotech, which no one takes seriously
anyway. However, Lyle's site includes a lot more than that. Most
notably, he argues that nanotechnology will not make ANYTHING cheaper
at all, a view which I think NOBODY but him holds, and he offers a
critique of the importance of artificial intelligence, which many of
us do take quite seriously.
After all, most of us agree that nano-assemblers would be hard to
program, indeed, perhaps Very Very hard to program. But the point is
that once the program is written, we never have to write it again.
He makes the point that getting the nanites to shape-shift from one
thing to another would be very hard, at least as hard as programming
Windows 2000 from scratch. This is true, but then comes the flip
side: once you DID do all that programming, performing the shift
would be as easy as INSTALLING Windows 2000, if not easier.
He makes the point about how hard it would be to get the nanites to build
a battleship, but seems to overlook the fact that ONCE we program the
nanites to build a battleship, we can tell the nanites to make more
without much effort.
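To make the cost structure concrete, here is a toy sketch in Python
(every name and number in it is made up for illustration; this is not
anyone's real assembler interface): the expensive step is authoring
the build program once, and every battleship after that is just one
more invocation.

# Toy illustration of fixed vs. marginal cost for assembler programs.
# All names and numbers are hypothetical, purely for illustration.

DESIGN_COST_HOURS = 1000000   # one-time effort to write the build program
RUN_COST_HOURS = 1            # effort to kick off one more build

def total_effort(num_battleships: int) -> int:
    """Pay the design cost once, then a tiny cost per additional run."""
    return DESIGN_COST_HOURS + RUN_COST_HOURS * num_battleships

for n in (1, 10, 1000):
    print(n, "ships:", total_effort(n), "hours total,",
          round(total_effort(n) / n), "hours per ship")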
His critique of AI is especially weak, IMO. Many of us do think that
we'll be able to create an artificial intelligence which is as smart as a
person, and that it will be able to augment itself so that it will be
smarter than a human not long thereafter. Lyle strongly disagrees,
arguing instead that "Automated systems always exist in a larger context
which is not automated." Certainly all PRESENT automated systems operate
this way, but no one is arguing that AI exists now, only that it CAN
exist in the future. He argues against what he calls the Bootstrapping
Theorem: "It will be possible for human-equivalent robots to go beyond
the point of human equivalence, even though it is not possible for us." My
defense: the biggest reason it's not possible for us now is NOT our own
complexity. Rather, our biggest bottleneck presently is that it's very
difficult for us to examine the precise state of our own brains in detail,
and even more difficult for us to make precise, detailed changes. An AI
would NOT have this problem, and would therefore be at a great advantage
in terms of augmentation.
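As a toy illustration of that point (purely hypothetical Python, not
a claim about how a real AI would be built): a program can read out
its own parameters exactly, try an edit, and keep only the edits that
help. That inspect-and-edit loop is exactly the one we cannot run on
our own neurons.

import random

class Agent:
    """Toy 'self-improving' agent: it can inspect and edit its own state."""

    def __init__(self):
        self.params = [random.uniform(-1.0, 1.0) for _ in range(4)]

    def performance(self) -> float:
        # Stand-in objective: higher is "smarter" (closer to all-ones).
        return -sum((p - 1.0) ** 2 for p in self.params)

    def self_improve(self, step: float = 0.05) -> None:
        # Examine the exact current state, try a small edit, and keep
        # it only if performance improves.
        before = self.performance()
        i = random.randrange(len(self.params))
        saved = self.params[i]
        self.params[i] += random.choice((-step, step))
        if self.performance() < before:
            self.params[i] = saved  # revert an edit that made things worse

agent = Agent()
for _ in range(500):
    agent.self_improve()
print("performance after self-modification:", agent.performance())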
Many AI people believe that if we ever got a human-equivalent AI, it would
not only augment itself beyond human capacity, it would do so
exponentially. Lyle counters that this is certainly not the case,
basically because there is "No Moore's Law for Software." What he fails
to notice is that the regular old Moore's Law of hardware will do the
trick. The programs of 1990 (presuming they don't crash outright on
newer systems) now run tens, if not hundreds, of times as fast as
they did on the systems for which they were originally designed. If
an AI merely
thought in exactly the same way as a human did, but one hundred times
faster, that alone would represent a huge step forward in intelligence.
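For what it's worth, the arithmetic behind that is simple enough to
spell out (assuming the usual doubling time of roughly 18 months,
which is an empirical trend and an assumption here, not a guarantee):

# Rough Moore's Law arithmetic: speedup from hardware alone, assuming
# performance doubles every 18 months (an assumption, not a law of nature).
def hardware_speedup(years: float, doubling_months: float = 18.0) -> float:
    return 2.0 ** (years * 12.0 / doubling_months)

print(hardware_speedup(9))    # 1990 to 1999: roughly 64x
print(hardware_speedup(10))   # a decade: roughly 100x, the case above
print(hardware_speedup(15))   # fifteen years: roughly 1000x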
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings