I can't see any signs of a sudden change in feedback mechanisms.
It's silly to expect the first assemblers to be easily programmed,
general-purpose devices, for the same reason that it would have been silly to
expect the first computer to run Smalltalk. There will be a number of
constraints imposed on designs by the limits on what is chemically
stable, and it will take a lot of trial and error before we know how
to handle those limits well.
LYBRHED@delphi.com (Lyle Burkhead) writes:
>There are no Genies and never will be. I'm not saying AI will never
>exist. What I'm saying is that it doesn't matter: any entity with
>at-least-human intelligence (artificial or not) won't work for free.
What will keep AI salaries noticeably greater than zero? The supply
of AI's will be, for most purposes, nearly unlimited. If I can create
a million copies of myself, how hard would it be to create my own
Exxon?
hanson@hss.caltech.edu (Robin Hanson) writes:
>>Why not? What will limit the speed of adoption of MNT? what will limit
>>the rate of advance of MNT technology, given the obvious (to me, anyway)
>>feedback mechanisms by which MNT will beget better MNT?
>
>Of course there are feedback mechanisms. But that doesn't imply fast
>change, only change at some speed. Imagining, designing, testing,
>tuning, marketing, etc. of designs all takes time and intelligence.
>And make no mistake - nanotech design is HARD. These are very complex
>machines that are imagined.
Very complex? Most of the nanotech designs I've heard people talking
about can be fully specified (at atomic-level precision) in minutes'
worth of communication. That will obviously change as the designs grow
more powerful, but my guess is that nanotech devices will often be
much simpler than corresponding low-tech devices. I agree with the
rest of your conclusions here, but doubt that complexity explains them.
hanson@hss.caltech.edu (Robin Hanson) writes:
>I agree that the 18 month doubling rate of capacity is likely to
>continue. However, I'm not convinced that this rate depends much on
>the "IQ" of the researchers involved.
>
>If this doubling rate were very sensitive to the total number of
>researchers, lots more researchers would be hired to speed up the
This assumes the employers would get much of the benefits. There
are obvious free-rider problems with many kinds of research.
>process. If it depended sensitively on the IQ of these researchers,
>a higher premium would be placed on hiring high IQ researchers.
Can a higher premium be usefully placed on IQ? My impression is that
the most successful employers already value IQ, or something equivalent,
above all other features in their hiring decisions. Nor is it clear that
higher incomes would attract more high-IQ people - those whose choice
of jobs is significantly influenced by money can usually get equity
with a high expected value instead.
>To the contrary, there are lots of good ideas, and the big expense is
>doing the grunge work of trying them out. Raw smarts helps a little
>in picking the winners, but not that much. How much grunge work gets
>done depends on the size of the global market for computers.
Raw intelligence does affect the speed and quality with which the
tasks you are calling "grunge work" get done.
>Tightly integrated self-augmentation groups *have* been tried, with
>poor success. The fact is that progress depends much more on lots of
>little improvements across the whole economy than many people want to
>admit.
Now that's a stronger and harder-to-evaluate argument.
--
------------------------------------------------------------------------
Peter McCluskey |                          | The theory gives the answers,
pcm@rahul.net   | http://www.rahul.net/pcm | not the theorist. - Allen Newell
pcm@quote.com   | http://www.quote.com     |