Re: AI over the Internet (was Re: making microsingularities)

From: James Rogers (jamesr@best.com)
Date: Sun May 27 2001 - 14:42:55 MDT


On 5/27/01 12:26 PM, "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>
> If it's even theoretically possible for a human being to write distributed
> code, then you have to figure that a reasonably mature seed AI that
> started out on a tight Beowulf network will be able to adapt verself
> pretty easily for the Internet. Or FPGAs. Or nanocomputers. Or a
> galactic abacus if that's what's available.

This misses the point entirely. Who cares if an AI on a Beowulf cluster can
adapt itself to the Internet if being implemented on the Internet would
actually lead to a net *loss* of capability? Distributed systems don't
scale without limit as nodes are added, and they don't merely level off
either; past a certain point, adding nodes reduces performance. Slow networks
and complex problems just make the distributed system peak earlier, and AI
on the Internet would peak very early indeed.
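
To make that concrete, here is a toy model of the effect (every constant
below is an illustrative assumption, not a measurement): the parallel part
of the work shrinks as 1/N, but the coordination cost grows with N, so
speedup rises, peaks, and then falls.

    # Toy model of distributed speedup.  All constants are illustrative
    # assumptions: total work, serial fraction, and the per-node
    # coordination cost (tiny on a LAN, huge over the Internet).

    def run_time(n_nodes, work=1000.0, serial=0.05, latency=0.0001):
        parallel = (1.0 - serial) * work / n_nodes      # shrinks with N
        coordination = latency * work * (n_nodes - 1)   # grows with N
        return serial * work + parallel + coordination

    def speedup(n_nodes, **kw):
        return run_time(1, **kw) / run_time(n_nodes, **kw)

    if __name__ == "__main__":
        for n in (1, 4, 16, 64, 256, 1024, 4096):
            print(n, round(speedup(n), 2))
        # Speedup climbs, peaks (near a hundred nodes with these
        # constants), then falls as the growing coordination term
        # overtakes the shrinking parallel term.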

In fact, one could argue that a seed AI is unlikely to spread rapidly over
the Internet precisely because a seed AI would know enough about
distributed systems to calculate whether there was any net gain in doing
so. But it doesn't take a seed AI to do those calculations.
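
Under the same sort of toy model (again, every number is an illustrative
assumption), that net-gain check takes only a few lines:

    # Would migrating from a small low-latency cluster to a huge set of
    # high-latency Internet nodes actually finish the work sooner?
    # All constants are illustrative assumptions, not measurements.

    def run_time(n_nodes, work=1000.0, serial=0.05, latency=0.0001):
        parallel = (1.0 - serial) * work / n_nodes
        coordination = latency * work * (n_nodes - 1)
        return serial * work + parallel + coordination

    cluster  = run_time(64,     latency=0.0001)   # tight Beowulf LAN
    internet = run_time(100000, latency=0.05)     # WAN-scale latency
    print("cluster :", round(cluster, 1))
    print("internet:", round(internet, 1))
    print("migrating is a net gain:", internet < cluster)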
 

> But I do not agree that network latencies - or
> firewalls, for that matter - would present a significant obstacle to a
> mature, computer-native general intelligence.

This is somewhat naïve. I am not talking about problems that are merely
difficult or beyond current human ability; I am talking about cases where
you can sit down with pencil and paper, using well-understood math, and
prove that an implementation would be impossible or foolish. I have not
seen anything offered that explains how so many limits fundamental to the
mathematics of the problem can be blithely ignored by a sufficiently
intelligent AI. Would you please explain how an AI would do this without
invoking magic?
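
To take just one well-understood limit as an illustration (Amdahl's law;
latency and bandwidth bounds yield the same kind of argument): if a
fraction s of the computation is inherently serial, then no quantity of
hardware and no amount of cleverness in the distribution layer pushes the
speedup on N nodes past 1/s:

    S(N) = \frac{1}{s + (1 - s)/N} \le \frac{1}{s}

Add a per-message wide-area latency term to the denominator and the curve
does not merely flatten, it turns over -- which is the peak described
above.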

Firewalls are largely irrelevant: while some would arguably (and probably
verifiably) be impossible to penetrate even for an AI, there are enough
relatively unprotected resources on the net that the well-protected ones
can simply be ignored.

-James Rogers
 jamesr@best.com


