Re: SECURITY: Kaaza

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Apr 07 2002 - 05:19:05 MDT


On Sun, 7 Apr 2002, Samantha Atkins wrote:

> If you check the original post I think you may notice that I don't

Time to upgrade to a real mailer. It's futile trying to find the thread
now in the mess that is my inbox. (Mutt, Really Soon Now, Honest).

> believe that merely having access to spare computational capacity is
> sufficient to create an SI much less to do so with a hard-edged
> transition.

Sufficient? I don't know. However, Moore's law (integration density, not
performance, which is something different) is still for real, as is
networking technology (though the transition to purely photonic networks
has been delayed by the dotbomb). The architectures are still sufficiently
all-purpose (in fact, in future the trend towards increasingly dedicated
architectures might reverse with the advent of embedded RAM, FPGA cores
and eventually cellular architectures, which will go to molecular scale in
15-20 years), security is still nonexistent, there is no trend towards
addressing the security issue by means other than actively suppressing
exploits and reports of them, and the meme that global resources can be
marshalled in a concerted effort is firmly out there.

Though we're currently very probably safe, the trend is there. Bootstrapping
AI is hard, but the longer we wait, the more probable it gets.
 
> Did I say anything that justifies you acting as if I spouted
> some true-believer notion? I don't think so.

You mentioned the capital-F word, though. Since I don't see how you could
define Friendliness today, let alone scale it over so many orders of
magnitude (including undecidability issues at system scale itself), I
don't believe in Friendliness. (Unless somebody can show me a proof that
Goedel was wrong, plus that you can predict state X at t+N without having
to traverse the trajectory of N intermediate states -- always.)
 
> So what? Please show a coherent plan of how these computers can
> be used to force the development of an SI. I don't see that

My whole point was that worrying about hidden cargo in a P2P suite is really
unfounded, since a group that's serious about taking over >80% of the global
networked hardware out there could do that anyway. I was not saying anything
about what that would be good for.

> they can so I think the worry of that developing from the unused
> computational capacity merely being tapped is unfounded.

You want a coherent plan? I thought the list archives were lousy with them.

If I were crazy enough to try to breed an SI, I would first map the fitness
space of (spiking) integer automaton networks, which map well to the existing
hardware base, using evolutionary algorithms (there are several ways to do
this). This is embarrassingly parallel, so it maps well to the loosely
coupled node model of current and mid-future global networks. I would
experiment with methods of iterated pattern generation on said
3d-lattice-aligned nodes in a CA framework, grown from compact genome seeds.
After I got that licked, I would co-evolve the mutation function on the above
integer automaton substrate (i.e. both the substrate and the mutation
function mutate (it's important to make the framework as unconstrained yet
self-adjusting as possible), and individuals composed of both compete),
using simple problem-solving tasks: navigation in 3d space (playing against
themselves, and people occasionally), text, voice and image comprehension,
remote system subversion, and the like.
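
To make the shape of that loop concrete, here is a toy sketch in Python of
co-evolving a genome together with its own mutation function. The rule
encoding, the fitness task and all parameters are placeholders I made up for
illustration, not the actual scheme:

# Toy sketch only: co-evolving an integer "rule" genome together with its
# own mutation rate, with a placeholder fitness task standing in for the
# real navigation/comprehension/subversion tasks.

import random

RULE_LEN = 32        # integers encoding the automaton update rule (placeholder)
POP_SIZE = 64
GENERATIONS = 50

def random_individual():
    # An individual carries both the substrate genome and its own mutation
    # rate, so the mutation function evolves along with the substrate.
    return {"rule": [random.randint(0, 255) for _ in range(RULE_LEN)],
            "mut_rate": random.uniform(0.01, 0.2)}

def fitness(ind):
    # Placeholder task: reward rules whose sum approaches a target value.
    return -abs(sum(ind["rule"]) - RULE_LEN * 128)

def mutate(parent):
    # The parent's own (itself mutating) mutation rate drives substrate change.
    rule = [random.randint(0, 255) if random.random() < parent["mut_rate"] else g
            for g in parent["rule"]]
    mut_rate = min(0.5, max(0.001,
                            parent["mut_rate"] * random.uniform(0.8, 1.25)))
    return {"rule": rule, "mut_rate": mut_rate}

pop = [random_individual() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Scoring each individual is independent of all the others, i.e.
    # embarrassingly parallel: each one could be farmed out to its own node.
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 4]
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP_SIZE - len(survivors))]

best = max(pop, key=fitness)
print("best fitness after %d generations: %d" % (GENERATIONS, fitness(best)))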

This is still embarrassingly parallel. What to do next is less clear, but it
could involve ramping up task complexity, using free-running co-evolution in
hostile scenarios, and the like.
 
Since I don't think doing the above would be a smart idea, I have not pursued
the matter in any detail. (In fact, if I knew for certain the above would
work and could be useful to anybody, I wouldn't have posted that outline.)

> If you are sandboxing to avoid it spontaneously evolving into an SI
> then I think the efforts are a waste of energy as the possibility
> being protected against is impossible to occur in such a manner.

There's nothing spontaneous about the emergence of an SI. It's a (very) large
scale engineering effort. The stuff I mentioned is about hardening systems
against remote hostile takeover, thus limiting the availability of initial
substrate. Because there's a bootstrap effect, limiting the initial
substrate base is worthwhile.

Of course, no one cares about this currently.
 
> Now a really good AI team with a very well thought out SI seed
> architecture might be able to do something with such a net of
> computational power but even then the poor overall latency of the
> system would work against the result being all that impressive. Or do
> you see something that I am missing?

We're comparing apples, oranges and wombats here. First, a lot of the SI
creation bootstrap tasks are embarrassingly parallel. Folding@home crunches
for about a day before making a (brief) connection to the network. Many
classes of tasks are like that.
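
To put a (purely assumed) number on that compute-to-communication ratio, a
quick back-of-envelope; the work-unit size and upstream bandwidth below are
illustrative guesses, not Folding@home measurements:

# Duty cycle for a crunch-a-day, connect-briefly work unit.
crunch_s     = 24 * 3600    # ~a day of local computation per work unit
result_bytes = 5e6          # assume a few MByte of results per unit
upstream_Bps = 40e3         # assume a 40 kByte/s cable-modem upstream

upload_s = result_bytes / upstream_Bps
print("upload takes ~%.0f s" % upload_s)                             # ~125 s
print("network duty cycle ~%.3f%%" % (100.0 * upload_s / crunch_s))  # ~0.14%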

Secondly, depending on how many "neurons" you can package within a node, you
can reduce the total bandwidth of the exchanges. If we're talking about a
3d network, by varying the surface/volume ratio of each node's chunk you can
tune the network to be utilized optimally (in the U.S. alone there are
currently about 6*10^6 machines on cable modems, with up to 40 kByte/s of
bandwidth and a few tens of ms of latency on the local loop).
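
A minimal illustration of the surface/volume point: for a cubic chunk of the
lattice hosted on one node, computation scales with the volume (n^3 cells)
while neighbour exchange scales with the surface (6*n^2 cells), so boundary
traffic per unit of computation falls as ~6/n. The bytes-per-cell figure is
an assumption on my part:

# Surface/volume scaling for a cubic chunk of a 3d lattice on one node.
bytes_per_cell = 4          # assumed state exchanged per boundary cell
upstream_Bps   = 40e3       # the cable-modem upstream figure from above

for n in (10, 100, 1000):                       # chunk edge length, in cells
    ratio = 6.0 * n ** 2 / n ** 3               # surface/volume = 6/n
    boundary_bytes = ratio * bytes_per_cell     # bytes sent per cell-update
    sustainable = upstream_Bps / boundary_bytes # cell-updates/s the link allows
    print("n=%5d  surface/volume=%.4f  sustainable cell-updates/s=%.2e"
          % (n, ratio, sustainable))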

Thirdly, I was not talking about today's networks, nor today's nodes.
Residential Ethernet, currently being rolled out in a few parts of the world,
gives you Fast Ethernet (200 MBit/s full duplex on the local loop) access,
with switches meshed over GBit Ethernet routers. The first machines with
onboard GBit Ethernet have been shipping already; simultaneously, few-port
GBit switches have dropped in price sufficiently that you'll see GBit
Ethernet displacing most of the current niche occupied by Fast Ethernet over
the course of the next 2-3 years. 10 GBit Ethernet is being prototyped
right now, with deployment anticipated in less than a decade, much earlier
in network backbones. Now 10 GBit/s doesn't sound like a lot, but you should
look at the data rate processed by both our retinas. And current systems
are basically overwhelmed by 1 GBit/s data rates, and can't even extract
features from a USB webcam properly, which is a task that maps comparatively
well onto the current hardware paradigm.
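
For the retina comparison, a crude estimate; the biological figures below
are rough ballparks I'm assuming for illustration, not numbers from the post:

# Crude estimate of the raw data rate hitting both retinas.
photoreceptors  = 1.25e8    # rods + cones per eye, order of magnitude
sample_hz       = 30        # assumed effective temporal resolution
bits_per_sample = 8         # assumed amplitude resolution

both_eyes = 2 * photoreceptors * sample_hz * bits_per_sample
print("raw photoreceptor stream, both eyes: ~%.0e bit/s" % both_eyes)  # ~6e10
# i.e. tens of GBit/s -- a 10 GBit/s local loop is roughly "retina scale"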

Within the decade (give or take a few years) we should see the first samples
of (2d) molecular memory shipped, and conventional CPU clocks ramping up
to well over 10 GHz. Unconventional (cellular) architectures, if indeed
deployed, will be capable of switching at the several-THz scale. Now, if (in
a decade or two) I've got a >>10^9 cell system with native switching speeds
of ~10^12 Hz, connected to a ~10^10 bit/s local-loop network as a *single
node*, while there are ~10^9 such nodes on the network, that's a
considerable potential resource.
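
Multiplying those figures out (the same numbers as in the paragraph above,
taking 10^9 as the lower bound for the cell count):

# Aggregate switching events implied by the figures above.
cells_per_node = 1e9     # >>10^9 cells per node, taking the lower bound
switch_hz      = 1e12    # ~10^12 Hz native switching speed
nodes          = 1e9     # ~10^9 such nodes on the network

per_node  = cells_per_node * switch_hz     # ~1e21 cell-switching events/s
aggregate = per_node * nodes               # ~1e30 cell-switching events/s
print("per node:  ~%.0e events/s" % per_node)
print("aggregate: ~%.0e events/s" % aggregate)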

And the generation after that will be based on 3d molecular circuitry.



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:13:18 MST