From: Eugen Leitl (eugen@leitl.org)
Date: Mon Apr 08 2002 - 07:30:37 MDT
On Sun, 7 Apr 2002, Adrian Tymes wrote:
> True. It's more common to turn off various things until the computer
> starts behaving again.
Even with Redmondware still dominating the home user market, uptimes have
improved.
> After a short, but very noticeable, delay to take care of any current
> tasks. Seen it on the other @home clients.
To repeat, a properly programmed client on a modern box is invisible in
terms of user reaction time.
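As a concrete illustration of what "properly programmed" means here, a minimal
sketch (Unix, Python; the toy compute kernel and the chosen nice level are
assumptions for illustration, not any existing client's code) would drop itself
to the lowest scheduling priority so interactive tasks always preempt it:

    import os
    import time

    def busy_work(n: int) -> int:
        # Stand-in for the client's real compute kernel.
        return sum(i * i for i in range(n))

    def main() -> None:
        # Raise the nice value by 19, i.e. drop to the lowest priority on
        # most Unices; interactive processes then always preempt this one.
        os.nice(19)
        while True:
            busy_work(100_000)
            time.sleep(0)  # yield the CPU between work units

    if __name__ == "__main__":
        main()

At the lowest priority the scheduler only hands the loop cycles the interactive
user isn't using, which is why a well-behaved client doesn't show up in user
reaction time.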
> That's true. And, in any case, you'd have the neurons care about
> whether their neighbors fired at time N-1 anyway. Just saying this
> would slow things down to the point where it wouldn't be all that
> practically useful.
The neighbours in this case are nodes on the local network loop: neighbouring
ports on a switch. The latency profile for those nodes is good even now.
Furthermore, the bootstrap stages are not realtime-bound. Plus, we're talking
about the networking and hardware of the next decades. Plus, once you're past
the bootstrap stage, you'll find the system will very soon design and build its
own hardware. The point is to make the bootstrap bottleneck as narrow as
possible.
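For what "the latency profile is good" looks like in practice, here is a rough
sketch of measuring the mean round-trip time to a neighbour port on the same
switch. The neighbour address and the assumption that it runs a plain UDP echo
service on port 9999 are illustrative only:

    import socket
    import time

    # Placeholder address for a neighbour node on the same switch; assumes a
    # simple UDP echo service listening on port 9999.
    NEIGHBOUR = ("192.168.1.2", 9999)

    def mean_rtt_ms(samples: int = 100) -> float:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(1.0)
        total = 0.0
        for i in range(samples):
            start = time.perf_counter()
            sock.sendto(str(i).encode(), NEIGHBOUR)
            sock.recvfrom(1024)  # block until the neighbour echoes back
            total += time.perf_counter() - start
        return total / samples * 1000.0

    if __name__ == "__main__":
        print(f"mean RTT to neighbour: {mean_rtt_ms():.3f} ms")

On a switched LAN this typically comes out well under a millisecond, which is
the point about neighbour latency above.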
> Which leaves the system open to forged results (once anyone finds out
> what's going on) and to acting improperly on a given user's computer
> at some given time. Statistically, a certain percent of users will
> become upset enough to leave - and, once they do, another certain
> percent of users will become that upset...
You're assuming a loss of service because the system is doing something else at
the time? If the system cares about keeping up appearances, it can certainly
provide you with a far superior service using a fraction of its initial
resources.
> But when the load becomes too high, they'll move to something that,
> while it may have less users, also gives them their computer back.
Continually pulling the hardware out from under the agents' feet is a selection
pressure that will result in less intrusive, more stealthy agents.
> Unlike the classical example, in this case there is, at any given
> time, potential for a frog's friend to tell them how great it is
> outside the boiling pot, or for the frog to find out for itself.
Do you watch your box 24 h/day? Do you run a packet sniffer on your local
network, and regularly look through the logs? Which system is keeping the
logs?
> Those have little to no impact on the system - unless they do, and the
> system is at all monitored (like a desktop, not like a server you
> stick in a colo and walk away from) in which case they tend to get
> noticed.
The point is that all current systems are Swiss cheese. With a library of about
ten (preferably hitherto undisclosed) vulnerabilities, a competent team acting
in a concerted manner could own 90% of all machines on the global network,
backbone infrastructure included.
> Seriously: nuclear war could also be the end of humanity, yet many
While nuclear war would certainly be extremely nasty, it would not kill even
the bulk of humanity. But it is definitely one of the blips on the global
threat screen, and I don't see how we can afford to ignore even a single blip
of many.
> fewer people complain about military solutions when their country is
> not actively involved in a war. Not to mention various plagues
> (natural or manmade) whose cures could only be found by an AI - and,
> given when they could strike, a self-bootstrapping "rogue" AI free of
> procedural development constraints may be the only way to get such an
> AI in time. In which case, *preventing* self-bootstrapping rogue AIs
> may mean the end of humanity.
>
> Frankly, the known harm that your solution would impose far exceeds
> the risk times possible danger of a self-bootstrapping rogue AI.
There is no solution save for making networked computers more secure at all
levels. So far, secure systems have not had higher market fitness.