> I don't completely understand the distinction, unless it's in the idea
> that "somebody" must be an enemy. In the scenario I favor, the intent is to
> use the computing resources to augment the intelligence. The effect may be
> to deny the computing resources to others, or the SI may elect to stay
> covert by only using otherwise-unused resources.
The way I (and, I would say, Vinge) use the word, it refers to a hostile
or at least involuntary takeover. The reasons might be benign, as in "True
Names", but it is still something the owners of the resources
haven't agreed to.
> > Well, that depends on the possibility to self-augment using computing
> > resources. While computer security is full of holes everywhere, it isn't
> > that bad - cracking into computers to get more computing power is
> > non-trivial and takes time, even for an SI. The idea that a sufficiently
> > smart being could crack everything is a myth.
> >
> This depends strongly on how intelligent the SI becomes. Breaking into a
> system is reputed to require intelligence and persistence. Frequently, a
> clever idea can reduce the amount of brute-force computing needed to
> solve a security problem. If the "True Names" augmentation scenario is
> valid, then the SI will become progressively more capable of generating the
> appropriate clever ideas.
Note that you base this on the word "frequently", which is a doubtful
assumption (do we have any data on how computer intrusion "usually"
happens?). Usually these clever ideas are creative and involve
non-computer tools, such as physical break-ins, phone calls that trick
users into revealing information, or the chance to look through discarded
documentation. In a future information-dominated world this would be
simpler, but security would also be higher.
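To make the "clever idea versus brute force" point above concrete, here is a toy calculation (all numbers are my own illustrative assumptions, not data from the thread): replacing a blind search over 8-character alphanumeric passwords with a dictionary of common words and variations shrinks the search space enormously.

```python
import math

# Illustrative assumptions, not measurements:
ALPHABET = 62            # a-z, A-Z, 0-9
LENGTH = 8               # assumed password length
DICTIONARY = 200_000     # guessed size of a word list plus simple variations

brute_force = ALPHABET ** LENGTH       # blind search space
speedup = brute_force / DICTIONARY     # what the "clever idea" buys

print(f"brute force: {brute_force:.2e} candidates")
print(f"dictionary:  {DICTIONARY:.2e} candidates")
print(f"speedup:     ~10^{math.log10(speedup):.0f}")
```

Of course, the dictionary only wins when the password actually is a common word; the clever idea is a bet on human habits, not a guaranteed shortcut.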
> Clearly the proper strategy is to start by acquiring the
> (nearly) undefended systems, then using them to augment the SI to let it
> acquire the slightly better-defended systems, and so forth.
The problem is that the SI cannot use the systems at 100%, since that would
be too obvious and they would be shut down. So it would have a lot of
computers running very small processes of itself, with noticeable delays
over the net - this would limit it considerably. And it would have a hard
time keeping track of all the risks: one system administrator who
discovers the SI infiltration could do it a lot of harm if he (for
example) calls other administrators and they agree to shut down their
systems or spread a "viruskiller".
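The limits described above can be sketched as a crude model (every parameter here is invented for illustration): the SI's raw covert compute is the number of cracked hosts times the small stealth fraction it dares to use on each, and that raw figure is then discounted by the time lost waiting on network round trips whenever its scattered processes must coordinate.

```python
def effective_flops(hosts, flops_per_host, stealth_fraction,
                    latency_s, messages_per_s_of_work):
    """Toy estimate: covert compute across many hosts, discounted by
    the fraction of wall-clock time spent waiting on the network."""
    raw = hosts * flops_per_host * stealth_fraction
    # Seconds spent waiting per second of useful work:
    wait = latency_s * messages_per_s_of_work
    return raw / (1.0 + wait)

# Invented scenario: 10,000 cracked hosts at 1 GFLOPS each, using only
# 2% of each to stay hidden, 100 ms round trips, 20 coordination
# messages per second of work.
print(f"{effective_flops(10_000, 1e9, 0.02, 0.1, 20):.3e} effective FLOPS")
```

In this sketch the network penalty cuts the already-small covert slice by a further factor of three - the more the SI's thinking requires its pieces to talk to each other, the worse the delays bite.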
Actually, I have been thinking of making a computer or strategy game out of
this SI scenario; the players are SIs trying to take over the internet
and transcend.
> Many systems are
> essentially undefended, because the perceived cost of a penetration is
> less than the system administrator's perceived cost of defending the
> system. As a quick reality check on this, when did you last change
> your password? Is your new password a combination of your spouse's initials
> and digits from your telephone number?
:-) Fortunately, my password is not that simple. It is guessable, but you
would have to make some odd intuitive leaps and do some research to get it.
Still, as you point out, there are big holes in many systems - but are they
enough to provide a base for a useful SI in a reasonable time?
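A back-of-envelope calculation shows why the "spouse's initials plus phone digits" password above is so weak against someone who knows the victim (the candidate counts are my own assumptions, not a description of any real password): once the initials and phone number are known, only the arrangement remains to be guessed.

```python
import math

# Assumed attacker model: the attacker already knows the victim.
KNOWN_INITIAL_PAIRS = 2   # "AB" or "BA"
DIGIT_RUNS = 4            # contiguous 4-digit runs in a 7-digit number
LAYOUTS = 4               # initials first or last, upper or lower case

personal = KNOWN_INITIAL_PAIRS * DIGIT_RUNS * LAYOUTS  # targeted guesses
random_pw = 62 ** 8       # full 8-char alphanumeric space, for contrast

print(f"targeted guesses:     {personal}")
print(f"random 8-char space:  {random_pw:.1e}")
print(f"ratio:               ~10^{math.log10(random_pw / personal):.0f}")
```

The gap is the same "clever idea" effect again: personal knowledge replaces computation, which is exactly why so many of those big holes need no brute force at all.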
-----------------------------------------------------------------------
Anders Sandberg Towards Ascension!
nv91-asa@nada.kth.se http://www.nada.kth.se/~nv91-asa/main.html
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y