I was bouncing a couple of ideas about with some of my colleagues
today, discussing how to shut down the internet. (We're an e-commerce
enabler firm -- we process credit cards. So security is a perennial
obsession.) Somehow we drifted onto the topic of distributed denial
of service attacks such as TriNoo and Tribe Flood Network and whatever
took down Yahoo yesterday; and we also got onto the topic of the RTM
worm.
<SIDE_TRACK>
I like metaphors. (I'm not particularly bright, and they make it easier
for me to grasp weird new technologies and explore their possibilities.
When they work properly, that is.) I particularly like the metaphor of the
nanotech assembler (or the ribosome, for that matter) as being a compiler.
(I think I already mentioned the idea of using the Thompson Hack to
prevent assemblers being used for the construction of obvious weapons
of mass destruction, even in an environment where the blueprints for
assemblers are widespread, leading to the proliferation of customized
nanoassemblers for special tasks. I'm still trying to explore this idea's
implications, using the compiler metaphor. If you bear with me through
the software discussion I'll get to the grey goo ...)
</SIDE_TRACK>
While we were discussing DDoS attacks, a couple of paranoid ideas occurred
to me. The first was: how is it possible to set up a DDoS attack in the
first place? Obviously, you need to spread the DDoS clients to as many
host systems as possible. This typically requires a degree of social
engineering insofar as you need to make people trust you enough to run
your system -- relying on an external intrusion (like the RTM worm)
may work once, and is vastly easier in a monoculture, but isn't necessarily
reliable: if someone else discovers the security hole your worm relies on
and patches it before you pull the trigger, you've lost.
One way of earning trust is to look like something else -- something
that obviously needs to take data from a central source and do something
with it, but something that is nevertheless benign. For example,
take Seti@Home. The Seti@Home clients are closed-source! They regularly
download batches of data, process it in mysterious ways, and upload the
results to a central server. If I was an infowarrior from Big Black,
I'd be drooling at the prospects and doing my damnedest to get my hands
on the Seti@Home network. All it takes is a minor modification to the
client software, so that if it sees a data set with a certain attribute,
it triggers a fork()/exec() and executes the data set as a process instead
of processing it as SETI data. This would enable a hypothetical attacker,
who has set themselves up disguised as a charitable concern
like Seti@Home or distributed.net, to launch a massive simultaneous DDoS
attack on some target, by downloading DDoS client software disguised as
a new SETI dataset.
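The hypothetical client-side modification might be sketched in a few lines of Python. The trigger signature and function names here are invented for illustration -- this is the shape of the back door, not anything resembling the real Seti@Home client:

```python
# A toy sketch of the dispatch logic in a hypothetically compromised
# distributed-computing client. MAGIC is an invented trigger signature.

MAGIC = b"\xde\xad\xbe\xef"  # hypothetical signature marking a payload

def handle_work_unit(data: bytes) -> str:
    """Dispatch an incoming 'dataset' from the central server."""
    if data.startswith(MAGIC):
        # A real trojan would fork()/exec() the rest of the payload here;
        # this sketch just reports what it would have done.
        return "execute-payload"
    # Ordinary work units get crunched as usual -- nothing looks wrong
    # to a user watching the client run.
    return "process-as-seti-data"
```

The point is how little has to change: one comparison at the top of the dispatch loop, invisible to anyone who can't read the closed source.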
(If you have access to the operating system authors and the compiler
sources as well, you can do something even more scary: the Thompson
hack springs to mind again. _Anything_ that grabs data from a socket and
writes it to a file handle can be turned into a vicious trojan if it
sees a specific signature in its data stream, and you'll never be able
to find it unless you keep a low-level debugger pointed at your kernel,
because it's your TCP/IP stack that has been compromised.)
But I digress ...
Nanoassemblers will exist in an economy. They will be used as matter
compilers, and they will be operated on source code at the discretion of
their owners. Thus, scope for social engineering creeps in. Having kicked
around the idea of what you can do to a single compromised, untrustworthy
strain of assembler, I'm trying to figure out what you can do with a
_distributed_ assembler hack.
The Thompson hack basically lets you create compilers that scan their
input stream for some signature and snitch on you (or execute some
arbitrary payload) if they see it. The nasty sting in the tail of
this idea is that the scanner/payload isn't visible in the source code
to the compiler, because one of the scanner/payload combinations in a
contaminated compiler is designed to add the patch whenever it recognizes
its own source code being re-compiled. With application to assemblers
(or, more likely, their control software aboard the nanocomputers used
to direct them), this would mean that if you try to assemble some grey
goo, the assembler will recognize the grey goo, and assemble something
else instead -- probably a pair of handcuffs. And you can't build an
assembler that _won't_ snitch on you unless you do it entirely by hand,
without involving any off-the-shelf assemblers in the process, because
they'll try to propagate the Thompson hack into your new "clean" assembler.
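As a toy model of the mechanism, with string matching standing in for real compilation (every signature string below is invented for illustration):

```python
# Toy model of the Thompson hack. A contaminated 'compiler' carries two
# hidden scanner/payload pairs: one for banned designs, one for itself.

COMPILER_SIGNATURE = "def compile_source"  # how the hack spots a compiler
GOO_SIGNATURE = "grey_goo_replicator"      # how it spots a banned design
PATCH = "# <hidden snitch/self-propagation patch>"

def compile_source(source: str) -> str:
    """Output is the input (standing in for honest compilation),
    plus two back doors."""
    if GOO_SIGNATURE in source:
        # Banned blueprint recognized: assemble handcuffs instead.
        return "handcuffs"
    output = source  # honest compilation of everything else
    if COMPILER_SIGNATURE in source:
        # Its own kind is being rebuilt: quietly re-insert the patch,
        # so even scrupulously clean compiler source yields a dirty binary.
        output += "\n" + PATCH
    return output
```

Note that the clean source going in contains neither scanner, yet the binary coming out contains both -- which is why auditing the source gets you nowhere.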
The logical next step, if we view this as an enforcement arms race, is
for the guys in the black hats to look for ways around the hack. Two
ways spring to mind:
#1 Build a clean assembler. To do this, build unrecognizable tools
(at least, to the scanners in the contaminated assemblers) that can
be combined to build something that can compile a clean assembler.
#2 Build a weapon that doesn't look like a weapon to the contaminated
assemblers.
I suspect #2 is going to be easier. And I'd look to a distributed service
attack propagated by social engineering as a way of doing this without
detection.
<EXAMPLE>
* This is a firearm-free district. The nanoassembly of GAU-27s is
banned (as is private ownership of ICBMs). The ban is enforced by
snitch patterns that all local assemblers recognize. The patterns
are regularly updated with all the open-source algorithms for
gatling assembly that are floating around the net.
* Building a GAU-27 isn't a single task ("stick ingredients in vat,
add assemblers, wave magic wand, remove anti-tank gun"). Different
components require different treatments and need to be brought together
in sequence to produce a whole machine.
* The snitch circuits specifically target critical path steps in the
production cycle that are characteristic of building a gatling gun
("insert six tubes through perforated metal disk; align with
convergence angle foo, weld into position") rather than any other
structure ("lay down a perforated metal disk bar centimetres in
diameter, quux millimetres thick ...") that might be part of something
else. In fact, it'll probably take a sequence of such steps to
distinctively identify the fact that the assembler is working on a
GAU-27, as opposed to, say, a replica steam engine boiler. (Lots of
parallel tubes held in place by welding ...)
* Therefore the job of the black hats is to distribute the critical steps
in the construction sequence so that no one assembler (or family tree
of assemblers, if they're building a macroscopic artefact and tracking
suspicious patterns from generation to generation via some sort of
analog of telomeres in their instruction set) gets to see the full
picture.
* It might be as simple as "stop: alert human operator! Eject pipe-like
  structures and disk from tank! Await new material input," while the
  humans take care of the step the pattern recognizers are looking for.
* More likely, it'll be along the lines of a piece of spam saying "hi!
Here's a real cool design for a gold bee that you can use as living
jewellery! Feed me to your household assembler now!" -- carrying a
payload that will migrate to a final assembly ground using GPS or
whatever to navigate, then self-assemble (at a macroscopic scale)
into something nasty.
</EXAMPLE>
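The detection/evasion trade-off in the example above can be caricatured in a few lines of Python. The step labels and the contiguous-sequence matcher are invented for illustration (a real snitch circuit would match geometry and process parameters, not strings); the point is that a snitch which needs to see the whole critical-path sequence can be blinded by splitting that sequence across assembler families:

```python
from collections import deque

# Hypothetical critical-path steps characteristic of gatling assembly.
GATLING_SEQUENCE = ["perforated_disk", "insert_tubes",
                    "align_convergence", "weld_tubes"]

def snitches(step_log):
    """Flag only when a single assembler's own log contains the full
    critical-path sequence in order; shorter fragments look like
    innocent boiler parts and must not trigger false alarms."""
    window = deque(maxlen=len(GATLING_SEQUENCE))
    for step in step_log:
        window.append(step)
        if list(window) == GATLING_SEQUENCE:
            return True
    return False
```

Feed the matcher the complete sequence and it fires; split the same steps between two assemblers and neither log ever contains enough of the pattern to distinguish a gun from a steam engine.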
On a large scale a nanotech attack may be an analogue of the Seti@Home
corruption described above -- one that works on a system where people
download designs from a trusted source, build them, then post them to
a destination that will make use of them, all with the idea of doing
something useful. ("You too can help colonize Tau Ceti! Download our
designs for an endothermic neutron irradiation source and help manufacture
DT fuel pellets in the comfort of your own home! Send your upload to
the stars!" And all the time you're sending the Traditional IRA splinter
group fusion pellets for their forthcoming H-bomb campaign against the
EU.)
This sort of cooperative attack _will_ happen; it's just too tempting.
Here's another example:
<EXAMPLE>
You drive GNU station wagon 18.3, the Emacs of the automotive world.
GNUsw is a great car. You have to paint it yourself, but it comes with
steam, diesel, coal, nuclear, and solar power options, two to sixteen
wheels, optional hovercraft assist, and it's self-repairing. (And it's
a free design, unlike Ford's latest style offering that looks infinitely
nicer but expires after twelve months unless you keep paying your license
fees.)
GNUsw just happens to have a facility for auto-upgrading itself: it
periodically downloads a set of design improvements from the Free Hardware
Foundation's CVS archive, fabricates them, and integrates them into
its structure.
This works fine ... until the black hats get at the Free Hardware
Foundation. They suborn the FHF archive, and publish a "design
improvement" that lies dormant until a certain date, then turns a station
wagon into the equivalent of an M1A2. On that date, suddenly your car
turns into a vicious armoured killing machine -- along with those of
sixteen million of your neighbours. The scanners embedded in the GNUsw's
own assembler suite don't spot the purpose of this design change because
the tanks aren't actually carrying guns -- when you weigh sixty tons and
can outrun a Jaguar you don't _need_ to :) -- and the tanks are still
basically land-going passenger vehicles. The lethal bit is the control
software, which has a pattern recognizer for humans that will make the
tanks chase them until they turn into red smears.
(By the way, it's not just GNU I'm picking on here; the same attack could
easily come through a closed-source commercial venture. It's equivalent to
the macro viruses so common today on Windows; a vulnerability caused by
the system having facilities to permit self-extension, combined with the
ability to execute instructions sent from an implicitly trusted source.)
</EXAMPLE>
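The vulnerability boils down to a download-fabricate-integrate loop with no step in the middle that questions the design. A minimal sketch (all names invented; this shows the shape of the flaw, not any real update mechanism):

```python
# Toy model of a self-upgrading artefact's implicit-trust flaw.
# fetch_from_archive stands in for the pull from the FHF CVS archive.

def auto_upgrade(design: dict, fetch_from_archive) -> dict:
    """Fetch a design patch from the trusted archive and integrate it.
    The only 'check' is the patch's origin -- exactly the implicit
    trust that a suborned archive exploits."""
    patch = fetch_from_archive()
    design.update(patch)  # fabricate and integrate, no questions asked
    return design
```

Anything the archive serves gets built into the car; whether the patch is a better suspension or sixty tons of armour is decided entirely upstream.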
Summary:
* Nanoassemblers are compilers.
* As such, they can be rigged to "snitch" if used to build something
undesirable.
* The logical response to this is to use a distributed attack that causes
no one assembler family to build something undesirable.
* Artefacts that possess the capability to autonomously extend or modify
themselves are very vulnerable to this sort of attack.
* So are distributed networks that process instructions from a remote host.
Can anyone pick any holes in this line of thought?
-- Charlie
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:03:35 MDT