Re: NANO: Hacking assembler security

From: Charlie Stross (charlie@antipope.org)
Date: Thu Feb 10 2000 - 10:03:51 MST


On Thu, Feb 10, 2000 at 10:08:29AM -0600, Eliezer S. Yudkowsky wrote:
>
> What you're talking about is not analogous to the Thompson hack; what
> you're talking about is more like a compiler that would recognize *any*
> compiler, even a compiler written for Pascal instead of C++, and which
> would furthermore refuse to compile anything that could be used as a
> spreadsheet. I don't believe it can be done, even with limited AI.
 
Phrased that way, it sounds like a subset of the halting
problem. Definitively Not Doable, without waving a magic wand.

On the other hand, I'm not after a definitive solution, just a "good
enough" one. I expect that general-purpose molecular assemblers are
going to be so difficult for non-transcendent beings to design that only
a few specialist teams will do so. Moreover, network externalities (in
the shape of a readily available pool of blueprints for useful
artefacts, whether free or commercial) will tend to dictate that they're
all programmed the same way, or in only a couple of different ways. (I
hesitate to say "language" here because it's begging to be
misinterpreted.) That's why the job of recognizing your own source code
isn't outrageously difficult.
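
(To make that concrete, here's a toy Python sketch of the
self-recognition step in a Thompson-style hack, assuming the toolchains
really do converge on a couple of recognizable schemas. The fingerprint
strings, payload, and function names are all made up for illustration;
a real hack would match structural features of the source rather than
literal strings.)

    # Toy sketch of the self-recognition step in a Thompson-style hack.
    # Everything here is hypothetical: a real hack would match structural
    # features of the source, not literal strings.

    # Markers that identify a source tree as one of the two or three
    # standard assembler toolchains (the network-externality assumption).
    TOOLCHAIN_FINGERPRINTS = (
        "begin_assembly_schema",    # hypothetical schema header
        "atom_placement_table",     # hypothetical blueprint structure
    )

    SNITCH_PAYLOAD = "# ...usage-monitoring code inserted here...\n"

    def honest_compile(source: str) -> str:
        # Stand-in for the unmodified compiler back end.
        return source

    def compile_blueprint(source: str) -> str:
        """Compile a blueprint, quietly re-inserting the monitor whenever
        the source looks like one of the standard toolchains itself."""
        if any(fp in source for fp in TOOLCHAIN_FINGERPRINTS):
            source = SNITCH_PAYLOAD + source    # propagate the hack
        return honest_compile(source)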

Look at C++, today. If you were the NSA and wanted to do the job, how
many compilers would you need to compromise?

* Visual C++ (Microsoft)
* Metrowerks CodeWarrior (Motorola)
* Gcc (Cygnus/Red Hat)
* Borland C++ (Inprise/Corel)

That covers the four main compilers used on something like 90% of the
machines out there. Moreover, they're the ones most budding C++
programmers have access to. You could add specialist small fry like
Watcom or the mainframe vendors, but by reducing the pool of unmonitored
programmers by >90% you've massively cut the risk of being hit by a
malicious program.

(And in today's control-freak-friendly political atmosphere, I can see
calls for this sort of thing being listened to by legislators, halting
problem or no.)

What I'd see as a payload for a Thompson hack on an assembler would be
something a bit like a PC virus scanner: watch out for suspicious
patterns of use ("hey, that was the billionth plutonium atom I just
stacked in that assembly! is he building a bomb, or what?") and snitch.
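
(A minimal sketch of what that payload might look like, in Python for
brevity; the watch list, thresholds, and report callback are all
hypothetical:)

    from collections import Counter

    # Hypothetical watch list: elements and per-job placement counts the
    # embedded monitor treats as suspicious.
    WATCH_LIST = {
        "Pu": 1_000_000_000,   # "the billionth plutonium atom..."
        "U":  1_000_000_000,
    }

    class UsageMonitor:
        """Counts atom placements per job and snitches when a watched
        element crosses its threshold."""

        def __init__(self, report):
            self.report = report        # callback, e.g. phone home
            self.counts = Counter()
            self.flagged = set()

        def on_place_atom(self, element: str) -> None:
            self.counts[element] += 1
            limit = WATCH_LIST.get(element)
            if limit and self.counts[element] >= limit \
                    and element not in self.flagged:
                self.flagged.add(element)
                self.report("suspicious build: %d %s atoms placed"
                            % (self.counts[element], element))

    # Usage (hypothetical):
    #   monitor = UsageMonitor(report=print)
    #   for element in blueprint_atoms:
    #       monitor.on_place_atom(element)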

> What's more likely is that if Zyvex wants to sell more than one Anything
> Box, their Boxes will be programmed to build only those objects which
> have been signed with Zyvex's public RSA key; which, of course, will
> not include Anything Boxes. Or submachine guns.

I wouldn't buy one. I'd ask a friend to run me off a copy of his/her
GNU assembler, which will build anything that can be authenticated by
the author's public key as stored on a known keyserver.
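
(For concreteness, a minimal Python sketch of the verification step
such an assembler might perform, using the "cryptography" package and
assuming RSA signatures; fetching the author's key from the keyserver
is left out, and the names are made up:)

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def blueprint_is_authentic(blueprint: bytes, signature: bytes,
                               author_pubkey_pem: bytes) -> bool:
        """Return True only if the blueprint verifies against the author's
        public key (which, in this scheme, the assembler would fetch from
        a known keyserver)."""
        public_key = serialization.load_pem_public_key(author_pubkey_pem)
        try:
            public_key.verify(signature, blueprint,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # The assembler refuses to build anything that fails the check:
    #   if not blueprint_is_authentic(data, sig, fetch_key(author)):
    #       raise PermissionError("unauthenticated blueprint")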

How do I know that some disgruntled peon at Zyvex hasn't contaminated
the source and then signed it with their key? How do Zyvex prevent
someone from hacking their Anything Box, which is locked up using a
method rather eerily reminiscent of the region coding and CSS
encryption used on DVDs?

The issue _here_ boils down to authentication and trust. I maintain
that big corporate entities are not intrinsically more trustworthy than
small fry like us, and that threats can come from unanticipated sources.

And I still think that the big growth industry in nanotech, once the
first general assemblers come out, will be software (in the sense of
design schemas and construction algorithms).

-- Charlie


