From decker.christian at gmail.com  Tue Sep  6 11:27:01 2016
From: decker.christian at gmail.com (Christian Decker)
Date: Tue, 6 Sep 2016 13:27:01 +0200
Subject: [Lightning-dev] [BOLT Draft] Onion Routing Spec
In-Reply-To: <87y4376q25.fsf@rustcorp.com.au>
References: <87wpjl3rzh.fsf@rustcorp.com.au>
	<20160815120647.GA2595@nex>
	<87h9ajae48.fsf@rustcorp.com.au>
	<20160818090622.GA28345@nex>
	<871t1lefuo.fsf@rustcorp.com.au>
	<20160819183647.GB15105@lightning.network>
	<87pop1df71.fsf@rustcorp.com.au>
	<20160902120822.GA4575@nex>
	<87y4376q25.fsf@rustcorp.com.au>
Message-ID: <20160906112701.GA28919@nex>

On Mon, Sep 05, 2016 at 11:55:22AM +0930, Rusty Russell wrote:
> Christian Decker writes:
> > I'd like to pick up the conversation about the onion routing protocol
> > again, since we are close to merging our implementation into the
> > lightningd node.
> >
> > As far as I can see we mostly agree on the spec, with some issues that
> > should be deferred until later/to other specs:
> >
> >  - Key-rotation policies
>
> OK, I've been thinking about the costs of key-rotation.

I forgot that we have two potential key-rotations:

 - Rotating the key used in transactions that hit the Bitcoin network
 - Rotating the public key used for the DH shared secret generation in
   the onion routing protocol

For the moment I was concentrating on the latter.

> Assumptions:
> 1) We simply use a single pubkey for everything about a node, aka its ID.
> 2) Medium-scale public network, 250,000 nodes and 1M channels.
> 3) Every node knows the entire public network.
>
> Each node ID is 33 bytes (pubkey); each channel is 6 bytes (blocknum +
> txnum). You need to associate channels -> IDs, say another 8 bytes per
> channel.
>
> That's 22.25MB each node has to keep.
>
> The proofs are larger: to prove which ID owns a channel, each one needs
> a merkle proof (12 x 32 bytes) plus the funding tx (227 bytes, we can
> skip some though), the two pubkeys (66 bytes), and a signature of the ID
> using those pubkeys (128 bytes, schnorr would be 64?).
>
> That's an additional 800MB each node has to download to completely
> validate, and of course some nodes will have to keep this so we can
> download it from somewhere. That's even bigger than Pokemon Go :(
>
> Change Assumptions:
> 1) We use a "comms" key for each node instead of its ID.
> 2) Nodes send out a new comms key, signed by ID.
>
> That's another 33 bytes each to keep, or 8.25MB. To rotate a comms key,
> we need the new key (33 bytes), a signature from the ID (64 bytes), and
> probably a timestamp (4 bytes); that's 25.25MB.
>
> That's not too bad if we rotate daily. Probably not if we rotate
> hourly...

A node's public key used for DH shared secret generation exists
independently of its channels, so I think we probably should not bind
the rotation of the key we use to talk to that node to one of its
channels. However, it does make sense to require that a node also has
at least one active channel in order for us to care about it at all :-)

The comms key approach is in line with what I was thinking as well. We
can bind the new communication key to the node's identity by showing a
derivation path from the node's (fixed) public key to the new key. So a
node wanting to rotate its communication key just sends the following:
"I am <node ID> (33 bytes), please use the key at derivation index
<index> (~4 bytes), and here is a signature (64 bytes) with which I sign
off on this rotation."
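Concretely, a rough sketch of what such a rotation message could look
like on the wire (purely illustrative Python, not a wire format
proposal; the message layout and helper names are made up, and the
HMAC "signature" is just a stdlib stand-in for a real 64-byte secp256k1
signature by the ID key):

    import hashlib
    import hmac
    import struct

    # node_id (33 bytes) || key_index (4 bytes) || signature (64 bytes)
    ROTATION_MSG_FMT = ">33sI64s"

    def rotation_digest(node_id: bytes, key_index: int) -> bytes:
        # What the ID key signs off on: which node rotates to which index.
        return hashlib.sha256(node_id + struct.pack(">I", key_index)).digest()

    def make_rotation_msg(node_id: bytes, key_index: int,
                          id_privkey: bytes) -> bytes:
        # HMAC-SHA512 happens to produce 64 bytes; a real node would put a
        # secp256k1 signature by its ID key here instead.
        sig = hmac.new(id_privkey, rotation_digest(node_id, key_index),
                       hashlib.sha512).digest()
        return struct.pack(ROTATION_MSG_FMT, node_id, key_index, sig)

    def parse_rotation_msg(msg: bytes):
        node_id, key_index, sig = struct.unpack(ROTATION_MSG_FMT, msg)
        return node_id, key_index, sig

    # 33 + 4 + 64 = 101 bytes per rotation, matching the numbers above.
    assert len(make_rotation_msg(b"\x02" + 32 * b"\x00", 7, b"id-key")) == 101

Note that, since the node ID travels with the message, a verifier knows
immediately which key to check the signature against.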
The communication overhead is identical to your proposal, but since you
send only the new key, I think in your proposal we'd have to churn
through all known node IDs to find which one signed the rotation. Or
were you also using timestamp-based derivation?

Another case we could consider is having passive rotations: when an
endpoint announces a channel's existence, it also sends its rotation
interval along. Every node then simply derives the new key and uses
that for the DH shared secret generation should it want to talk to this
node. Nodes have a switchover window in which they accept both the old
and the new key (this would be necessary in the active rotation as
well, due to propagation delays). See the P.S. below for a rough
sketch.

The passive rotation incurs no communication overhead and can be bound
to the node's channels: as long as we believe at least one of its
channels to exist, we keep rotating its key. Possibly a mix of active
and passive rotation would make sense, with the active rotation
enabling emergency rotations in case a key was compromised, but we're
in a lot of trouble then anyway :-)

> Cheers,
> Rusty.

Cheers,
Christian
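P.S.: For completeness, a rough sketch of the passive derivation with a
switchover window (again purely illustrative Python; a real
implementation would derive the comms key as a BIP32-style EC tweak of
the node's fixed public key, so that everyone can compute the public
half while only the node knows the private counterpart; the SHA256
below merely stands in for that derivation):

    import hashlib
    import time

    def epoch(rotation_interval: int, now: float) -> int:
        # Epoch numbers count rotation intervals since the Unix epoch, so
        # everyone who knows the announced interval computes the same value.
        return int(now) // rotation_interval

    def derive_comms_key(node_id: bytes, e: int) -> bytes:
        # Stand-in for a public BIP32-style derivation from the node's
        # fixed key; here simply a hash of (node_id, epoch number).
        return hashlib.sha256(node_id + e.to_bytes(8, "big")).digest()

    def acceptable_keys(node_id: bytes, rotation_interval: int,
                        window: int = 600):
        # The current epoch's key, plus a neighbouring epoch's key while
        # we are within `window` seconds of a rotation boundary.
        now = time.time()
        e = epoch(rotation_interval, now)
        keys = [derive_comms_key(node_id, e)]
        into_epoch = now - e * rotation_interval
        if into_epoch < window:                      # just rotated
            keys.append(derive_comms_key(node_id, e - 1))
        if rotation_interval - into_epoch < window:  # rotating soon
            keys.append(derive_comms_key(node_id, e + 1))
        return keys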