From joseph at lightning.network  Thu Aug 11 07:49:26 2016
From: joseph at lightning.network (Joseph Poon)
Date: Thu, 11 Aug 2016 00:49:26 -0700
Subject: [Lightning-dev] Blinded channel observation
In-Reply-To: <87inv8nk6f.fsf@rustcorp.com.au>
References: <87a8gmpkde.fsf@rustcorp.com.au>
	<20160809192814.GA22477@lightning.network>
	<877fbpps8s.fsf@rustcorp.com.au>
	<20160809222938.GA25606@lightning.network>
	<87oa50oqkp.fsf@rustcorp.com.au>
	<87inv8nk6f.fsf@rustcorp.com.au>
Message-ID: <20160811074926.GA9007@lightning.network>

On Thu, Aug 11, 2016 at 11:25:36AM +0930, Rusty Russell wrote:
> Tadge Dryja writes:
> > The method of using a revocation key is compatible with shachain/elkrem,
> > so it has log(n) storage; I'll describe what I developed, which omits
> > hashing in the commit script and uses only signature verification. If
> > Laolu has made a different key revocation scheme I'm not aware of it,
> > but please do post if so.
>
> Oh, I blamed Laolu because he told me about it; sorry for misattribution.

I came up with it a long time ago, and worked out the details/optimizations
w/ Laolu more recently (I think he told you that night when everything was
finalized). I mentioned the general construction to you a LONG time ago
too, when you were in the Bay Area, but I probably didn't explain it
properly (I was comparing it with Vanitygen, if that helps). I think Tadge
was the first to implement it, though; not sure.

> The property I was *hoping* for was the ability for Alice (and Bob) to
> independently predict each others' future revocation hashes/pubkeys.
> That would neatly allow an arbitrary number of commitment transactions
> in flight each way at once. Naively, seems like that should be
> possible.

I'm not inclined to think the added complexity is worthwhile (if it's even
necessary), but there are some things you can do if you're looking down
these paths. It's possible to get the same *bandwidth* optimization you
want, just in the opposite direction.
The idea with "predicting the future revocation hashes/pubkeys" is that
you then only need to send revocations. Conversely, it's possible to send
only revocation hashes/pubkeys and no separate revocations. In other
words, instead of predicting each other's future revocation
hashes/pubkeys, it's possible to revoke as *part of* giving the next
revocation hash/pubkey.

You can arrange something similar to a hashchain (shachain/elkrem is an
optimization of this; ignore optimizations for a second). We treat
privkey->pubkey as an elaborate hash function. If you pre-compute a chain
where each privkey produces a pubkey, and that pubkey output is then used
to create the next privkey, you have a reversed list of items (let's say
you do this 100,000 times). The final privkey->pubkey computation is the
first "revocation keypair" used. The pubkey->privkey step can be anything
you want, including hash functions if it makes you feel better (this is
the point where one can optimize). (Note: I really mean EC point here, but
it's simpler to understand it as a pubkey.)

If you want multiple in-flight, just have multiple parallel chains (a
minor increase in permanent storage of the counterparty's revocations). I
don't see any need for more than a handful in-flight. Note that this
explicitly breaks doing multiple in-flight on a single chain, since
disclosure of a pubkey is disclosure of all prior revocation states.
Essentially, when you disclose a pubkey, you are providing the next pubkey
AND revoking the prior one at the same time.

This construction is also possible using hashtree-like structures if
you're using revocation hashes instead of revocation pubkeys. For the
pubkey revocation, a nested chain of privkey->pubkeys is needed instead of
hashes, since you can't have a usable pubkey point without also getting
the corresponding private key. Not sure how useful this is, though.
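A minimal sketch of the reversed-chain construction described above,
assuming a small chain and using SHA-256 as a stand-in for the
privkey->pubkey one-way step (the post notes that step "can do anything
you want, including hash functions"); all names here are hypothetical,
not from any actual implementation:

```python
import hashlib


def derive_next(secret: bytes) -> bytes:
    # Stand-in for the privkey -> pubkey step; any one-way function
    # works for illustrating the chain structure.
    return hashlib.sha256(secret).digest()


def build_chain(seed: bytes, n: int) -> list:
    # Pre-compute seed -> H(seed) -> H(H(seed)) -> ...; elements are
    # later disclosed in REVERSE order, so each disclosure lets the
    # counterparty recompute every previously disclosed element.
    chain = [seed]
    for _ in range(n):
        chain.append(derive_next(chain[-1]))
    return chain


# Alice pre-computes, say, 100 states (the post suggests ~100,000).
chain = build_chain(b"alice-seed", 100)

# State 0 uses chain[100] (the final element), state 1 uses chain[99],
# and so on. When Alice discloses an earlier element, Bob can re-derive
# all later (i.e. previously used) elements from it, so the disclosure
# both provides the next key and revokes all prior states at once. Bob
# only ever needs to store the newest disclosed element per chain.
disclosed = chain[97]
recomputed = derive_next(derive_next(disclosed))
assert recomputed == chain[99]
```

This also illustrates why multiple in-flight states on a single chain
break: disclosing any element necessarily discloses everything after it,
so independent in-flight states need parallel chains.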
Seems like a lot of complexity for some small bandwidth savings; I'm not
really that interested in doing all that, but it's the closest I can get
to what you want. This is off the top of my head/memory, I didn't write
any notes on this, so parts of this (or the entirety) might be wrong...

--
Joseph Poon