Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Date: August 25th 2020

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1rJxVznWaFHKe88s5GyrxOW-RFGTeD_GKdFzHNvhrq-c/

Transcript completed by: Michael Folkson

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Statechains (Off-chain UTXO transfer via chains of signatures)

Slides: https://docs.google.com/presentation/d/1W4uKMJgYwb5Oxjo1HZu9sjl_DY9aJdXbIYOknrJ_oLg/

Ruben Somsen presentation on Bitcoin Magazine Technical Tuesday: https://www.youtube.com/watch?v=CKx6eULIC3A

Other resources on Statechains: https://twitter.com/SomsenRuben/status/1145738783192600576

This is a presentation on Statechains. If you have seen my previous presentations there is going to be some overlap. Hopefully this is a better explanation than what I did before; I try to make it clearer every time. There are two new things that you probably haven’t heard of that are interesting improvements. Those will be interesting for people who already know a lot about statechains. The general gist of it is that it is an offchain UTXO transfer via a chain of signatures.

My general motivation, like most of my work, is to improve Bitcoin while preserving decentralization. We will be covering an easy-to-grasp explanation of statechains, their limitations and security, and some recent developments that are interesting.

The 3 simple rules of a statechain. Basically the server signs blindly on behalf of the user. You have this server that just takes requests and then signs. It doesn’t really know what it signs. The user can transfer these signing rights. If I am the user I request a signature and then later I say “Now somebody else can sign instead of me”, some other key. All of these signatures are published in a chain, the statechain. This is all that happens server side. There is not a lot of complexity there. In fact these three simple rules are the full server implementation. For the next couple of slides we are going to assume this works and the server doesn’t misbehave which obviously is a problem but we will get back to that.
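To make the three rules concrete, here is a toy sketch in Python (the class, the `blind_sign` stand-in and all names are illustrative assumptions, not the actual server code):

```python
import hashlib

def blind_sign(key: bytes, blinded_msg: bytes) -> bytes:
    """Stand-in for a blind Schnorr signature; a real server would run the
    actual blind-signing protocol. This just hashes so the sketch runs."""
    return hashlib.sha256(key + blinded_msg).digest()

class StatechainServer:
    """Toy model of the three rules: sign blindly for the current owner,
    let the owner transfer the signing rights, publish every signature."""
    def __init__(self, server_key: bytes, owner: str):
        self.server_key = server_key
        self.owner = owner        # who currently holds the signing rights
        self.history = []         # the public chain of (owner, request, signature)

    def request_signature(self, requester: str, blinded_msg: bytes) -> bytes:
        assert requester == self.owner                 # rule 1: only the current owner
        sig = blind_sign(self.server_key, blinded_msg) # server signs blindly
        self.history.append((requester, blinded_msg, sig))  # rule 3: publish in a chain
        return sig

    def transfer(self, requester: str, new_owner: str) -> None:
        assert requester == self.owner                 # rule 2: owner hands over the rights
        self.owner = new_owner

# Alice signs message X and hands the rights to Bob; Bob signs Y and hands them to Carol.
server = StatechainServer(b"server-secret", "Alice")
server.request_signature("Alice", b"blinded X")
server.transfer("Alice", "Bob")
server.request_signature("Bob", b"blinded Y")
server.transfer("Bob", "Carol")
```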

The example would be you have Alice and Alice currently is the owner of the statechain. She has the signing rights. She is allowed to tell the statechain what to sign. She says “Sign this message X and then transfer the signing rights over to Bob.” Bob becomes the owner of the signing rights on the statechain and then Bob does the same thing. Bob says “Sign this message Y and now transfer these signing rights to Carol.” This is really all the server does. Somewhat surprisingly this is enough to enable offchain UTXO transfer.

The general implication is you can gain control over a key and learn everything it ever signed through this method because all the signatures are public. Everything the statechain ever signed can be verified. They are public but blinded so you do need the blinding key to unblind them.

The offchain Bitcoin transfer is essentially this: if you control that statechain key, if you have the signing rights, and that key signed some kind of offchain Bitcoin transaction that transfers some coins to you, and there has never been any conflicting signature from that key that sends the coins elsewhere, then effectively you have achieved an offchain Bitcoin transfer.

How does this look? First you start with Alice, who controls the statechain key and has the signing rights. Alice first creates an offchain transaction where she is guaranteed to get the Bitcoin back if she ever locks them up with the statechain. She then has this offchain transaction that you see here, an input and an output. The output can be spent by S, or by A after some kind of timelock. For a week or so the statechain is in control, and after that week Alice automatically gets control. Once she has this guarantee she is ready to say “Now I will lock up my coins with the statechain because I have this offchain transaction where I get them back.” From this point Alice can transfer the coins to Bob by creating yet another offchain transaction where now Bob gets the coins, and simultaneously the signing rights are transferred from Alice to Bob. We have one problem here which I need to get into. Now there are two offchain transactions and they are conflicting, unless you have some kind of method that allows that second transaction to occur in spite of the first transaction existing.

There are two ways of preventing this. The simplest way of doing it is having a decreasing timelock. The coins are locked up with some kind of timelock. The top transaction might become valid in 30 days but then the transaction below it becomes valid in 29 days. That guarantees that the latest transaction is the one that can be sent to the blockchain first.
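A minimal sketch of the decreasing-timelock idea, with purely illustrative numbers:

```python
# Toy illustration: every transfer uses a timelock one day shorter than the
# previous one, so the most recent state matures first. The 30-day starting
# point and one-day step are example values only.
def next_timelock(previous_days: int) -> int:
    return previous_days - 1

timelocks = [30]                      # Alice's initial exit tx: valid in 30 days
for _transfer in ["Alice->Bob", "Bob->Carol"]:
    timelocks.append(next_timelock(timelocks[-1]))

print(timelocks)                      # [30, 29, 28]: Carol's state matures first
# Once the shortest timelock approaches zero the UTXO has to go onchain,
# which is the main downside of this scheme.
```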

This method has some downsides. The preferred method is using eltoo, which requires a soft fork. A simple summary of eltoo is that it is about overriding transactions. I am sure people are aware of eltoo but because it is new technology that hasn’t been implemented yet I will go over it. Briefly, how it works is you have some kind of onchain output and then there is a transaction that spends from it. You have an output that can either be spent by that key S, which in this case signifies the statechain, or after a week Alice gets it. They can create this second transaction because S is still in control until the timelock expires. If within a week you send State 2 you can override State 1. You can do that again with State 3. Even if State 1 or 2 appear on the blockchain you can always send State 3 afterwards. One final important feature is that you don’t have to send all these states to the blockchain in order to get your money. You can skip states and go from State 0 to State 3.
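And a correspondingly tiny sketch of the eltoo replacement rule described here (toy model, not consensus code):

```python
# Toy model of eltoo replacement: an update transaction with state number N
# may spend (and thereby override) any published update with a lower state
# number, and intermediate states can be skipped entirely when settling.
def can_override(published_state: int, new_state: int) -> bool:
    return new_state > published_state

assert can_override(1, 3)       # State 3 can be sent even if State 1 hit the chain
assert can_override(0, 3)       # you can also skip straight from State 0 to State 3
assert not can_override(3, 2)   # an old state can never override a newer one
```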

This eltoo method is how we solve it in statechains, at least in the ideal setup. The limitations are that you are transferring entire UTXOs. You are not sending fractions, though you can open up Lightning channels and then send a fraction through Lightning on a statechain. There is no scripting, which is both an upside and a downside. It means there is no complexity on the server because it is not verifying anything other than a signature. What we can do is have all the scripts enforced on the Bitcoin side. You have this offchain transaction and this offchain transaction contains a script. You are just not sending it to the Bitcoin blockchain because you don’t have to. The ideal setup requires Schnorr.

Couldn’t you get the server to enforce some kind of script as well as the signature?

Absolutely, you can do so, but it comes at the downside of adding complexity. The preferable way of doing this is having the server sign blindly, so you can’t have any restrictions on what it is allowed to sign. You can’t have restrictions on something you can’t see. Maybe you can have some kind of zero knowledge proof on the blind signature or something that the server can verify. Maybe you can do something like that.

Couldn’t you just have the constraints out in the open? You tell the server “The next guy can have this money with these constraints.” When the guy comes with the correct credentials the server can check the constraints as well. The server could do a timelock.

That is absolutely true. It is possible. The server already does a couple of things. You can add to that. I don’t think it is necessary and I think it breaks the simplicity of the model. So far I haven’t really seen that being desirable. But it is certainly something you could do. You could have a complex scripting language that the server is enforcing on top of what it is already doing. That is entirely possible. There is one thing that you do need from the server: ideally you need some kind of atomic swap guarantee. The method I came up with utilizing scriptless scripts is slightly complex but kind of elegant. There are a couple of things you want the server to do but my design goal is to keep that at a minimum.

If you have blinded signatures you require client-side validation. Every user that accepts a statechain payment needs to go back and verify all the previous signatures. I will have an example later that makes clear how universal the statechain is without requiring any scripting. That doesn’t mean scripting couldn’t be added as an option, but the nice thing is you can do a lot of things without the server actually needing to do anything.
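A rough sketch of what that client-side validation could look like, assuming previous owners hand over their blinding keys and with `unblind` and `verify_sig` as hypothetical stand-ins for the real cryptography:

```python
# Hypothetical client-side check: unblind and verify every signature the
# statechain key ever made, and make sure the most recent signed state for
# this UTXO is the transaction paying us (no later conflicting signature).
def accept_payment(history, utxo_id, my_txid, unblind, verify_sig, statechain_pub):
    last_spend = None
    for blinded_msg, blinding_key, sig in history:
        msg = unblind(blinded_msg, blinding_key)
        if not verify_sig(statechain_pub, msg, sig):
            return False                      # invalid signature somewhere in the chain
        if msg["spends"] == utxo_id:
            last_spend = msg["txid"]          # remember the latest signed spend
    return last_spend == my_txid
```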

The failure modes. How can this possibly fail? It is not perfect, there is a server involved. The server could disappear. It could break or catch fire. It could get compromised. While it is running maybe somebody tries to extract the private key or starts running some malicious software. Or the server could be actively malicious from the beginning, out there as a honeypot waiting to take your coins.

The first thing we can do to improve this is take the server and turn it into a federation. Pretty straightforward. In terms of implementation complexity you now have a federation that needs to communicate with each other, which is going to be complex, but it is very simple in terms of understanding how the model works. The federation is some kind of multisig, 7-of-10 or something like that. If the server disappears, the nice thing about statechains specifically, and this doesn’t apply to federated sidechains, is that it doesn’t matter. If they disappear you have this offchain transaction that you can send to the blockchain and you get your Bitcoin back. This is really nice and improves the security by quite a bit. The second failure mode is the server or the federation getting compromised. For that there is this transitory key. So far my example has been just one key but there is a key that is also with the user. There is a trick that improves this. If the server or the federation is actively malicious, sorry for your loss. Your coins are gone. This is not a trustless system. There is still a federation involved.

Compare this to federated sidechains. A federated sidechain is generally a 7-of-10 multisig or something along those lines. 70 percent needs to sign off on something but that also means 30 percent can refuse to sign something. They can refuse to sign a peg out where you get your coins back. In the federated sidechain model your security model is that if over 30 percent of the federation doesn’t like you and doesn’t want to give you your coins, you are not getting your coins. What we’ve seen with Liquid in particular, they had this bug where there was a timeout which was supposed to be some kind of security measure for if the federation disappears. The coins go back to a 2-of-3 at Blockstream. For a while you had this issue where the coins were under control of Blockstream in a 2-of-3 with all keys they controlled. This would be completely unnecessary in the statechain model. In the statechain model not only does the majority need to sign in order to try to steal your coins but you can increase that majority to 70 percent, 80 percent, 90 percent, whatever you want. Worst case scenario, what you are trying to protect against is one of the federation members disappearing; you don’t want to lose your coins. Now what happens is if too many federation members disappear you just go onchain. This is not ideal, you still want some kind of threshold probably. You could even do a 10-of-10 if you only care about security and don’t mind being forced onchain if one of them disappears.

The transitory key T is only known by the users. This statechain federation should not be aware of this key. The onchain key on which the coins are locked is going to be a combination of the statechain key S and the transitory key T. This means custody is shared with the pool of users in a sense. There is this new thing that the Commerceblock guys who are building an implementation of statechains right now came up with.

What I was showing you earlier is a Bitcoin transfer from Alice to Bob. In my example here the key is controlled by S, the statechain. Now we change it where it is S and T. The statechain key and the transitory key that is with the users. Then when a transfer occurs Alice has to give the transitory key to Bob.

The weakness here is similar to what it was when it was just S. If somehow somebody learns the private key of both S and T they could potentially take the coins.

Bob generates and shares a key K. He tells the private key to the statechain. Originally we had S and T, which is S+T under Schnorr using MuSig. What we do is take S and subtract K, and take T and add K. Now we still have the combined key ST but the two key shares are different. Now Bob, instead of remembering T, remembers T’ which is T+K. The statechain only remembers S’ which is S-K. What this means is that when the transitory key T transferred over from Alice to Bob, Alice only knows T, Alice doesn’t know T’. So if the statechain forgets its knowledge of S and K it means that T becomes harmless. Even if Alice later hacks the statechain Alice can still not take the coins from Bob despite learning S’.
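A small numeric check of this key rotation, using toy integers modulo the secp256k1 group order in place of real MuSig key shares (the key values are illustrative):

```python
# Toy check of the statechain key rotation, treating private keys as integers
# mod the secp256k1 group order instead of real MuSig key shares.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

S = 12345          # statechain's share (illustrative number)
T = 67890          # transitory share, handed from Alice to Bob
K = 55555          # fresh key generated by Bob and told to the statechain

S_new = (S - K) % N    # statechain now remembers S' = S - K and forgets S and K
T_new = (T + K) % N    # Bob remembers T' = T + K

assert (S_new + T_new) % N == (S + T) % N   # the combined key ST is unchanged
# Alice only ever knew T, so even if she later learns S' she cannot
# reconstruct the combined secret: S' + T != S + T.
assert (S_new + T) % N != (S + T) % N
```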

This is an interesting claim. My original explanation of this was a little vague. What I call this is regulatory non-custodial. I wouldn’t say it is non-custodial in a strict sense because there is still a key and we are still relying on a federation. As I said with the security model there is a way the statechain can fail. But the statechain doesn’t know what it is signing, it only controls one of two keys and it can’t confiscate coins even if it learns the transitory key from a previous user because it changes the key share with every transfer. What this means is that honest behavior equals no custody. If the server is actively malicious they can take your coins. If they are actively doing the right thing a third party or a hacker can’t come to them and take their key and start being malicious. The only way in which malicious behavior can occur is if the server is actively taken over, actively being malicious while it is running. From a practical perspective that is quite secure. From a regulatory perspective I am hopeful that regulators would have a hard time saying that the statechain is a custodian. Whereas with something like a federated sidechain you have this multisig address and they can literally take the coins from you. That is a harder claim to make in court.

Moving over to Lightning, how does that work? You can use Lightning through statechains. Instead of transferring the ownership of the UTXO from Alice to Bob you would transfer it from Alice to Alice and Bob. It becomes a double key. Remember this is MuSig so AB is a single key. From that point on, to run the Lightning channel you don’t need the statechain. It becomes irrelevant at that point, at least until you want to transfer the UTXO again. Now you can open the channel and you can be on the Lightning Network. Alice and Bob would have some kind of separate offchain outputs whose balance they change while using the Lightning Network.

The synergy here is you can open, close and rebalance Lightning channels offchain. The issue with Lightning is that you have this throughput limitation where you can only send as many coins as there is liquidity available on the network. But you can send any amount you want; it is very flexible in that regard, the amounts are divisible. Statechains have the opposite problem. The throughput is infinite. You can open a 100 BTC statechain output and you can transfer it over to somebody else without requiring any channels or routing. But it is not divisible unless you go over and connect to the Lightning Network.

Here is a more complex example. It is very interesting. This is just a Lightning channel, no statechain involved right now. You have a Lightning channel with Alice and Bob. They both have 5 Bitcoin. They change their state and now Alice has 3 Bitcoin and Bob has 7 Bitcoin. Now Alice says “I have these 3 Bitcoin. Can I give these 3 Bitcoin to somebody else without interacting with Bob?” If they did so over the Lightning Network then Alice would have to give those 3 Bitcoin to Bob and Bob would have to give those 3 Bitcoin to Carol. Then the channel balance would be 0 and 10. What I am trying to do here is I’m trying to swap out Alice without interacting with Bob. If you interact with Bob it is simple. You can just re-open a statechain or transfer over the entire UTXO and then give control from Alice to Carol. That requires Bob’s permission. We can do this without Bob’s permission. This is kind of weird. You have a Lightning channel. Somebody you have the Lightning channel open with, this person can change identity, can become somebody else without your permission. Whether this is a good or a bad thing, maybe you want to know who you have your channel open with. I will leave that out in the open.

Alice’s key becomes the statechain key and the transitory key. What is A here becomes ST. We have ST and B, and the final output is ST, the statechain key and the transitory key owned by Alice. Alice needs some kind of guarantee that if the statechain goes away she gets her coins. There needs to be yet another transaction like this. You can combine those two transactions into a single output by adding some scripting but for simplicity we don’t do this. Now Alice hands over control of the statechain key to Carol. The latest state is then signed over to Carol. On the right hand side this is eltoo, so the latest transaction takes precedence. Carol has the final offchain transaction that she can send to the blockchain to claim these Bitcoin. What is important is she has to go and check every signature that the statechain key ever made to make sure there is no conflicting transaction and that the final state is her receiving these coins. But the final result is you can literally swap out of a contract you have with someone and somebody else can take your place. These things layer, they stack. Particularly because the statechain doesn’t need to be aware of anything. If you go back to this state, the A key can be yet another statechain key with yet another transitory key. It is a statechain inside of a statechain but Bob does not even have to be aware of that.

I think DLCs are gaining a little bit in popularity. This is an interesting way of doing DLCs. Let’s say you have a DLC style bet. DLC means you have some kind of third party oracle. The third party oracle will hand out a signature based on an outcome and you utilize that signature to resolve a bet. You could for instance bet on the BTC/USD price at some time x, let’s say 1 month from now. Halfway through, Alice and Bob have their position; Alice can switch out her position and give it to Carol for instance. This would be without requiring any interaction with Bob. If Bob has to be online you have difficulty. This enables you to have a position in any asset. If you can bet on the BTC/USD price you can have a position that is equivalent to holding US dollars. Instead of having 1 Bitcoin you would have 10,000 dollars and by the end of the month you would receive 10,000 dollars of Bitcoin. Because you are engaged in this bet you have the equivalent of dollars that is going to be paid out in Bitcoin. What you can even do, assuming the person you are paying trusts the DLC bet, the oracle and the statechain, is give somebody a portion of this bet. From the 10,000 dollars you have in this bet you can give somebody 1000 dollars by co-owning the position. You have this whole offchain system where you have these derivatives that you can give people pieces of. It is non-interactive in the sense that it is non-interactive with Bob; it is interactive with the statechain.

Here I have got a bunch of short points I will go through quickly.

You can add hardware security modules on the federation side, which makes things strictly more secure. The federation is limited; it would have to tamper with its own hardware. You could also put a hardware security module with your users. You get this transitory key to be controlled by a hardware security module. This strictly only improves the security. If the hardware security module breaks you always have this offchain transaction, which would have to be stored outside of the HSM, so you can always get your coins. This solves a problem that traditionally exists with HSM key transfer where you could potentially transfer a regular private key from one HSM to another HSM. There, if only one HSM is malicious and it was in the chain of transfers it can take all the coins. The second thing is that if the HSM breaks your coins are gone. You have no backup, but here you have this offchain backup.

Lightning channel factories: my example is Lightning with a single user but you could have Lightning with 10 users inside a single UTXO. You can swap out these users without interacting with any of the other users, just through interacting with the statechain, by having a statechain inside of a statechain.

Statechain history: you do need to make sure that the statechain does not keep double copies of a UTXO. It might pretend it has a single UTXO twice and then give one history to one person and one history to another person. As long as this UTXO does not get redeemed for a while the statechain could operate and cheat. We can get rid of that by having some kind of sparse Merkle tree where every UTXO is being recorded (a toy sketch of the idea follows below). This means that you have a proof that a UTXO only exists once inside of a single statechain.

There is a watchtower requirement. Because you have these offchain transactions they become valid after a week or whatever you decide. You need to watch the blockchain and see if a prior user tries to claim the UTXO. If they do so you need to respond to it. The nice thing with eltoo is that the fees are entirely on the user trying to cheat. That makes the whole model a lot cleaner. But there is a potential downside: if somebody is willing to send the transaction and pay the fee, you have to respond to that and also pay a fee.

You want to pay fees to the statechain entity, to the federation. First I imagined that you’d open a Lightning channel with the federation and you would pay them through that. But the interesting thing about federations is you don’t really need some kind of onchain fee structure. You can do it out of band and that makes the whole thing a lot easier. In hindsight I think this is something that Liquid… having some kind of blockchain on a federation, I think in that model you would prefer not to have the fees onchain. You would prefer to have them out of band, paid through Lightning or using Chaumian e-cash. With Chaumian e-cash you buy tokens from the server and you can redeem the tokens for services. I think that will be a pretty good model. This is something that Warren Togami pointed out to me. I am warming up to the idea now and I like it.

You could do RGB over statechains. RGB is a colored coin protocol. The problem is if you have non-fungible tokens you can’t transfer them over Lightning because they are non-fungible. You can’t route from Alice to Bob to Carol because what is inside of the Alice–Bob channel is different from what is inside of the Bob–Carol channel, because they use non-fungible tokens. You can’t do routing, but with statechains you could transfer them and still redeem them onchain.

Atomic swaps: there needs to be some kind of method where the statechain allows you to swap UTXOs on the statechain with other statechain users. You could do this through Bitcoin scripting. That would be acceptable but the problem is if the swap breaks down you are forced onchain. Ideally you would not need Bitcoin script for this but it is serviceable if you do it that way.

Finally there is the Mercury statechain which is the implementation by Commerceblock. They are cool guys, it is an interesting implementation. They tried to do a MVP version of statechains that can work today. They don’t have eltoo, they don’t have Schnorr. You have the expiring timelocks. It is a little bit more iffy in terms of who is paying fees. You don’t have the ability to open Lightning channels but you do have this UTXO transfer. I think it makes a lot of sense in terms of wanting to get the technology out there today. You need to have these trade-offs. It is not the ideal model that I described here but if you are interested in what works today I would definitely check out their work and they have some great write ups. It is all open source.
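As promised above, a toy sketch of the sparse Merkle tree idea for proving a UTXO exists only once in a statechain; the tree layout and commitment format are assumptions for illustration only:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

DEPTH = 8                  # toy depth; a real tree would be much deeper
EMPTY_LEAF = b"\x00" * 32

class SparseMerkleTree:
    """Each UTXO id hashes to exactly one leaf slot, so a single tree root
    commits to at most one ownership history per UTXO in the statechain."""
    def __init__(self):
        self.leaves = {}

    def slot(self, utxo_id: bytes) -> int:
        return int.from_bytes(H(utxo_id), "big") % (1 << DEPTH)

    def record(self, utxo_id: bytes, history_commitment: bytes) -> None:
        self.leaves[self.slot(utxo_id)] = H(utxo_id + history_commitment)

    def node(self, level: int, index: int) -> bytes:
        if level == 0:
            return self.leaves.get(index, EMPTY_LEAF)
        return H(self.node(level - 1, 2 * index) + self.node(level - 1, 2 * index + 1))

    def root(self) -> bytes:
        return self.node(DEPTH, 0)

tree = SparseMerkleTree()
tree.record(b"utxo-1", b"commitment-to-utxo-1-history")
print(tree.root().hex())   # users verify their UTXO's single slot against this root
```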

In short: offchain UTXO transfer, including the ability to open Lightning channels. You can even use statechains within channels, Lightning, DLC or anything else, and swap out users non-interactively. It is more secure than federated sidechains and it is “regulatory non-custodial”. There is a risk that the federation can take your coins if they are really actively malicious. Thank you for listening. If you are interested in my other work you can check out tiny.cc/atomicswap. There are a bunch of links there. You can email me, you can reach me on Twitter or you can ask questions right now.

With Lightning and statechains what would go on in the case where you are doing a Lightning channel over a statechain but then the statechain breaks down? Does that mean the Lightning channel would then have to go back to chain?

If the statechain stops functioning it doesn’t really matter. You don’t need the statechain to keep the Lightning channel open. The only thing you cannot do if the statechain breaks down is cut through the transactions. Theoretically you could keep the Lightning channel open and change the channel balances. At the end of the day, when you want to close your Lightning channel you would then also close your statechain channel; cooperation would save you 1 or 2 transactions. The worst case here is that the statechain disappears. Once you want to close your channel, and you don’t have to, it will cost you one or two transactions extra compared to a fully cooperative close with the statechain.

What are the requirements to get statechains, the version you are talking about? Is that ANYPREVOUT and eltoo, and then we could have the statechains that you are talking about?

Schnorr and ANYPREVOUT. ANYPREVOUT enables eltoo. Eltoo is a method for Lightning that utilizes ANYPREVOUT.

With a sidechain like Liquid you could have a Lightning Network on the sidechain and then perhaps do an atomic swap between the Lightning Network on the main network and the Lightning Network on Liquid. But this is a grey area in that it is not really separate or distinct in the same way as a Lightning Network on Liquid would be. What does it look like when you close a Lightning channel that you opened on a statechain? Is it a process of not only closing the Lightning channel but also getting out of that statechain? There are two steps rather than just closing the channel.

There are two steps. You have this channel open here where Alice and Bob have some coins. Let’s say Alice spends all her coins, she gives all her coins to Bob. What they can do is go back to this state and now Alice and Bob transfer over the coins to Bob. It becomes a regular statechain full UTXO with one owner except there are two keys. If you are cooperative Alice will back out of her key and give Bob full control by doing yet another transfer here from Alice and Bob to Bob.

If you are closing a Lightning channel on a statechain you close the Lightning channel and then get out of the statechain? You wouldn’t be able to get out of the statechain into a normal Lightning channel that is onchain?

Yes you can but it may be less efficient. You have this offchain transaction with which you can always exit out of the statechain. You can send that onchain and once you do so you are out of the statechain and you are onchain with a Lightning channel. The ideal way of doing so is to cooperatively exit out of the statechain: with another signature from the statechain you exit out without it affecting your Lightning channel. Because of the way SIGHASH_ANYPREVOUT works, even if the transaction changes, as long as it creates the same output all your transactions that build on top of it remain valid. You can close the statechain channel while keeping the Lightning channel open without any additional work.

If you had a long timelock that is going to get in the way of you settling an inflight HTLC. It could be very problematic, the relative timelocks between each hop. You’d probably have to take into account how long it takes the statechain to get onchain?

I think that doesn’t matter. This setup where you have this eltoo transaction in the middle and then you have the Lightning channel. This is literally what eltoo looks like. Despite us using a statechain in the middle the actual transaction structure does not change. Any of the issues you are describing are issues with eltoo.

That’s exactly what we were talking about last month, this problem with eltoo. The time it takes to settle to the right state means you have to account for that in the timelocks between each hop. AJ Towns has a proposal to fix that with eltoo. I think the same rule applies here; it is an issue with eltoo. If you put the statechain on top of it, it exacerbates it.

I don’t think it exacerbates it but if the issue exists in eltoo it exists here. I would have to look into what AJ Towns is suggesting. I would assume that it wouldn’t really affect statechains.

You have got to get the HTLC onchain in order to redeem it with the secret before the other guy gets the refund. You have to have it onchain: “Here’s my secret.” But while I am waiting for the statechain the absolute timelock in Lightning gets pushed back and back.

With the channel counterparty changing, what would the motivation be to be transitioned into a Lightning channel that is on a statechain rather than just opening a channel normally onchain without being on a statechain? Fees? Do you get access to the Lightning Network with lower fees?

It makes more sense in different scenarios. For instance the Lightning channel factory where you have ten users and one of these users wants to swap out of it. Normally what you would need is all of the other 9 users to interact with you in order for the 10th person to swap out for somebody else. Now, because you can do it non-interactively by communicating with the statechain, you can do so without the permission of the other 9 users. It goes from interactive to non-interactive. The second example would be the DLC style bet thing where moving in and out of a position is interesting. The price of that position changes over time. You could sell it midway at the price that it is. Normally you would be held hostage by whether or not your counterparty inside of the channel lets you move out. They can prevent you from getting out. Here you don’t need their permission. You can move out and exit the position. It makes it a lot more practical to have these bets and very smoothly move in and out of them, sell these bets to other people without requiring all these people to be online. Those are the benefits. It is quite significant that you are able to do this without requiring the help of your counterparty.

In the case of a counterparty swapping into a Lightning channel the counterparty swapping in needs to trust the current state of the channel? In the case of Bob swapping out for Carol, Bob and Carol need to trust each other?

That’s a good point. Do you even want this? In Lightning Alice and Bob open a channel and what we are steering towards is that Alice and Bob kind of trust each other or at least think the other person is not completely malicious. Now Alice can switch over and become Carol. Does Bob trust Carol? Maybe not. That’s certainly an issue. The funny thing is that it is not preventable. This just works. You can do this on Lightning; once we have Schnorr and statechains people can start doing this. Bob has no say over it. Bob can’t recognize a statechain key from a regular Alice key. It is going to be a thing. Is it good? Is it bad? I don’t know. I don’t necessarily think it is good. The trust assumptions are that for Carol to move in, Carol needs to be willing to have a channel with Bob. Not trust Bob, but be willing to have a channel with him. She has to trust the statechain, and if you have some kind of DLC bet going on you also have to trust the oracle. There are a couple of things but they are all very separated so that is nice.

c-lightning 0.9.0

https://medium.com/blockstream/new-release-c-lightning-0-9-0-e83b2a374183

We rewrote our whole pay plugin. c-lightning is evolving into this core plus all these plugins that do different things. The pay command, which is important, is a plugin. You can do a lot of interesting stuff. The one that broke the camel’s back is this multipart payment idea. You can split payments into multiple parts to try to get them all to the destination at the same time. As you can imagine there are an infinite number of ways you could do that. We didn’t want to put that in the core.

Christian Decker did some research by probing the network and came up with a number of 10,000 sats. If you are trying to send 10,000 sats through most of the network you’ve got an 83 percent chance of it working. It declines pretty rapidly after that. The obvious thing with multipart payments is you try sending it, and if you get a channel saying “I don’t have capacity” you try splitting it in half. We are a bit more aggressive than that. If it starts out really big we try to divide it into 10,000 sat chunks and send those. This works way better in real life. It turns out that the release is a bit aggressive on that. 10,000 sats is roughly a dollar. If you try to send a 400 dollar payment it is pretty aggressive at splitting it.

One of the cool things is that it splits it into uneven parts. That is really nice for obfuscating how much we are sending. People tend to ask for round amounts. They ask for 10,000 sats or something. If you see 10,003 sats go by you can pretty easily guess how many hops it has got left to go before it gets to its destination. We would overpay for exactly that reason. We would create a shadow route and add some sats; the person gets free sats at the end. Even so it is still pretty obvious because we don’t want to add too many sats. With splitting you can split onto rough boundaries and there is no real information if you only see one part of the payment.

It was a complete rework of our internal pay plugin. Christian Decker was the release manager and he decided, with the agreement of the rest of us, that it should definitely go in the release. There were four release candidates because we found some bugs. It worked in the end, multipart payment worked. Any modern client will issue an invoice that has a bit in it to say “We accept multipart payments.” Multipart payments are pretty live on the network at the moment which is pretty nice. That was the big thing, a big rewrite for that.
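As a rough sketch of the splitting idea (this is not the actual pay plugin algorithm; only the 10,000 sat target comes from the talk, the jitter is made up):

```python
import random

MPP_TARGET_SATS = 10_000   # figure quoted in the talk: ~83% success at this size

def split_payment(amount_sats):
    """Split a payment into roughly MPP_TARGET_SATS chunks with uneven sizes,
    so observers cannot infer the original round amount from any one part."""
    parts = []
    remaining = amount_sats
    while remaining > 0:
        # jitter each chunk by +/-25% so the parts are not all identical
        chunk = int(MPP_TARGET_SATS * random.uniform(0.75, 1.25))
        chunk = min(chunk, remaining)
        parts.append(chunk)
        remaining -= chunk
    return parts

parts = split_payment(4_000_000)     # ~$400 at roughly $1 per 10,000 sats
assert sum(parts) == 4_000_000
print(len(parts))                    # around 400 parts in this toy model
```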

There was also the coin movement stuff. This came out of Lisa (Neigut) looking at doing her tax in the US. You are supposed to declare every payment you make, incoming and outgoing, and theoretically all the fees that you charge for routing things. You should mark the value at the time you received them and stuff like that. Getting that information out of your node is kind of tricky. It is all there but having one nice place where you can get a ledger style, this amount moved in, this amount moved out and here’s why view was something that turned out to be pretty painful. She wrote this whole coin movements API. Everywhere in the code that we move coins whether it is on Lightning or onchain it gets accounted for. You can say “I paid this much in fees.” She has also got a plugin to go with that that stamps out all the payments. That’s still yet to be released because there are some issues with re-orgs and stuff that she wants to address. I am looking forward to that. Her next tax time, she will just be able to dump this out, hand it to her accountant with all the answers.

We did a whole internal rework. PSBTs, Partially Signed Bitcoin Transactions, are the new hotness. We previously had them in a couple of APIs. You could get a PSBT out and give a PSBT in. But we re-engineered all the guts of c-lightning to use them all over the shop. That continues in the next release. We completely re-engineered some things, moved them out to plugins and deprecated them, because it is all PSBTs internally. The old things that gave you transactions and stuff are now all new APIs using PSBTs. Get to be one of the cool kids. It makes life so much easier to deal with other wallets, hardware wallets, stuff like that. We are pretty much at the point where you can throw a PSBT at something and it will do the right thing. It will sign it, it will combine it with other things and stuff like that. It is particularly powerful for dual funding where you want to merge PSBTs. PSBTs have been great. That is not really visible in that release but that was a huge amount of work to rework everything. It was a pretty solid release, I’m pretty happy with that. We thought it was worth bumping the version number. I think the 0.9.0 release name we gave it was “Rat Poison Squared on Steroids”. It was named by the new contributor who contributed the most. That’s the c-lightning release.

PSBTs are ready to be used on Lightning? All the implementations are either using or thinking about using PSBTs? There are no niggly issues that need to be sorted out with PSBTs with the Lightning use case?

Everything is ready to go. I didn’t find any horrific bugs. We are dog fooding them a bit more, that is useful. There was some recent PSBT churn because of the issue with witness UTXOs and non-witness UTXOs, this issue of people worrying about double spending with hardware wallets. There was some churn in the PSBT spec recently. There is still a bit of movement in the ecosystem but generally it is pretty well designed. I expect as people roll out you generally find that you interoperate, it just works with everything which is pretty nice. For c-lightning we are in pretty good shape with PSBTs.

Announcing the lnprototest Alpha Release

lnprototest blog post: https://medium.com/blockstream/announcing-the-lnprototest-alpha-release-f43f46f2c05

lnprototest on GitHub: https://github.com/rustyrussell/lnprototest

Rusty presenting at Bitcoin Magazine Technical Tuesday on lnprototest: https://www.youtube.com/watch?v=oe1hQ7WaX4c

This started over 12 months ago. The idea was we should write some tests that take a node and feed it messages and check that it gives the correct responses according to the spec. It should be this test suite that goes with the spec. It seemed like a nice idea. It kind of worked reasonably well but it was really painful to write those tests. You’d do this and then “What will the commitment transaction look like? It is going to send the signatures …” As the spec evolved there were implementation differences which are perfectly legitimate. It means that you couldn’t simply go “It will send exactly this message.” It would send a valid signature but you can’t say exactly what it would look like.

What we did find were two bugs with the original implementation. One was that c-lightning had stopped ignoring unknown odd packets, which was a dumb thing that we’d lost. Because you never send unknown packets to each other a test suite never hit it. You are supposed to ignore them and that code had somehow got factored out. The other one was the CVE of course. I was testing the opening path and I realized we weren’t doing some checks that we needed to check in c-lightning. I spoke to the other implementations and they were exposed to the same bug in similar ways. It was a spec bug. The spec should have said “You must check this” and it didn’t. Everyone fell in the same hole. That definitely convinced me that we needed something like this but the original one was kind of a proof of concept and pretty crappy.

I sat down for a month and rewrote it from scratch. The result is lnprototest. It is a pure Python3 test system and some packages to interface with the spec that currently live in the c-lightning repository. You run lnprototest and it has these scripts and goes “I will send this and you will send back this.” It can keep state and does some quite sophisticated things. It has a whole heap of scaffolding to understand commitment transactions, anchor outputs and a whole heap of other things. Then you write these scripts that say “If I send this it should send this” or “If I send this instead…”. You create this DAG, a graph of possible things that could happen, and it runs through all of them and checks what happens. It has been really useful. It is really good for protocol development too, not just testing existing stuff. When you want to modify the spec you can write that half and run it against your own node. It almost inevitably finds bugs. Lisa (Neigut) has been using it for the dual funding testing. That protocol dev is really important. Both lnd and eclair are looking at integrating their stuff into lnprototest. You have to write a driver for lnprototest and I have no doubt that they will find bugs when they do it. It tests things that are really hard to test in real life, things that don’t happen like sending unexpected packets at different times. There has been some really good interest in it and it is fantastic to see that taking off. Some good bug reports too. I spent yesterday fixing the README and fixing a few details. The documentation lied about how you’d get it to work. That is fixed now.

This testing suite allows people to develop a feature… would that help them check compatibility against another implementation for example?

Yes. It is Python, it is pretty easy to hack on. You can add things in pretty easily. You don’t have to worry about handling all the corner cases. You write your scripts and check that your implementation works. For example I used this to develop the anchor outputs stuff. I took the anchor spec, I implemented it in lnprototest first and then I implemented it in c-lightning. The c-lightning one took a lot longer. It took me an afternoon in lnprototest. It took me several days in c-lightning. Once we had c-lightning working with the lnprototest side, the first time I attached it to an lnd node in the field it just worked. It is definitely useful as a test suite but also for developing, when you want to add something to the protocol. It is a lot easier to hack it into lnprototest. “Here’s a new packet”, send it, see what happens. This is the response you should get. It is way easier than modifying a real implementation to do it. It is a really good way of playing with things.

Can you do one for the Bitcoin spec? (Joke)

With Bitcoin we don’t have a spec. The code is the spec. There are tonnes of unit tests and functional tests on Core. The test framework sets up a stripped down Bitcoin node so you can do testing between your node and this stripped down Python node. In this case the lnprototest is setting up a stripped down Lightning node that is coded up in Python and then you are interacting in a channel between your main c-lightning node or whatever implementation signs up to use lnprototest, with that stripped down Python lnprototest node?

The Python implementation isn’t even really a node. It understands how to construct a commitment transaction. It has got a helper to do that. “I send this and by the way my commitment transaction should now have a HTLC in it. Add a HTLC to my commitment transaction.” They send something and you go “Check that that is a valid signature on the commitment transaction.” It goes “Yes it is.” It has enough pieces to help you so you don’t have to figure things out by hand. It has a lot of stuff like knowing what a valid signature is rather than encoding the signature. What it is doing under the covers is reaching into the implementation and grabbing the private keys out. It knows the private keys of the other side it is testing. That simplifies a whole pile of stuff. It can say “I know what the 13th commitment secret is going to give, I know what that is. I know what it should be.” It is a much simpler implementation to play with.

Then you have these scripts that say “I should open a connection. It should reply with init. I should send init. I send open channel, it should say accept channel. Take those fields and produce a commitment transaction as agreed. Give me what the signature should be on the first one. I will send that across.” They send the reply and I go “Check that that is what I expect? Yes it is.” It has some helpers to construct these things as you go but you end up writing the test to say “Update the state. You should match what they send.” We have enough infrastructure to build commitment transactions and stuff like that. But we don’t have any logic in there to negotiate shutdown for example. There is none of that logic. That would be a script that says “If I offer this they should offer this” and stuff like that. There is a whole heap of scaffolding to help you with the base construction of the protocol. It does all the encrypted communication stuff. You just say “Send this message” and it worries about encrypting it as it needs to be on the wire, authenticating and all that stuff. You end up writing a whole heap of test cases that say “If I send this they should send this.” There are two parts. There is the scaffolding part that has the implementation bits we need to make it useful. Then there are all these tests that say “If I do this they’ll do this.”
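For flavour, a hypothetical lnprototest-style script along the lines described above; the class names, arguments and fields here are assumptions for illustration and may not match the real lnprototest API:

```python
# Hypothetical lnprototest-style script (imports, argument names and the
# open_channel fields are illustrative assumptions, not the exact API).
# The pattern is the one described above: "if I send this, they should send this".
from lnprototest import Connect, ExpectMsg, Msg  # assumed imports

def test_open_channel(runner):
    # Open a connection; the node under test should reply with init,
    # we send our own init, then walk the start of the channel-open handshake.
    script = [
        Connect(connprivkey='03'),
        ExpectMsg('init'),
        Msg('init', globalfeatures='', features=''),
        Msg('open_channel', chain_hash='...', funding_satoshis=100000),
        ExpectMsg('accept_channel'),
    ]
    runner.run(script)
```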

Have you had to do much interoperability testing with some of the lesser known Lightning clients like Electrum or Nayuta? Maybe they could use this testing suite as well.

If you are trying to bring up a new implementation, bringing it up to par with the others, this is invaluable. It is a stepping stone. “I sent this but I was supposed to send this and I didn’t. What do we disagree on?” You might have found a bug in lnprototest. You know that at least one implementation passes lnprototest. Either there is a bug in c-lightning or I am doing something wrong. It is a much more controlled environment. We could test things like blockchain reorganizations to whatever depth. Stuff like that in canned tests is incredibly useful.

When you found that channel funding bug Rusty, you found that just by implementing it in Python rather than from a particular test failing in lnprototest?

I was writing the test and I went “I got that wrong and it still worked. Why did that work?” Then I went “Ohhh. That is bad.” I jumped on internal chat. “Is there somewhere I’m missing where we check this that we are supposed to check?” No, it was a real bug. I immediately back channel pinged ACINQ and Lightning Labs, “I suspect you want to check if you are doing this as well.” I was writing the test, realized that I’d screwed up the test and it shouldn’t have worked but it did. The act of testing it was what drove me down this path.

If you are testing new features and you want to use lnprototest to test those new features you’d have to reimplement in Python?

Yes and no, it depends on what you are testing. You can tell lnprototest “Send this raw packet”, it doesn’t have to understand what it is doing. All the lnprototest message stuff is generated from the spec. You patch the spec, you run the generator thing and it generates the packets for you.

The spec is words, a lot of it is. How do you get code out?

The way the messages are implemented in the spec is that they are machine readable as well as human readable. We’ve always done that on c-lightning. There is a script in the spec itself that gives you a nice CSV file, a comma separated values file, describing all the packets in the spec. That feeds into the c-lightning implementation. In c-lightning we have something that turns that into C code. For lnprototest we turn those into Python packages. It reads those Python packages and generates all the types. If you are just adding a message you can edit the spec, rebuild and you’ll get your new message type. It has no idea what that message type is supposed to do but you could make lnprototest send that message type and expect whatever response. “If I send this you should send this.” Then test it against your node and see if it works. Obviously, with anchor outputs, if you offer anchor outputs and they offer anchor outputs then it changes the commitment structure in very well defined ways. Internally lnprototest knows how to build the commitments. I added a new flag and looked through the diff of the spec to see what they changed. If anchor outputs here, if anchor outputs here… It was a couple of hours work maximum to get that working.
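A minimal sketch of that generation step, using a simplified stand-in for the spec’s machine-readable CSV (the real format and the generated packages are richer):

```python
import csv, io

# Simplified rows resembling the spec's machine-readable message definitions;
# the real extraction script produces more columns and many more messages.
SPEC_CSV = """msgtype,open_channel,32
msgdata,open_channel,chain_hash,chain_hash,
msgdata,open_channel,funding_satoshis,u64,
"""

def load_messages(text):
    """Turn the machine-readable rows into {name: {'type': n, 'fields': [...]}}."""
    messages = {}
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        if row[0] == "msgtype":
            messages[row[1]] = {"type": int(row[2]), "fields": []}
        elif row[0] == "msgdata":
            messages[row[1]]["fields"].append((row[2], row[3]))
    return messages

print(load_messages(SPEC_CSV))
# {'open_channel': {'type': 32, 'fields': [('chain_hash', 'chain_hash'),
#                                          ('funding_satoshis', 'u64')]}}
```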

It is not a replacement for the cross implementation testing for new features. It is additional testing and assurance. You still want to test those full implementations. Otherwise if you are reimplementing it in Python you could just give it to Electrum and Electrum will have all the new features first. Because that is written in Python.

I looked at some of their code actually. Again they have got too much stuff. We need this thin amount of stuff to implement things, and a whole heap of stuff to parse messages generically etc. Their stuff has too much in it. Like any implementation there is a whole heap of other stuff you have to worry about like onchain handling and timeouts. lnprototest doesn’t care because it is not dealing with real money. It is a whole other ball game. I think every implementation will end up using lnprototest at some stage, which will make it much easier. The idea is eventually you’ll patch the spec. I’ve got this cool new thing, here is the spec patch, here is the new lnprototest test. You’ll run those together against your implementation. You’ll be 90 percent of the way there. At least you are compatible with lnprototest so the chance of you being compatible with each other is now greatly increased.

They just need a runner, the new implementations, if they want to use lnprototest?

We’ve got our DummyRunner that passes all the tests. It always gives you what you expect. They need their own runner. Take the c-lightning one, it fires up a bitcoind in regtest mode and fires up a c-lightning node. It is kind of dumb. You have to have c-lightning run in developer mode because it uses some weird hacks. We are slowly pulling those out. Ideally you’d be able to run it in any off the shelf implementation. You have to get your hands dirty a bit and write some Python for your implementation, that is true.

Dynamic Commitments: Upgrading Channels Without Onchain Transactions (Laolu Osuntokun)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html

We had this item here on upgrading channels without an onchain transaction. This is a roasbeef post. It is talking about how you could upgrade a channel. One example was around changing the static remote key.

At the moment you set up your channel and that is it. It is how you built it. We chose not to put any upgrading in the spec. Early on it was just “Close the channel and start again.” We’ve had two new channel types so far. One is the static remote key option which now, as far as I know, every implementation supports. The only time you will not get a static remote key channel is if you’ve got an old one that you opened beforehand. Anchor outputs is the new hotness that is coming through. You have now got three kinds of channels. What type of channel you get depends on what you both supported at the time you opened it.

What would be cool is to be able to upgrade these things on the fly. You can always close and open again. We have this idea of splice that is not currently in the spec but it is on Lisa’s plate after she finishes dual funding. It is very similar. We can negotiate to spend the commitment transaction in some way that changes the channel. You might want to splice funds in. You might want to splice funds out. Have a new commitment that spends the commitment transaction and opens a new channel atomically. That is still an onchain transaction. What if you wanted to change it on the fly? For the first 100 commitments of this channel it was a vanilla one but after that point we both agreed that it would be option static remote key. We would use the modern style. That is perfectly possible. It is not as good from a code maintenance point of view. You still have to be able to handle those old channels because they could still drop an ancient commitment transaction on you. They drop commitment transaction 99, you need to be able to penalize that. You still need some code there to handle the old ones. But ideally at some point in the future if we have this dynamic upgrade we can insist everyone upgrades and then six months later we go “If anyone hasn’t upgraded their channels, when you upgrade to the next version of c-lightning it will unilaterally close those old ones.” We have removed the code that can do all that stuff. This is a nice simplification.

This proposal here is the set of messages that you would send to negotiate with your peer that you are both ready to upgrade this channel on the fly. It went through a couple of revisions based on feedback from the list. The consensus in the end was that we would block the channel: once you’ve started this process we are going to drain out all the updates. That is the same thing we do on shutdown already. When you shut down a channel any outstanding HTLCs have to be settled before it finally gets closed by mutual close. We would use the same kind of negotiation. If I want to upgrade the channel the other side would go “Great. We will upgrade the channel as soon as all the HTLCs are gone.” In the normal case this would be immediate but it could take a while. You only really have to worry about the case of upgrading empty channels. From that point onwards you’d be using new style not old style. This is definitely something people want.

Static remote key is good because it is way nicer for backups. Because we used to rotate all the keys prior to static remote key, if you somehow lost your state you could forget how to spend your own output without your peer’s help. Static remote key changed that. You don’t need the peer. If a commitment transaction from the future appears on the blockchain you are kind of screwed because it means you have lost track of things. At least now with static remote key you would be able to get your own money back without having to ask anyone: “I don’t know what the tweaking factor was for that commitment. Could you tell me?”

Anchor outputs does even more. It makes it possible to lowball your fees and use the anchor outputs and child-pays-for-parent to push the commitment transaction into a block. That gets around the problem that we have at the moment, which is that you have to put enough fees in your commitment transaction to pay for it later when you are going to use it. You have no idea when that is so it is an impossible problem. Anchor outputs provide a way, not a perfect way, to top up the fees afterwards. This means you can go lower on your fees which I think is good for everyone. It is worth bearing in mind that you only care about what those fees are like if you get a unilateral close. If it is a mutual close you negotiate fees at that point. But should somebody need to go onchain it is nice if they are paying less fees than they are at the moment. Everyone overbids on fees at the moment because you don’t know when a fee spike will happen. Knowing my luck it will be the moment when you want to close the channel. We go for a multiple of the current fee rate.

On the anchor outputs, Lisa was talking about this fee concern because it makes it more costly. What is the thinking on that? Is it worth it?

There are two things that make it more costly. One is the scripts involved on a couple of the outputs are slightly bigger so it is a slightly bigger transaction. Each HTLC output is a little bit bigger. But the other thing is that the funder is also paying for these two 330 satoshi outputs, one for each side to use to push the transaction if necessary. It is 660 sats more expensive to start with. The flip side of that is you can lowball the fees. It is almost certainly worth it. We have implemented everything on c-lightning but it is currently behind a configuration option, our experimental “if it breaks you get to keep both pieces” configuration. We haven’t turned that on by default for this release because we haven’t implemented child-pays-for-parent. So anchor outputs is all loss right now. It is just more expensive and it doesn’t help you in any way. As soon as we implement child-pays-for-parent it would be great to get it in there because you would probably come out ahead most of the time. Even though you are spending extra sats on the anchor outputs you won’t have to overbid on fees. This proposal here would let us then go “We see a path here to deprecating non-anchor outputs.” In the future everything would be anchor outputs. We do a release which has the upgradeability and then 12 months later we would start giving warnings. Then anchor outputs would become compulsory. If you happen to have any clients that haven’t upgraded and you haven’t spoken to them in months we would just close their channels. We could remove a whole heap of if statements in the code and our test matrix. Having to test against the original channels, static remote key channels and anchor output channels multiplies your test matrix out. You can simplify things if you start pruning some old stuff out.
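Back-of-the-envelope arithmetic for the trade-off being described; the 660 sats comes from the talk, everything else here is a made-up example:

```python
# Illustrative arithmetic only: the two 330-sat anchors are from the talk;
# the commitment size and fee rates are made-up example values.
ANCHOR_COST_SATS = 2 * 330       # funder pays for both anchor outputs

commitment_vbytes = 200          # rough size of a small commitment tx (example)
overbid_feerate   = 50           # sat/vB you might lock in "just in case"
lowball_feerate   = 5            # sat/vB you can risk if CPFP can top up later

without_anchors = commitment_vbytes * overbid_feerate                     # 10000 sats
with_anchors    = commitment_vbytes * lowball_feerate + ANCHOR_COST_SATS  # 1660 sats
print(without_anchors, with_anchors)
# The saving only materializes once child-pays-for-parent on the anchor is
# implemented; until then the 660 sats is pure extra cost, as noted above.
```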

It might be a good way to clear out the zombie channels. I hear there are a lot of them out there.

It turns out there are. We recently had a change to the spec. You have to update your channels every two weeks; after two weeks you could be forgotten if you haven’t updated. The spec said either end has to update and we just changed it so that both ends have to update. The reason for that is that you get some node that has fallen off the network but its peer keeps refreshing the channel every two weeks, keeping it alive when we might as well forget it because it is game over. That spec change has gone through and the new release of c-lightning 0.9.1 will implement that. It is in the tree now. That will help clear out some stuff as well.
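
The rule change can be sketched as follows. The two-week window comes from the discussion above; the function and its arguments are illustrative only.

```python
# Sketch of the gossip pruning rule described above: under the old rule a
# channel survived if *either* end kept refreshing its channel_update;
# under the new rule *both* ends must have refreshed within two weeks.

import time

TWO_WEEKS = 14 * 24 * 3600

def keep_channel(last_update_side_a: int, last_update_side_b: int,
                 now: int, both_required: bool = True) -> bool:
    fresh_a = now - last_update_side_a <= TWO_WEEKS
    fresh_b = now - last_update_side_b <= TWO_WEEKS
    return (fresh_a and fresh_b) if both_required else (fresh_a or fresh_b)

# A zombie channel: one side fell off the network a month ago, the other
# side keeps refreshing. The old rule keeps it alive, the new rule forgets it.
now = int(time.time())
a, b = now - 30 * 24 * 3600, now - 1 * 24 * 3600
print(keep_channel(a, b, now, both_required=False))  # True  (old rule)
print(keep_channel(a, b, now, both_required=True))   # False (new rule)
```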

This upgrading channels, this works for anchor outputs because you are changing the commitment transaction. You’ve got an open channel, the funding transaction is onchain and you can update the commitment transactions. For other things, there are so many potential channel update proposals swirling around. There are generalized payment channels using Schnorr MuSig (that would be a funding transaction change so you wouldn’t be able to use this for that), there is eltoo, there are PTLCs. I think for most of those you wouldn’t be able to use this because they either involve the funding transaction or it is a completely different configuration.

If your funding output is still a 2-of-2 you are good. But if you have to change the funding transaction then you are going to be going onchain anyway. The way I think we will end up doing that in future is with a splice. The splice proposal says “I propose a splice.” We both get to go to town on proposing what we want to change about that commitment transaction. That is useful. You might have a preference. You might have a low watermark and a high watermark. If you get more than a certain amount in fees in the channel “Sometime this week I’d like to move some off to cold storage. I’d like to splice them out.” Then you have a high watermark where you are like “This is getting ridiculous. I really have to do this.” If I am at the low watermark and you go “I would like to splice” then I will jump on that train. “While we are there I would also like to splice.” Throw this input and this output in and whatever. Opportunistically do these things. The splice negotiation will also then be an opportunity to do an upgrade. In fact I think that we would make the splice an implicit upgrade almost. Why wouldn’t you at that point? If we are going to splice, if we are going to spend a transaction let’s also do an upgrade. This is on the assumption that every improvement is considered an upgrade. If we ever ended up with two really different species of channels that were useful for different things then you might end up with two completely different ones. That is the kind of complexity I am hoping to avoid. Splicing will be the upgrade mechanism for anything that has to change the funding transaction. The cool thing about splicing is you can forget all your previous revocation things after. Once the splicing transaction is buried deeply enough you can forget the old state because it can never happen now. Those old commitment transactions are completely dead. That is nice. That lets you transfer to eltoo where you have only got a single commitment transaction you have to remember in a very clean way anyway. Even if you could do it with 2-of-2 you would want to splice in I think for eltoo.
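
The low/high watermark policy could look something like the sketch below. The thresholds and names are made up for illustration and are not part of any spec.

```python
# Illustrative sketch of the low/high watermark splice-out policy described
# above. Thresholds and names are invented; nothing here is spec behaviour.

LOW_WATERMARK = 1_000_000    # sats: "I'd like to splice out sometime this week"
HIGH_WATERMARK = 5_000_000   # sats: "this is getting ridiculous, do it now"

def splice_decision(local_balance_sats: int, peer_proposed_splice: bool) -> str:
    if local_balance_sats >= HIGH_WATERMARK:
        return "initiate splice-out to cold storage"
    if local_balance_sats >= LOW_WATERMARK and peer_proposed_splice:
        # Opportunistically piggyback our splice-out on the peer's splice,
        # sharing the cost of the single onchain transaction.
        return "join peer's splice and add our own output"
    return "do nothing"

print(splice_decision(1_200_000, peer_proposed_splice=True))   # join peer's splice
print(splice_decision(6_000_000, peer_proposed_splice=False))  # initiate ourselves
print(splice_decision(500_000, peer_proposed_splice=True))     # do nothing
```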

Let’s say hypothetically that Taproot comes in, then we get ANYPREVOUT. All the channels that currently exist, they would be upgraded to eltoo channels using this splice method?

Yes you would have to use splice to upgrade those. We will get Taproot first. Would you want to upgrade? Maybe because now you start to look like single sig. It is all single sig, it is all cool. You probably want to do it just for that. On the other hand if you are a public channel anyway there is little benefit to doing that onchain transaction just for that. Maybe you don’t bother? If you are going to splice anyway then let’s upgrade, let’s save ourselves some bytes. You probably wouldn’t do it just in order to upgrade. If we get ANYPREVOUT and you’ve got eltoo then there’s a convincing reason to upgrade. No more toxic waste for your old states.

People can just opportunistically wait until it is down to 1 sat per byte. They could wait for a cheap time and do it then.

Splicing is kind of cool because the channel doesn’t stop while you are splicing. After the splice every commitment transaction is against both the spliced one and the old one, until the spliced one is buried to a sufficient depth. Your channel doesn’t stop. You can lowball your splice and just wait for it. It is annoying if you change your mind later and now you really want to splice it in. You’d have to child-pays-for-parent it. We are probably not going to do a multilayer splice and let you have this pile of potential changes to the channel piling up. You can keep using the channel while the splicing is happening. You can absolutely lowball on your fees.
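
A small sketch of why the channel keeps running during a splice: every state update is committed against both the old and the new funding output until the splice is buried deep enough, then the old one is dropped. The class and the burial depth here are illustrative, not spec values.

```python
# Sketch of channel operation during a splice, as described above: every
# state update is signed against both the old and the new funding output
# until the splice transaction is buried deeply enough to be final.

SPLICE_BURY_DEPTH = 6   # illustrative; the required depth is a policy choice

class SplicingChannel:
    def __init__(self):
        self.funding_outputs = ["funding_old"]

    def start_splice(self):
        self.funding_outputs.append("funding_new")

    def commit_state(self, state_num: int) -> list[str]:
        # While the splice is unconfirmed we need a commitment spending each
        # live funding output, so the channel never has to stop.
        return [f"commitment_{state_num}_spending_{f}" for f in self.funding_outputs]

    def on_splice_depth(self, depth: int):
        if depth >= SPLICE_BURY_DEPTH and "funding_old" in self.funding_outputs:
            # Old commitments can never confirm now; drop the old funding
            # output (and its revocation state) entirely.
            self.funding_outputs.remove("funding_old")

ch = SplicingChannel()
ch.start_splice()
print(ch.commit_state(101))   # two commitments: one per funding output
ch.on_splice_depth(6)
print(ch.commit_state(102))   # back to a single commitment on funding_new
```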

Is what is likely to happen post Taproot that unless you do this splicing you will keep your old channels open, and any new channels you open will use the latest MuSig, Taproot stuff?

That is happening with existing upgrades. Modern channels are all static remote key, old ones aren’t. We would end up with the same thing. We have to change the gossip protocol slightly because we nail down the fact that there is a 2-of-2 in the gossip protocol. I want to rework some of the gossip protocol anyway. There will be a gossip v2 to match those. At some point in the far, far future we will deprecate old channels and they will get spliced out or die. That would be a long way away. We will have two types of gossip messages, an old one and a new one, for a long time.

Does the gossiping get complicated? I have got this channel on this version, this channel on this version and this channel on this version.

We have feature bits in the gossip so it is pretty easy for you to tell that. Most of this stuff, if Alice and Bob have a channel and they are using carrier pigeons or psychic waves or whatever to transfer funds it doesn’t matter to you. That’s between Alice and Bob. It is not externally visible. Some changes to channels are externally visible. If you are using PTLCs instead of HTLCs that is something you need to know about. But for much of the channel topology they are completely local. The gossiping doesn’t really need to know. Where the gossiping needs to know is we currently say “Here are the two keys” and you go and check that the transaction in the Bitcoin blockchain actually does pay to those keys. If you have a different style of funding transaction that will need to change. In the case of Schnorr there is only one key. “This is the key that it pays to” and yes I can tell that. You don’t need to tell it that, you just need to use that key to sign your message and it can verify it. You can literally pull the key out of the output which is really nice. The gossip messages get smaller which is a big win. Plus 32 byte pubkeys. We shave another byte off there. It is all nice round numbers. It is wonderful. I am hugely looking forward to it just to get rid of 33 byte pubkeys to be honest.
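
The verification difference can be sketched as follows. Real script construction and signature checking are replaced with toy stand-ins so the example runs; nothing below is the actual gossip wire format.

```python
# Sketch of the channel announcement check described above. Real script and
# signature verification are replaced by toy stand-ins so the example runs.

import hashlib

def toy_sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a real signature: just a keyed hash.
    return hashlib.sha256(key + msg).digest()

def toy_verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return sig == toy_sign(key, msg)

def check_announcement_v1(onchain_output, key1, key2, msg, sig1, sig2) -> bool:
    # Current gossip: the announcement carries both funding keys and we check
    # the onchain output really is the 2-of-2 of those keys, plus both sigs.
    return (onchain_output == ("2-of-2", key1, key2)
            and toy_verify(key1, msg, sig1)
            and toy_verify(key2, msg, sig2))

def check_announcement_taproot(onchain_output, msg, sig) -> bool:
    # With Taproot the (cooperative-path) output is a single key, so the
    # announcement does not need to carry it: pull the key out of the output
    # and verify the announcement signature against it directly.
    kind, output_key = onchain_output
    return kind == "p2tr" and toy_verify(output_key, msg, sig)

# Usage with toy data:
k1, k2, agg = b"key1", b"key2", b"aggkey"
msg = b"channel_announcement"
print(check_announcement_v1(("2-of-2", k1, k2), k1, k2, msg,
                            toy_sign(k1, msg), toy_sign(k2, msg)))        # True
print(check_announcement_taproot(("p2tr", agg), msg, toy_sign(agg, msg)))  # True
```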

When does that discussion seriously kick off? Does Taproot have to be activated? I know you have got a thousand things that you could be working on.

We marked it as out of bounds back at the Adelaide summit at the end of last year. This was a conscious decision. There is more than enough stuff on our front burner without going into Taproot. People like AJ Towns and ZmnSCPxj can think about the further possibilities. I am pushing it off as far as possible. When it lands on my plate we will jump on it.

AJ and ZmnSCPxj are the brainstorming vision guys.

They will tell me what the answer is. That’s my plan of action. Ignore it until I am forced to.

BIP 8 - replacing FAILING with MUST_SIGNAL

https://github.com/bitcoin/bips/pull/950

I was going to avoid activation for two months but then Luke wrote on IRC “What is the latest update?” and I started to dig into it again. Luke’s perspective, and I think this is right, is that most people are leaning towards BIP 8 rather than Modern Soft Fork Activation (potentially 1 year). That is the majority view. Obviously this is not scientific in any way. This is gut feel from observing IRC conversations. Luke highlighted a couple of open PRs on BIP 8: one is this one that AJ opened, and there is another one that Jeremy opened. At some point, depending on how big a priority activation is, and there is still a lot of work to do in terms of review on Schnorr and Taproot, I think these PRs need to get looked at and reviewed.

That particular PR is waiting on updates from Luke as to what he thinks about it I guess. I sent around a private survey to a bunch of people on what their thoughts on Taproot activation timelines are. I am still waiting on a couple of responses back from that before making some of it public. I am hoping that will help with what the actual parameters should be. As to 1 year or 2 years or an initial delay of a couple of months or 6 months… What the exact parameters are, they are just numbers in the code that can be changed pretty easily, whereas the actual structure of the activation protocol, which is what these PRs are about, is a bit more complicated to decide. This particular change was mostly about getting back to the point where something gets locked in onchain. The way BIP 9 works is you’ve always got a retarget period, a couple of weeks, before there is any actual impact on what the rules onchain are. Whereas with the current BIP 8 in the BIPs repo that can happen from one block to the next; the rules that the next block has to validate according to change instantly. This is a little bit rough. That’s what that PR is about.
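
A simplified sketch of the state transitions with MUST_SIGNAL, evaluated once per retarget period. This is an illustration of the idea, not the exact rules in BIP 8 or the PR; the threshold and heights in the usage example are invented.

```python
# Simplified sketch of BIP 8 style state transitions with MUST_SIGNAL,
# evaluated once per retarget period. Illustrative only, not the exact BIP.

def next_state(state: str, height: int, signalled_ratio: float,
               start: int, timeout: int, threshold: float = 0.9,
               lockinontimeout: bool = True) -> str:
    period = 2016
    if state == "DEFINED":
        return "STARTED" if height >= start else "DEFINED"
    if state == "STARTED":
        if signalled_ratio >= threshold:
            return "LOCKED_IN"
        if height + period >= timeout:
            # Last period before the timeout: instead of silently failing,
            # require every block to signal, so lock-in still happens a full
            # retarget period before the new rules take effect.
            return "MUST_SIGNAL" if lockinontimeout else "FAILED"
        return "STARTED"
    if state == "MUST_SIGNAL":
        return "LOCKED_IN"
    if state == "LOCKED_IN":
        return "ACTIVE"
    return state   # ACTIVE and FAILED are terminal

# Usage: miners never reach the threshold, but lockinontimeout=True still
# walks the deployment through MUST_SIGNAL to LOCKED_IN to ACTIVE.
state, height = "STARTED", 100_000
for _ in range(5):
    state = next_state(state, height, signalled_ratio=0.1,
                       start=90_000, timeout=104_032)
    height += 2016
    print(height, state)
```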