Name: Matt Corallo

Topic: Lightning Development Kit (LDK)

Location: Chaincode Labs Podcast (Episode 13)

Date: May 12th 2021

Audio: https://podcast.chaincode.com/2021/05/12/matt-corallo-13.html

Matt Corallo presentation at Advancing Bitcoin 2019: https://btctranscripts.com/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/

rust-lightning repo: https://github.com/rust-bitcoin/rust-lightning

Intro

Adam Jonas (AJ): Welcome back to the office Matt, glad to have you back on the podcast.

Matt Corallo (MC): Thank you.

Update on LDK

AJ: We are going to start with LDK. Where are we at? What is going on?

MC: As listeners may be aware, LDK kind of grew out of a project called rust-lightning that I started a few years ago when I was working at Chaincode. It grew out of my desire to play around with Lightning and learn more about it. I had been contributing to Bitcoin Core and I didn’t really know anything about Lightning. It is a common problem that a lot of Bitcoin Core developers, at least at the time, were espousing. I started playing around with this, seeing what I could build, and then it slowly morphed into this idea of “There is no easy way to integrate Lightning into an existing thing.” You can take lnd and run it, maybe you have a second wallet, maybe you are downloading the chain twice. Either way it is a separate process that you are running which isn’t tightly integrated into whatever system you have. The same is true for c-lightning of course. It evolved into this desire to help people integrate Lightning into existing platforms, and especially existing mobile apps and existing non-custodial wallets. We have built a great product there. Square Crypto adopted the project and started running with it; a year and a half ago, I guess, is when we got started on it. We went around, spoke to a lot of wallets at the time, got a lot of really great feedback. A lot of people said “That would be great, I’d love to integrate Lightning but the current solutions for integrating Lightning into our platform, whatever that platform is, are not workable.” We started down that path but of course immediately got a lot of feedback of “I want it to run in my platform, my platform being React Native or some native Java thing on Android or some native Swift thing on iOS or whatever it is.” Everyone wants it to be native in their language.

Murch (M): Let me try to catch everyone else up. You are building a library in Rust and the idea is to make it usable from all sorts of other languages as if it were a native library for those.

Language bindings challenges

MC: Yeah, the core is written in Rust. Then we have different sample implementations; we have a number of different interfaces for it. It is a Lightning library, not a Lightning node. Things like how you write the seed to disk, how it gets backed up, that kind of stuff is all dynamic and pluggable, there is just an interface for it. You have to do that yourself, but then of course we have a number of different implementations that you might use off the shelf, or sample implementations, in a number of different languages. We invested an inordinate amount of time into building language binding support that is in a state we are really proud of. It took a lot more work than I ever guessed. It turns out all the language binding systems that exist that we came across in our research are really designed to stub out a function. You have some complicated function that takes a long time to run, you simplify it into a pure function probably, you stub that out into C or some other language and you call that from whatever your host language is. They are really not designed for a full object-oriented interface with interfaces that the user might be able to plug into and that kind of thing. And especially, as far as I’m aware, no language binding systems except for the one that we built are really designed to map different memory models. LDK and rust-lightning are written in Rust, and Rust has very clear object ownership semantics. Something owns an object; you can optionally have references but you generally don’t have a lot of references flying around. You have references for short periods but they have clear ownership semantics. This is the exact opposite of languages like Java, JavaScript etc where everything is basically an atomically reference counted pointer, a shared pointer in C++ terminology. It is assumed that everything just takes another reference count to this object. There are no clear ownership semantics anywhere, so you really have to do a lot of work to map them. If you have some Rust library and then you have Java objects that own these things you really have to map those ownership semantics correctly and do a lot of work at the FFI boundaries. We spent a lot of time building out stuff like that but we are really happy with where we ended up. We do have pretty good Java bindings now, we have Swift bindings, we have C and C++ bindings, we are working on JavaScript bindings. We are happy with where we ended up but it took about a full year to really get there.
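To make that ownership mapping concrete, here is a minimal sketch in Rust of the kind of FFI boundary the bindings have to generate. The type and function names are hypothetical, not LDK’s actual generated bindings: a singly-owned Rust object is exposed as an opaque pointer, and an explicit free function that the Java or Swift wrapper must call exactly once carries the ownership semantics across the boundary.

```rust
// Hypothetical sketch, not LDK's generated bindings: a Rust object with a
// single owner is handed across a C ABI as an opaque pointer, with explicit
// functions expressing who owns it at any point in time.

pub struct ChannelThing {
    counter: u64,
}

/// Constructs the object and transfers ownership across the FFI boundary.
#[no_mangle]
pub extern "C" fn channel_thing_new() -> *mut ChannelThing {
    Box::into_raw(Box::new(ChannelThing { counter: 0 }))
}

/// Borrows the object for the duration of the call; ownership stays with
/// the caller (the host-language wrapper object).
#[no_mangle]
pub extern "C" fn channel_thing_bump(ptr: *mut ChannelThing) -> u64 {
    let thing = unsafe { &mut *ptr };
    thing.counter += 1;
    thing.counter
}

/// Takes ownership back and drops the object. The Java/Swift wrapper must
/// ensure this runs exactly once (e.g. from close()/a finalizer), otherwise
/// memory is leaked or double-freed.
#[no_mangle]
pub extern "C" fn channel_thing_free(ptr: *mut ChannelThing) {
    if !ptr.is_null() {
        unsafe { drop(Box::from_raw(ptr)) };
    }
}
```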

AJ: Is there ongoing maintenance as you upgrade the library? Is that something that you have to continue to pay attention to?

MC: Yeah, over the last few months we’ve gone from having demo applications written in Rust to users working on integrating LDK into their systems in other languages, at least in a few cases in Java. When you get the first external people playing with a rich API you find all kinds of stuff: “This API is confusing, this isn’t clear. The easy way to do this is wrong.” We have been doing a lot of work on that front as well over the past number of months to really clean up the API and make sure that the easy way to do things, or the way that you might naively do things if you don’t read the docs closely enough, is also the right way and the safe way to use the library. Of course we are also keeping up with the Lightning spec, which changes, although slowly. We have a lot of things that we are trying to juggle right now.

Interoperability of Lightning

AJ: How do you think about interoperability with the other projects because that is clearly something that is an issue in the ecosystem?

MC: It is interesting, the core of Lightning is pretty robustly interoperable across all the implementations that exist or that are actively maintained. But the edges have ended up very fragmented. There are brand names for all the different things. Look at zero-value invoices: you generate an invoice with a zero value attached to it, and some clients will treat that as “any value”, where you enter a value or the sender enters a value. Some clients treat that as literally zero, some clients fail. That is one thing. There are a lot of these little “features” that have been added to different clients and they may be interoperable, they may not be interoperable. It depends on the UX and what client you are using. There are some issues that have cropped up there but at the same time this is also the experimentation phase of Lightning. You have different clients doing different things and experimenting. Then slowly things go back into the spec. You can look at the push payment stuff, I think the common brand name for it is keysend. That is something that was experimented on, there are a few different designs. A number of clients have now adopted a common implementation of that but also it will probably be replaced with something else that is a little better that ends up in the formal spec. Part of it is that it is kind of a crappy UX right now in a lot of ways, and also it is just these features that slowly migrate from experimentation and weird cross-compatibility issues towards the spec. People agreeing: “We tested this, we’ve experimented with it, we’ve found something that works really well and now it is in the spec.” Because we’ve had so many other priorities we haven’t been as active in the experimentation area. That is also something where we have a fairly flexible API and you can do a lot of that experimentation at the next level up. If you take LDK and use it to build an app you can experiment a lot with different potential Lightning features. That is something we haven’t spent as much time on as we probably should. We will get back to that as we round out some of the language binding features we’ve been working on. We have been keeping up with the spec.
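As a self-contained illustration (not the lightning-invoice crate’s actual API) of the zero-value invoice question, the sketch below treats a missing amount as “the sender chooses the amount”; clients that instead treat it as literally zero, or refuse to pay at all, are the source of the fragmentation described above.

```rust
// Hypothetical types for illustration only.

struct Invoice {
    /// None = no amount encoded in the invoice ("zero-value" invoice).
    amount_msat: Option<u64>,
}

/// Decide how much to pay: the invoice amount if present, otherwise the
/// amount the user typed in. Treating a missing amount as 0, or rejecting
/// the invoice outright, is where clients diverge.
fn amount_to_pay(invoice: &Invoice, user_entered_msat: Option<u64>) -> Result<u64, &'static str> {
    match (invoice.amount_msat, user_entered_msat) {
        (Some(amt), _) => Ok(amt),
        (None, Some(amt)) if amt > 0 => Ok(amt),
        (None, _) => Err("amountless invoice: ask the sender to enter an amount"),
    }
}
```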

LDK features

M: How does it plug in? You run Bitcoin Core or some other Bitcoin software, you have your business logic somewhere else, and that makes calls to the LDK library? You mentioned you have a broad and evolving interface. Have you thought about versioning that already, because eventually Lightning will evolve too? How does that work?

MC: It is a little bit dependent on the exact thing. We’ve been doing it case by case largely. It depends a lot on “Here’s this new part of the spec or change in the spec, and what are the requirements for doing that and using this new part of the spec?” The spec has negotiable features, you can negotiate with your peers: “I support this, I don’t support that” and you can use whatever features you commonly support. For each of those we have to address “What’s the point of this spec feature? Does it fix some problems? Does it prevent some attacks or is it just a nice thing to have? What are the drawbacks and complexities of adding it?” You look at something like anchor outputs. With Lightning without anchor outputs, you are playing this weird game where you are trying to predict future fee rates. You are trying to predict the future fee rate that you need to hit at the time your counterparty selects to maybe cheat onchain. This is an impossible problem. There is this anchor proposal that allows you to broadcast a transaction that has a lower fee rate but then use CPFP to increase its effective fee rate.

M: Bring your own fees.

MC: Right, bring your own fees. But if you imagine a Lightning wallet, in order to bring your own fees you have to have an available onchain output to spend into this CPFP transaction. And so there is a lot of additional complexity in the API. We have to require users to have available onchain outputs, but at the same time it fixes an arguably critical bug in Lightning’s overall design. When you have the kind of material fee rates that we have been seeing in the mempool over the past however many months, Lightning has relatively critical security issues. These kinds of decisions are hard and we have to weigh up: do we want to require and only support anchors? Do we want to support both? This is common, lnd is asking a lot of their users right now whether they should only support and require anchors or whether the cost of always having an onchain output is too much for their users. It is a very tough question, but it is also very case by case, because the trade-off of the API complexity versus the benefit is really what we have to weigh.
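The “bring your own fees” idea comes down to package feerate arithmetic: the pre-signed commitment transaction can pay a low fee as long as a child spending its anchor output, funded by one of your own onchain outputs, adds enough fee for the parent and child together to meet the current feerate. A back-of-the-envelope sketch, with illustrative numbers only:

```rust
// Illustrative arithmetic only: how much fee a CPFP child has to add so
// that the (commitment tx + child) package clears a given feerate.

fn child_fee_needed(
    parent_fee_sat: u64,
    parent_weight_wu: u64,
    child_weight_wu: u64,
    target_sat_per_kw: u64, // feerate the whole package needs to achieve
) -> u64 {
    // Fee the whole package must pay at the target feerate.
    let package_fee = (parent_weight_wu + child_weight_wu) * target_sat_per_kw / 1000;
    // Whatever the pre-signed parent doesn't already cover, the child adds.
    package_fee.saturating_sub(parent_fee_sat)
}

fn main() {
    // e.g. a ~720 weight-unit commitment tx pre-signed at ~253 sat/kW
    // (~182 sats of fee), a ~1000 WU CPFP child, and a mempool demanding
    // 5000 sat/kW.
    let fee = child_fee_needed(182, 720, 1000, 5000);
    println!("child must pay at least {} sats", fee); // ~8418 sats
}
```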

M: You said that it is mostly mobile wallets that are currently playing around with LDK integration. Who tracks their onchain state? Is that the LDK, is that the mobile client?

MC: Our line in the sand is: as long as the output onchain could potentially be spent by your counterparty, we deal with it. The second that the output onchain is solely yours, whatever the script form is, as long as it is just solely yours, we hand it to the user and say “This is yours. Spend it as you see fit.” We don’t do a normal onchain wallet where you handle normal onchain payments; there are plenty of libraries for that in any language you want. We aren’t going to reinvent the wheel there. We certainly handle any kind of punishment if your counterparty broadcasts an old state, anything like that, but once we get it to a point where it is just our funds we just give you enough information to spend it and you do what you want.
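A minimal sketch of that hand-off, with hypothetical types rather than LDK’s exact event API: once an output is solely yours, the library surfaces a descriptor containing everything needed to spend it, and your existing onchain wallet takes it from there.

```rust
// Hypothetical types, not LDK's exact event API.

enum WalletEvent {
    /// Funds that only we can claim, e.g. after a channel close or after a
    /// counterparty's revoked state was punished.
    SpendableOutput {
        txid: [u8; 32],
        vout: u32,
        value_sats: u64,
        /// Key/script material needed to sign for this output.
        spend_info: Vec<u8>,
    },
    // ... channel events, payment events, etc.
}

fn handle_event(event: WalletEvent) {
    match event {
        WalletEvent::SpendableOutput { value_sats, .. } => {
            // Typically: persist the descriptor, then sweep it to an address
            // from your normal onchain wallet at your leisure.
            println!("channel funds returned to us: {} sats", value_sats);
        }
    }
}
```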

M: The LDK takes care of tracking the channel commitment transactions? Does that happen via client side compact block filters? What do you use?

MC: That is another area where we have an API, we don’t demand anything specifically. We now have two different APIs: we have one for Electrum style, where it is all about the transactions, and then we have one for SPV or full node style, where it is all about “I download all the headers and I connect them in order.” It is just an API at the end of the day. You can do it in this SPV form where you make a call saying “This block is connected” and either you have the transaction data or optionally you don’t. You might need to do compact block filters if you are doing that. We give you all the information for Electrum if you are doing something like that, where you want to ask a server for a set of transactions related to your channel. Either way we give you enough information. We do have a sample that will do a sync against Bitcoin Core’s RPC interface or Bitcoin Core’s REST interface. If you want to use that you can just take it off the shelf and run it. Otherwise we assume you have that, because again there are libraries for that in nearly every language. We are not going to reinvent the wheel, you just have to integrate those.

M: Bring your own blockchain data.
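The two sync styles described above roughly correspond to two interfaces: a block-oriented one for full node or SPV backends that feed blocks in order, and a transaction-oriented one for Electrum-style backends that report confirmations of specific transactions. The trait shapes below are a hypothetical sketch mirroring that description, not LDK’s exact API:

```rust
// Hypothetical trait shapes for illustration; names and signatures are not
// LDK's actual chain-sync traits.

/// Full node / SPV style: the caller drives us with headers (and optionally
/// full block data) in chain order, plus disconnects on reorg.
trait BlockSync {
    fn block_connected(&mut self, header: &[u8; 80], height: u32, txdata: Option<&[Vec<u8>]>);
    fn block_disconnected(&mut self, header: &[u8; 80], height: u32);
}

/// Electrum style: the caller asks which transactions matter, then reports
/// confirmations and reorgs of exactly those transactions.
trait TxSync {
    fn relevant_txids(&self) -> Vec<[u8; 32]>;
    fn transaction_confirmed(&mut self, tx: &[u8], height: u32);
    fn transaction_unconfirmed(&mut self, txid: &[u8; 32]);
    fn best_block_updated(&mut self, header: &[u8; 80], height: u32);
}
```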

Bitcoin not having a spec versus Lightning having a spec

AJ: You have spent most of your time in Bitcoin working on Bitcoin Core. Now you have transitioned to a different kind of project that has a spec which people open PRs to and have conversations about, which is very different. Then there is also this environment of multiple implementations as opposed to one reference implementation that dominates the network. What are the pros and cons of that dynamic?

MC: It is interesting because on one hand it is still this living, breathing open source project where you have to work with a number of other people to agree on what makes sense. You make a proposal and then other people decide yes or no. There is maybe a little higher threshold for proposals because, unlike Bitcoin Core where you might make a proposal in the form of a pull request and then people have to review it, now you make a proposal in the form of a pull request to the spec in English, and presumably you have implemented it in your own implementation as well. But in order for other people to agree to it they have to go implement it themselves. It is no longer “Here, I’ve got code, look at the code”, it is “Here, I’ve got English. If you agree with it now you have to go also write code.” There are several implementations, and presumably several people have to go write code and also get that reviewed.

AJ: They might be interpreting that English in a very different way? Does that happen?

MC: Yeah, and that has also been an issue for Lightning in the past. It has got a lot better but there were a number of issues early on, and you still find things occasionally, where if two implementations disagree on the current state of a channel or the rules about what you can and can’t do in a channel they will force close. That can be a relatively critical denial of service vulnerability in the network. If you imagine c-lightning and lnd have understood the English of the spec slightly differently in some context and think the rules of the game are slightly different, then a user might be able to forward a payment across any c-lightning to lnd or lnd to c-lightning channel and cause them to close the channel and potentially split the network. Split the Lightning Network such that no one can relay a payment between c-lightning and lnd, or lnd and eclair, or Electrum and eclair, or whatever. It is fraught with these kinds of issues and it is something that is really tough to get right. There are a lot of things to do with Lightning. It is written, it is this thing, it is working, but there are a lot of edge cases and a lot of things that need tuning. Part of it can’t really be tuned until there is real world experience with people running this thing and people testing it, something that is being gathered. Part of it is a lot of work and really hard. For issues like this where you have to make sure that nodes exactly agree on what is going on, maybe you can reduce the likelihood of these kinds of issues with countermeasure strategies. It is a very different world from Bitcoin Core in that sense. With Bitcoin Core you are really worried about issues where you might split the network between old versions and new versions of Bitcoin Core, but at least it is the same codebase so you can review an individual patch and say “This in the patch might split between something that does have this patch and something that doesn’t have this patch.” Not “I have to go read some Go code, some C code, some Rust code and some Python code and make sure that they are all doing the same thing.”

The state of the Lightning Network

AJ: You have been talking about different vulnerabilities and different denial of service issues in Lightning. What is the state of the network now? Have we moved beyond reckless? How robust is this?

MC: I guess it is a question of what is your pain threshold. Is a denial of service vulnerability a critical issue? Well no, you are probably not going to lose money. The two ends of the channel aren’t losing money, they’re just force closing the channel to the chain and they are going to figure out who has the money. At the same time, suddenly people’s payments might fail, and you are going to break routing in the network. Luckily we haven’t seen these kinds of attacks exploited. Of course there are maybe more obvious ways you might denial of service the network; issues around filling the available capacity of channels are much more broadly discussed, much more readily visible and also difficult to protect against. Again, not something we’ve seen materially exploited, but if you are relying on this thing to always work 100 percent of the time maybe that is not where we are today. At the same time, we talked a little bit about anchors; that is something that is really important for the ability to ensure that if your counterparty does broadcast a transaction you can appropriately punish them if they are broadcasting an old state, a revoked state. That is something that is optional or on by default in many Lightning implementations, if not most. People are going back and forth on whether to require it. It materially harms the UX of Lightning, you always have to have this onchain output. You can’t build a Lightning wallet that is only Lightning and that holds all of its balance in Lightning. This was a model that a lot of people wanted to do because it is great: you should always be able to send your entire balance if you want to. You should be able to have that entire balance available in the way that you expect, not have this weird small part of your balance that is always on the side, that you can’t really use, that is just sitting there. There are still a lot of issues to figure out in Lightning. For the most part people open channels with counterparties that they don’t really trust, but trust not to go and deliberately modify their software and do all these crazy attacks that are fraught with requirements and edge cases. It is not really a problem in practice. But if you are talking about really opening a channel with someone who you think is potentially actively malicious you might want to think twice.

AJ: Are you observing cheaters on the network? Is that happening?

MC: I think for the most part we haven’t seen much. Certainly occasionally someone might restore an old backup and accidentally broadcast an old state. I am not aware of any cases where someone has performed any of these kinds of long range, poor fee estimation attacks that anchors and such prevent. They are much more complicated, there is lower hanging fruit. I don’t think we have seen that. We also haven’t seen channel jamming. It is something that gets talked about on the mailing list left and right and academic papers get written about it. It is not like it is not out there. It is not like people aren’t aware of the potential for someone to denial of service attack the Lightning Network. But for the most part we haven’t seen it because why bother? It is a problem, people are working on fixing it. If it did happen payments would get stuck for a while and then people would work around it a little bit. But also it is one of these griefing attack, denial of service things. There are a lot of issues to solve but it is also not something that is harming the immediate UX that people have today.

Eltoo and the punishment dynamic

AJ: I wanted to go back to the punishment dynamic. Clearly with eltoo those dynamics change. It doesn’t look like eltoo is going to be around the corner given how hard soft forks are. I would be interested in your take on punishment versus the somewhat more forgiving dynamic with eltoo.

MC: That’s a really good question. I would hope that there is some cost. Certainly in order to update the state onchain in eltoo… If someone broadcasts an old state, the only cost of updating that state falls on the person who is trying to update the state and go back against the counterparty, because onchain fees can be non-trivial. There has to be something to offset that. Whether that is a high fee, 100 percent of the channel like punishment is now, or whatever it is, there has to be something there. But how to find the right value is a tough question.

M: How would you even push for a punishment in eltoo where update transactions are symmetric? Would you have to reintroduce asymmetry there?

MC: That is a good question. I don’t know offhand. It is possible you might have to reintroduce asymmetry; I don’t know if that is necessarily the worst thing in the world. The big win with eltoo of course is that you don’t have to store old states. You can just store one state. Having to store two states is not very different from one state, as long as you don’t have to store n states. Maybe there is a way to do that, I haven’t dug into that kind of thing. You have to have something. You can’t just say “If someone broadcasts an old state the counterparty has to spend onchain fees and a CPFP in order to get to the latest state”, because again onchain fees can be non-trivial and you are exposed to some amount there. You have to do something but it is unclear exactly how you would require that.

Regulatory and KYC issues

M: I have seen the argument made a few times that you sort of have to trust your channel partners. People infer from that that the network is going to trend more towards trust relationships, at least for the major channels. It may become overlaid with KYC. Is that something that concerns you?

MC: Not really. A lot of the “you have to trust your counterparty to not deliberately cheat in order to not lose money” things are things that we can fix, fixed by things like anchors and that kind of stuff. The more interesting cases that we’ve seen a lot more recently are around things like: if you have a mobile wallet and you get an inbound payment, the vendor of that mobile wallet might run a node that opens a channel to you for that inbound payment. The UX of the mobile wallet just goes ahead and displays “Yes, you’ve received this payment” even though the channel is still zero-conf, trusting the vendor of the mobile wallet to not double spend you. This is a fairly reasonable thing and significantly improves the UX. It doesn’t require any KYC, it is just trusting the vendor of the mobile wallet. We’ll see more stuff like that but it doesn’t necessarily require KYC for that type of interaction. Certainly if you are opening a very large channel, yeah, you probably want to know who your counterparty is, but you don’t necessarily need to KYC them so much as “It is another business that I know” or “It is a friend of mine”. We are not really talking about that kind of level of issue. There are things you can do, especially on mobile, where potentially I know that you are actually running my app and so hopefully you are not going to be able to modify the app or something.

M: This week there was an update to the FATF guidance. It was very confusing on whether users are VASPs or not. Another thing that gets brought up with Lightning often is: will users on the Lightning Network who forward transactions become regulated? Is that something that you guys are thinking about when you are building out your software?

MC: It is not something that we are worried about, but it is also not our domain. It is more the domain of advocates. I think it is understood that that’s not really the current policy. As far as I’m aware no regulators have tried to push that as policy. There is uncertainty about it because regulators haven’t clarified that that’s absolutely never going to happen. That’s a question for policy advocates, less us developers, or potentially even lawyers. There are a lot of issues with where the puck is going for regulation around Bitcoin payments. A lot of users, certainly almost everyone, gets into Bitcoin by buying Bitcoin on some centralized platform. Those are really obvious chokepoints. That’s the key area that people like the FATF are going after and targeting: basically bringing any other named entity that they can under those kinds of regulations. There’s a world where they try to argue that any Lightning node is also bound by those regulations. That’s not really materially different from them arguing that every node is bound by those regulations or that anyone who has a wallet is bound by regulations that say they have to KYC their counterparty. From a technical perspective they can argue that but it doesn’t really change anything. If you are a Coinbase or someone and you are running a Lightning node and the FATF says “Any payment you send or receive, period, including on your Lightning node, needs KYC”, that is a problem in and of itself. Lightning is unrelated. You see things like decentralized mixing services; regulators would love to shut that down but at the end of the day it is just this pure technical thing that exists. You can’t really go after that directly. The same is true for Lightning, the same is true for Bitcoin broadly, the same is true for decentralized mixers. The real question is what kind of KYC requirements are they going to try to apply to anyone making a Bitcoin transaction and to these exchanges that people buy from? Are you going to be able to withdraw your coins onto your own wallet without having to KYC anyone you ever transact with? Those are big questions. How that applies to Lightning, it could apply to Lightning, but so what? It applies to everything and that is a problem in and of itself. It is not specific to Lightning.

Future of LDK

AJ: Where does LDK go from here?

MC: We have spoken to a number of especially mobile wallets; I think currently that is our bread and butter because we do have a much lighter weight implementation. It is designed to integrate, versus a lot of the other Lightning implementations which aren’t. We have a number of people who have been playing around with LDK and exploring integration and starting to work on it. It is all early. Obviously even if you have LDK doing most of the backend work for you and implementing Lightning for you, you still have to update your UX and update how you show payments, how you scan QR codes, all that kind of stuff. It is still work; it doesn’t come for free, as much as we would like it to be as free as possible. We are continuing to talk to people, people who are interested in integrating Lightning into a system where they don’t already have Lightning, or who want something that integrates a little tighter with their system than lnd or c-lightning, where you run this binary and then you make RPC calls to it. We are interested in chatting to people; they can check out our website, come to our Slack. Because we are still tuning the API to ensure that it is good for users that are very hands on… when someone comes to us and says “I want to integrate it in this way” we are pretty active in responding and helping them in many cases. If you want something to integrate Lightning, come chat with us. We are happy to be active and very hands on if you want us to be. Some free engineering hours.

M: Thanks Matt, that was insightful.