2022-02-17

Austin Bitcoin Socratic Seminar 25

https://austinbitdevs.com/2022-02-17-socratic-seminar-25

We do have an open and public Telegram group: https://t.me/BitcoinAustin. There's also another community on Sphinx if you want, with spam protection built in. If you go to AustinBitDevs.com you can get all these links. Each month we put together a reading list; sometimes it's the night before, sometimes it's a little more ahead of time. If you want to follow up on anything, you can find the reading list there. When I try to find things for the list, I use other Socratic Seminars' lists around the country. I know others use ours too. Jimmy Song's newsletter is also one of my top resources for this. If you're not signed up for that, it's a great newsletter.

Everyone is hiring. If you're a bitcoin employer and you want to publicize a role you're hiring for, you can post it on Bitcoiner Jobs; or if you're looking for a new job and getting out of your fiat job, look at Bitcoiner Jobs. Unchained Capital is also hiring.

Coming up is The Bitcoin Conference. The coupon code is AUS25 for us. You get 25% off if you use the code AUS25. I was told that we want to beat Matt O'Dell's code, so please use it so that we can make fun of him. If you already bought a ticket? That sucks, I don't know. Buy another.

For Unconfiscatable in the beginning of March, we have two tickets to give away. They are non-transferable and you can't sell them. We have to figure out how to do this. Nobody responded in the group chat about how to do this. Maybe we can go by who traveled the farthest. We actually have to go to the conference... the first two people who find me afterwards who want to go... we'll figure it out. Something.

Last but not least, we have Sats by Southwest. There's a local Austin event called South by Southwest. They didn't want bitcoin to be part of it, so we're throwing Sats by Southwest. Kyle, I know you're involved in this? Yeah, so Sats by Southwest is now just known as SatsBy: the official SatsBySouthwest logo was SXW with a lightning bolt, we got a C&D letter, and they were not fans of how similar it looked to their logos. So we are explicitly not endorsed by them; if you like bitcoin, you can come participate in what we're doing. If you're a bitcoin company, or if you're a person hosting a bitcoin-focused event during SXSW, then you can go to the SatsXW website and fill out a form to show your event on the site. There is the first pleb-hosted hackathon; we have Pleb Lab wizards working here... SuperTestnet... these guys have a lot of cool ideas that they don't always have time to build out. At the SatsBySW hackathon you will get personal mentorship from the person who created that idea. There might also be a... block party on the toll. What about the BitABC? If you have been to Austin Bitcoin Club, we usually host that at The Capital Factory on Brazos, and that officially changes this month. Austin Bitcoin Club, the first Thursday of every month, will be hosted here at Bitcoin Commons Austin. Same general crowd, but the difference is that we're super non-technical and we usually have philosophical discussions. Basically we just eat tacos, drink beer, and kick it with bitcoiners, so come have a good time.

We also received news that we have Coldcards to give away to the person who traveled the furthest to this event. Raise your hand if you traveled far. This is sponsored by Coinkite. They are provably sealed; don't trust, verify though. Raise your hand if you think you traveled the farthest. Mexico? Portland? Tampa? Portland is pretty far. Canada might now be the furthest border considering everything going on there. I'll sort this out while you guys get into the content.

There's a ton of exciting topics to get through this month. It's fun particularly because there are some somewhat controversial topics, which are the best way to learn in my opinion. We have a couple more newsy items before we get to those.

ASICs

https://ogbtc.substack.com/p/february-2022

The growth in ASIC manufacturers is interesting. A lot of the mining in the world is coming to Texas right now. There are two main new entrants coming to market soon: one is Intel and one is Blockstream. I think the Intel team is Austin-based, in fact. Does anyone here know about those projects, or have experience or comments on mining and on using non-Antminer hardware or anything like that? Someone want to contribute to this?

... someone said the energy efficiency... multiple orders of magnitude more energy efficient? The comment was that the Intel chips would be orders of magnitude more energy efficient, but the caveat is that it's "than GPUs". The specs they are advertising are presumably better than Antminer S9s. Are they going to be 5 nm? I don't know. It's not the most descriptive article. They haven't said. Some articles have said they may be manufactured by TSMC. They are using TSMC. It also said they won't start production for 18 months. No idea what the state of the market will be then. I think Intel is getting into it so that AMD doesn't get into it and compete with them later. I think this ties into everything going on in the world: lots of uncertainty around China, supply chain concerns, and a lot of companies want to bring manufacturing a little bit closer. An 18-month lead time shows some of the domestic problems we're facing. But one of the advantages bitcoin brings to many markets is that you can start planning for this now; once you have these miners in the market, and if you manufacture in Texas with a relatively friendly government, then you can start making money on them pretty quickly. Even if you operate at a loss, you can cover some of your costs early on. There's also the prospect of domestic competition; Blockstream has publicly committed to getting into the market as well. Competition is great, especially if it's based in Texas. Square/Block is also maybe thinking about it. I hope there are some people here from Block. Anyway, it's good to see other people talking about this. There needs to be competition and I hope people follow through on it.

Bitcoin bounties

https://github.com/futurepaul/awesome-bitcoin-bounties

This is a new list keeping track of bitcoin bounties. This helps decentralize and distribute development a lot more, having these and making them a little more public. We talked a lot about these last month, but here's a way to contribute to the development ecosystem, or to raise a bounty if you have a project looking for one. The author of this list is actually here tonight. There are three HRF bounties: the lightning tip jar, stabilizing the lightning... the chaumian e-cash one; there's a bounty here for putting dark mode in the Bitcoin Wallet UI Kit. There's a Wasabi bounty for a privacy-focused lightning wallet. There's a JoinMarket one for 50 million sats to make a nice web UI, and there's a 0.25 BTC bounty for adding dual funding to lnd. Go try out for these.

We need more funding like this. Anyone who has been around bitcoin for a while, you might remember the Bitcoin Foundation and we have seen what happens to these foundations in other ecosystems. It's hard to get funding if you're working on open-source, and it's hard if you need to spend years developing before something can be brought to market. I hope these get funded in this more decentralized way.

Automatic fee bumping (eclair)

https://github.com/ACINQ/eclair/pull/2113

This one caught my eye because it's related to some other things we have on the docket for this month. There are two related PRs, in the eclair lightning implementation: replaceable transaction fee bumping. This is a pair of PRs into eclair that create an automatic system for fee bumping. We're going to talk through the implications surrounding fees for layer 2 protocols. One of the challenges in lightning is that you have to pre-sign transactions and pre-commit to fees; this is a problem because if you open up channels, you have pre-signed transactions, and then six months later maybe your counterparty disappears or leaves the channel or tries to steal from you, and you're already pre-committed to fees that may be too low. There are some fee bumping mechanisms built into lightning. This is the second implementation to do anchor outputs properly? eclair and lnd? lnd was the first one to have anchor outputs, which is basically a mechanism to do child-pays-for-parent on those closing transactions. eclair is the second implementation. In this PR, they added a mechanism to bump automatically. They had to add some heuristics to figure out if a transaction was stuck or if the fees weren't high enough, and now it handles this situation behind the scenes. I don't think people realized for a while that those things weren't taken care of for you, but it basically compromises the entire protocol if there isn't a legitimate threat of fee bumping. 2700 new lines of code, 1000 deletions, 43 files changed. I like seeing PRs with lots of lines deleted. The description is cool; it talks about the challenges that you have to plan through and whatnot. Any comments or questions about anchor outputs or fee bumping or lightning? Does everyone know what lightning is? You can find me after for that.

What's an anchor output? Anchor outputs are essentially the way you pre-commit to having an output for each participant in the channel. Each participant then has a mechanism through which they can bump fees via CPFP: you have an output that you own, and you can spend it to bump the fee. CPFP (child pays for parent) is a way to add fees: the child transaction spends more in fees, enough to pay for both transactions, the parent and the child. With an anchor output, you have an output you can spend from so that you can bump the fee. The other outputs are HTLC outputs with a lot of conditions that require participation from your counterparty. It's kind of like if a company is going bankrupt and another one is going to buy it out and pay for its liabilities... maybe that's the analogy. The miners get more fees, the person who is owed money gets paid, so they are okay letting the bankrupt company off the hook, the miner gets the fee and everyone is happy.
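To make the CPFP arithmetic concrete, here's a minimal sketch with made-up numbers (not taken from the eclair PR): the child spending the anchor output pays enough fee that the parent-plus-child package clears today's feerate, even though the parent's own fee was fixed when it was signed.

```python
# Minimal sketch of the CPFP arithmetic described above (illustrative numbers only).

def cpfp_child_fee(parent_fee_sat, parent_vsize, child_vsize, target_feerate_sat_per_vb):
    """Fee the child must pay so that (parent + child) hits the target feerate."""
    package_vsize = parent_vsize + child_vsize
    needed_total = target_feerate_sat_per_vb * package_vsize
    return max(0, needed_total - parent_fee_sat)

# Example: a commitment tx signed months ago at ~1 sat/vB, bumped to 30 sat/vB today.
parent_fee, parent_vsize = 700, 700   # ~1 sat/vB, locked in when the channel was set up
child_vsize = 150                     # rough size of a tx spending the anchor output
print(cpfp_child_fee(parent_fee, parent_vsize, child_vsize, 30))  # -> 24800 sats
```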

TXHASH + CHECKSIGFROMSTACKVERIFY

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html

Taproot activated recently. People are asking: what's next? Some people were asking about CTV and ANYPREVOUT. There's been a heated discussion with some drama lately. Then Russell O'Connor dropped a bomb with TXHASH + CHECKSIGFROMSTACKVERIFY, which kind of lets you do both things in a single proposal. Well, kind of.

It's one proposal that requires two opcodes; you could also have two opcodes with CTV that do both things, kind of. It's a single proposal with two possible soft forks, and you would probably soft-fork both of them together. This is actually where some of the debate is: do we need both of these proposals, or do we need something else? You're confused? Good. So are we.

OP_CTV is... OP stands for opcode. Opcodes are the operations you use in bitcoin script. CTV is CHECKTEMPLATEVERIFY, which is a way to do covenants. There are many podcasts and essays talking about that. Basically, you're locking outputs into future types of spends; that's what covenant-type proposals do. CTV and ANYPREVOUT are pseudo-covenant-style things that each enable different feature sets, and there's also an area of overlap between the two. The question roconnor was putting forward was: can we have one proposal that covers both the overlapping and the non-overlapping areas? That was his proposal. It's quite complex.

There are a lot of tradeoffs. One of the arguments against really any new soft fork, and it has been brought up for CTV and also for ANYPREVOUT even though that's not an opcode, is: is this the best way to accomplish the goals the proposal sets out to achieve? It's always an open question. Even in the course of this email thread, which seeks to solve those problems, new questions are being brought up about other problems it doesn't solve, so we will need more stuff later on.

To explain what these two separate proposals, CTV and ANYPREVOUT, do: CTV says here's a hash of a future transaction that this output has to be spent in for the spend to be valid. Think about locktimes, where you say this utxo can only be spent in a block after this time; CTV says it can only be spent in a transaction that looks like "this". ANYPREVOUT signs everything on the future transaction but doesn't commit to which previous output is being spent, which means you can rebind signatures and chain transactions together.
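As a rough sketch of what "looks like this" means for CTV: the template hash commits to the shape of the spending transaction (its outputs, locktime, sequences, and so on) but not to which coins fund it. This is a simplification of BIP 119's serialization, which differs in details (for example it also covers scriptSigs when they're present), so treat it as illustrative only.

```python
# Simplified sketch of a CTV-style template hash. See BIP 119 for the real
# serialization; the point here is WHAT gets committed, not the exact bytes.
import hashlib
import struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def ctv_template_hash(version, locktime, sequences, serialized_outputs, input_index):
    h = b""
    h += struct.pack("<i", version)
    h += struct.pack("<I", locktime)
    h += struct.pack("<I", len(sequences))                                   # input count
    h += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    h += struct.pack("<I", len(serialized_outputs))
    h += sha256(b"".join(serialized_outputs))
    h += struct.pack("<I", input_index)
    return sha256(h)
    # Note what is *absent*: the prevouts, i.e. which utxos fund the spend.
    # ANYPREVOUT works from the other direction: a normal signature that simply
    # drops the prevout from what it commits to.
```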

OP_TXHASH is roconnor's approach to combining them. TXHASH basically says: we're going to have this opcode, TXHASH, which accepts a bunch of flags. It pushes a hash of the transaction it is spent by; rather than CTV, which pre-defines everything that will be hashed, it says give me a bunch of flags that tell me what's going to be hashed. That means you then have to have not one, not two, not three, but 18 flags that this thing supports to piece together the different things. It's aiming for maximum utility here.

18 flags might sound like a lot, but you can do that with 3 bytes, so it's not a huge deal... The number of potential combinations is large, though, and it could get scary. Could we understand everything that can happen and put a security model around it? There are certain combinations of the flags that can cause problems as well. The CTV proposal has been out for a while, it has gone through a lot of iterations, and it has been changed: to avoid hash cycles, certain fields aren't included in the OP_CTV hash, because if you hashed them you'd get into an infinite loop or you might add more burden to nodes.

Another issue with the flags is that they introduce more caching responsibilities for nodes. There are tradeoffs in every type of proposal. What these flags are saying is that, for each one, you put a flag on the stack saying the transaction hash should include that field. So let's say bit 1 means the version is covered; then the following hash is going to be a hash that covers the version field of the transaction. Bit 2 might say locktime, and the locktime is going to be hashed. You're telling verifiers which fields from the transaction they need to hash in order to validate that the thing is correct.
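To make the flag idea concrete, here's a toy version of that bitfield selection. The flag names and field names below are made up for illustration; the actual TXHASH proposal defines its own set of roughly 18 flags and its own serialization.

```python
# Illustrative only: these flag names are invented, not the actual TXHASH spec.
# The script supplies a bitfield choosing which transaction fields get folded
# into the hash that OP_TXHASH pushes onto the stack.
import hashlib

VERSION   = 1 << 0
LOCKTIME  = 1 << 1
INPUTS    = 1 << 2
SEQUENCES = 1 << 3
OUTPUTS   = 1 << 4
# ... the real proposal has on the order of 18 such flags

def txhash(tx: dict, flags: int) -> bytes:
    """tx is a dict of pre-serialized byte fields (hypothetical layout)."""
    parts = []
    if flags & VERSION:   parts.append(tx["version"])
    if flags & LOCKTIME:  parts.append(tx["locktime"])
    if flags & INPUTS:    parts.append(tx["inputs"])
    if flags & SEQUENCES: parts.append(tx["sequences"])
    if flags & OUTPUTS:   parts.append(tx["outputs"])
    return hashlib.sha256(b"".join(parts)).digest()

# A CTV-like commitment would select roughly everything except the inputs;
# an ANYPREVOUT-like one would drop the prevout being spent.
```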

One interesting aspect of this is that it reverses the semantics: OP_CTV has verify semantics, while TXHASH has push semantics, pushing the hash onto the stack instead of verifying it directly.

It's an interesting debate, because there's a much broader debate in the bitcoin ecosystem right now about how to do upgrades. What's really interesting is when you have a bunch of the smartest people in the room who all agree that we want some form of upgrade that maps to some set of requirements, like covenants, and then none of them can agree on what that should look like. That's tough. Maybe that's good, maybe that's bad. I think it's an interesting case study in watching how these debates play out. In this debate, you have the guy who developed ANYPREVOUT, the one who developed the TXHASH proposal, and the one who built OP_CTV. And then someone pipes up and says, can't we all agree that we want general-purpose covenants that are recursive? There's a whole spinout conversation about this too: basically a branch of the original TXHASH conversation where it was asked, well, we can all agree on recursive covenants, right? It was questioned whether recursive covenants are an issue or not... the response was that the last time this was discussed at length, nobody replied to say they were opposed to the idea. That was David Harding's reply.

If you don't know what recursive covenants are: covenants put restrictions on the next transaction that happens, and a recursive covenant can put restrictions on all subsequent transactions. I think this is an interesting debate, where, you know, you'd think there is consensus on these things and it turns out there's not. I think there was an assumption that there was consensus on taproot being a good idea, and some people who aren't super loud but are super smart don't care as much about something going into bitcoin that might break it... there are always arguments against certain things. Whether those arguments are correct or not is a totally different question. But then do we base our progress on whether anyone is disagreeing? That creates stop energy: every time a new proposal comes out, there are additional reasons to wait.

A lot of the support for ANYPREVOUT is for eltoo channels. Does this TXHASH proposal, mashing together ANYPREVOUT and CTV, make both camps happy? It's funny: the CTV people and the ANYPREVOUT people are not mutually exclusive. Both camps would be happy with either proposal getting in, but there's a question of optimization. That is the ultimate goal: do both things in one proposal, but still two opcodes. This would also only be in tapscript.

In the post, he shows how to simulate CTV with this mechanism and how to simulate ANYPREVOUT with it. The CTV version uses more on-chain space and the ANYPREVOUT version has some other small issues. So do we want the clean, dedicated version of ANYPREVOUT instead, even if it's less explicit than having the new opcodes?

There's a proposal out there for eltoo channels using CTV plus, I think, CHECKSIGFROMSTACK. With two opcodes, similar to this proposal, you can do both types of things. It's, I don't know, it's interesting. It's kind of unfortunate. The conversations are great, but it's unfortunate that having these conversations can drive everything to a halt because no decision has been made. So all you have to do is email this list and kick up some dirt?

When we make links to the mailing list, we include the links and you can click through to the next message; it does a good job of keeping things in order. It's nice to see how core development works and how these proposals happen. Something similar happened around taproot: MAST, taproot, graftroot, Schnorr, and how do we do all this? We eventually came to some kind of consensus about how to get it done.

So is TXHASH the taproot of all of these, combining everything together? Apparently not. That's the thing: it kind of doesn't achieve what it set out to do. The idea is that we should solve all future problems, or as many as possible, with this proposal, but then it turns out there are still more future things we might want to do. So the argument against CTV or ANYPREVOUT separately is that we still want to add things later on, and then ajtowns has this comment saying, well, if this gets us 2% of the advancement in scripting that we want, and 1.5% of the new features, and TXHASH gets us 2% when we want 10%... but basically we know a lot about ANYPREVOUT and CTV, so why not get them done now? On the Unhashed podcast, Ruben Somsen was talking about... it will take a minimum of two... Simplicity is only a few years out... and then ten years later we're still not there yet.

Why does CHECKSIGFROMSTACK have anything to do with this? TXHASH just puts the hash on to the stack and then you need to check a signature against it to see if it's correct. CHECKSIG checks the signature against the actual transaction you're broadcasting but you need a way to check the signature against other things that aren't the transaction. Here, you need CHECKSIGFROMSTACKVERIFY to simulate OP_CTV.

An easier way to think about it is that with ANYPREVOUT you're signaling that we're not going to put the inputs into the hash when we're checking the signature. If we had ANYPREVOUT, you would just have flags saying don't include those inputs; but here the hash isn't checked against the transaction implicitly, it's sitting on the stack, so you need the extra opcode to verify a signature over it.

SIGHASHes in bitcoin are a way for you, when signing a transaction, to signal what you're committing to with that signature. We have a few different SIGHASH flags in bitcoin right now: ANYONECANPAY; SIGHASH_ALL, which is the most common; ANYPREVOUT is a proposal for a new sighash that basically says literally any prevout... SIGHASH_SINGLE is another one, where you commit to only one of the outputs, the one matching your input. Combined with ANYONECANPAY, other people can add more inputs to the transaction on their own; some protocols take advantage of that. CHECKSIGFROMSTACK and TXHASH create a new part of this language: there's a set of flags for "here are the things I'm going to hash", which is what signatures already do in bitcoin transactions. So you push a signature onto the stack and you verify that, which is exactly how sighash flags work, but this proposal abstracts it a little and puts it into the scripting language explicitly. I think this is a good thing, because it adds more flexibility and you can do a lot more with it.
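Here's a simplified picture of what the existing sighash flags commit to. The real digest algorithms (legacy, BIP 143, BIP 341) are more involved; this only illustrates which parts of the transaction the signer pins down, and the field names are just for illustration.

```python
# Simplified picture of sighash-flag semantics, not the real digest algorithm.

def committed_fields(sighash_flag: str, input_index: int, tx: dict) -> dict:
    """Return which parts of `tx` a signature with this flag would commit to."""
    fields = {"version": tx["version"], "locktime": tx["locktime"]}
    if sighash_flag.endswith("ANYONECANPAY"):
        fields["inputs"] = [tx["inputs"][input_index]]    # only my own input
    else:
        fields["inputs"] = tx["inputs"]                   # every input
    if "SINGLE" in sighash_flag:
        fields["outputs"] = [tx["outputs"][input_index]]  # only the matching output
    elif "NONE" in sighash_flag:
        fields["outputs"] = []                            # outputs left open
    else:  # ALL
        fields["outputs"] = tx["outputs"]
    # A future ANYPREVOUT flag would additionally drop the prevout (which coin
    # is being spent) from the committed input data.
    return fields
```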

Alright, let's move on.

CTV improves DLCs

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html

DLCs are discrete log contracts. It's like a way to do DeFi on bitcoin. An oracle attests to an event, and you can form a bitcoin transaction whose outcome changes based on what the oracle signs; you do this in bitcoin using adaptor signatures. For my Super Bowl bet, we had three transactions based on the possible outcomes. We had a signature for each of them; the oracle attests to the event, and when that oracle signature goes out it makes one of the transactions valid and the contract completes. Three outcomes is pretty simple, but with the bitcoin price there are technically infinite outcomes, and there are still millions of signatures required in DLCs. It's a nightmare.

The cool thing with DLCs is that right now when we use oracles, you're basically trusting them: you call their API and they give you a price. But with DLCs, an oracle publishes a signature attesting to the result. Anyone can take that signature and that result and use them to settle their bet. So this adds... decentralization may not be the right word, but it adds trustlessness to the system, and certainly more privacy. If you don't want to rely on a single oracle, there's a way in the current protocol to get multiple signatures from multiple oracles, and CTV can improve this.

Lloyd is a Square Crypto grantee... a good cryptographer. He had a proposal where he threw out everything about how DLCs are built today and did it with CTV. It's a million times better. It's really cool. Instead of creating a million signatures or something, you create a tapleaf branch for every single possible outcome. Then you check that the oracle's signature is correct, and you have a CTV committing to the payout... you don't need to sign all these messages back and forth. Now you can just build the contract, and your counterparty can build it themselves and check that the contract is the same. We can do this asynchronously and solve a lot of problems with DLCs.
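A very rough sketch of the shape of that construction: one tapleaf per possible outcome, each leaf requiring an oracle attestation for that outcome plus a CTV-style template pinning the payout split. The helpers below are stand-ins with made-up names (the real proposal's cryptography is different and more careful); the point is that either party can build the same tree independently with no signature exchange.

```python
# Structural sketch only; the "crypto" here is hash-based placeholders.
import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def oracle_point_for(oracle_pubkey: bytes, outcome: str) -> bytes:
    # Stand-in for the outcome-specific oracle attestation key.
    return _h(oracle_pubkey, outcome.encode())

def payout_template_hash(alice_sats: int, bob_sats: int) -> bytes:
    # Stand-in for a CTV template committing to this exact payout split.
    return _h(str(alice_sats).encode(), str(bob_sats).encode())

def leaf_script(outcome_key: bytes, template: bytes) -> bytes:
    # Conceptually: "<outcome_key> CHECKSIGVERIFY <template> CHECKTEMPLATEVERIFY"
    return outcome_key + template

def build_dlc_leaves(outcomes, oracle_pubkey, alice_payouts, bob_payouts):
    """One tapleaf per outcome; both parties build this independently and just
    check that the resulting taproot output matches."""
    return [
        leaf_script(oracle_point_for(oracle_pubkey, o),
                    payout_template_hash(a, b))
        for o, a, b in zip(outcomes, alice_payouts, bob_payouts)
    ]
```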

Covenants in general: you are committing to a future version of a transaction. The way DLCs work now is similar to lightning: we have an exchange of messages, and we sign a bunch of stuff ahead of time. There's something like a 10-35x estimated speed improvement with this proposal, and you go from sending megabytes of data between participants down to hundreds of bytes, which is a big improvement. The one downside is that today, without CTV, when you close it, even if your counterparty goes offline, it looks like a normal spend, but with this it looks obvious that it's a DLC. With taproot, you could cooperatively close it and get your money out; you only lose some of the privacy on a non-cooperative close.

I didn't read this one fully, but there are some potentially doable ideas in here about lightning. I think jonasnick made a post showing that you could do this not just with CTV but also with ANYPREVOUT. So with TXHASH you could probably do it as well.

The same property enables another benefit of CTV: non-interactive channel factories. Again, because you know what you're committing to ahead of time, you don't need all those signatures flying around, so you can do non-interactive channel opens and things like that.

CTV signet

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019925.html

There is now a signet for CTV that behaves in a more predictable manner. If you want to play around with it, you can test it on this signet.

pathcoin

This is probably not plausible... it's like emulating an Opendime: you could transfer a utxo without doing any sends. The way you would do something like that is by invalidating signatures. It's kind of cool. The proposal says it's probably not a good idea, but it's a way to spark a conversation.

I think there's a part in here about fidelity bonds? There's a fidelity bond construction in here. Fidelity bonds, I think, are used in JoinMarket today: you provably lock up some coins for a time, which should make you more honest because you have coins at stake. I think the addition here, thanks to covenants, is that you can lock up some coins and encumber their future spending conditions, saying the coins will be paid out to specific entities if you misbehave.

There are some problems with the penalty mechanisms, and you don't want to rely on those for penalties, which is why they introduced fidelity bonds here, I think. With covenants, you commit to these future paths, which forces you to behave, and you literally get nothing from it if you don't.

Probabilistic channel scoring

https://github.com/lightningdevkit/rust-lightning/pull/1227

My understanding is that you don't actually know the bounds of a channel, but once you send through it you have a better idea; you can reason about how big the channel balance is. This is supposed to improve routing. That's a good summary. For people who are unfamiliar, taking a step back: when you want to pay someone on the lightning network, you need to find a path to them through the network graph. The problem is, which nodes do you pick along the way? You can take the shortest path, the path with the least fees, or something where you score nodes so you can figure out which are the most reliable. This approach is one where, based on past experience, you essentially learn what the liquidity balance of a channel is and come up with a success probability for making a particular payment through it. As payments succeed or fail, you update those probabilities, and you keep an upper and a lower bound for each channel. If a payment succeeds through a channel, you know it had at least that much liquidity (and the balance has now shifted); if it fails, it depends on where in the path it failed, and you can adjust the probabilities for nodes upstream or downstream of that failure accordingly. That's how it works; happy to explain more about this offline.
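Here's a toy version of that bounds-based bookkeeping, assuming liquidity is uniformly distributed between the current bounds. Real scorers, including the one in rust-lightning, add priors, time decay, and fee-aware penalties; this is only meant to show the mechanics.

```python
# Toy liquidity-bounds scorer for a single channel direction.

class ChannelEstimate:
    def __init__(self, capacity_sat: int):
        self.lower = 0               # liquidity is known to be at least this much
        self.upper = capacity_sat    # ...and at most this much

    def success_probability(self, amount_sat: int) -> float:
        """Assume liquidity is uniform between the current bounds."""
        if amount_sat <= self.lower:
            return 1.0
        if amount_sat >= self.upper:
            return 0.0
        return (self.upper - amount_sat) / (self.upper - self.lower)

    def on_success(self, amount_sat: int):
        # The hop forwarded the amount, so it had at least that much;
        # afterwards the balance in this direction has shifted down.
        self.lower = max(0, self.lower - amount_sat)
        self.upper = max(self.lower, self.upper - amount_sat)

    def on_failure(self, amount_sat: int):
        # The hop could not forward the amount: its liquidity is below that.
        self.upper = min(self.upper, amount_sat)
        self.lower = min(self.lower, self.upper)
```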

There's a Twitter thread where they are talking about latency in payment channels and how it should be factored into channel scoring. I think roasbeef had an email explaining this a while back. There was a lightning spec meeting in Zurich with a note about the latency thing that I thought was really cool. Nobody does this today; roasbeef points out this could be a centralization vector: if you optimize for latency, you end up with a bunch of lightning nodes in the same data center, which is probably not ideal. The earlier email talked about adding a new cost function to payments, where the longer you're being asked to hold on to an HTLC, the higher (or more dynamic) the fee would be. The motivation behind this was to help solve the channel jamming stuff that Joost has brought up. Channel jamming is where, theoretically, you can send a payment that routes through a node 20 times through 20 different channels and just sits there, freezing up liquidity for a certain amount of time so it can't be used. It's a really asymmetric attack because you can lock up a lot of liquidity without a lot of funds, but if the fee were calculated another way, that could solve the problem. Not deployed; still very much in the R&D phase.

BOLT12

https://github.com/lightningnetwork/lnd/issues/5594#issuecomment-1042314431

When BOLT12? In lightning today, the invoice format is described in BOLT11. BOLT stands for Basis of Lightning Technology; it's a spec that defines everything that goes into an invoice. BOLT11 was specced before anyone used lightning in the wild, and it has gotten us far, but there are definitely things that need to be improved. Rusty wrote BOLT11 and now he has written BOLT12. It's in theory an upgrade to the invoice format, but it bundles a whole bunch of additional things; the outcome people like is a static invoice (offers).

It's a big change. If you scroll down, roasbeef lays out a lot of the things included in this. BOLT12 is not only a new invoice format... what's the steelman argument for BOLT12? There's a blinded path scheme, which is nice so that you don't dox yourself with an invoice, and also recurring payments. I think those are the things... not just recurring payments, just the static invoice. Push payments without using the keysend hack. Push payments to a static invoice, but you still need to go and retrieve secrets from... we don't need to talk about that right now. This is the first time that Lightning Labs has taken a position on it; this is a really big change with a lot of stuff in there. Not only does it basically hard-fork the invoice format, creating a duplicate one that all the wallets would need to integrate, but it also adds a new blinded path scheme which hasn't been finished, introduces free onion messaging which might be a DoS vector, and an lnurl alternative. It's a really large initiative that should probably be broken up into iteratively small changes. Roasbeef is not against it; he's just saying it's a lot of work, each individual piece on its own. He then talks about the current priorities of lnd as a team: they have salaries to pay and everything they're focusing on... this goes back to the bounty topic. Working in open source is really hard, it's really hard to justify, a lot of people are involved, and we don't live in a perfect world. Roasbeef is saying, let's think about this practically; similar to TXHASH, in a perfect world maybe, but where do we live? What do we have, what do we want, and what do we need? This is a protocol discussion about how to move forward. The value of lightning is interoperability; we all want to upgrade, but how do we do this practically? It will always be an interesting conversation. If people aren't upset, then we have a centralized network built by a single company.

Lisa had a conversation about this on Twitter... can you describe that tweet thread? Lisa was talking about her accounting work; she made an update to the BOLTs about how to do accounting, and she was saying how nice it is that she can make PRs to the BOLTs without doing a full BIP process, compared to doing a proposal for bitcoin. The downside is that she had to know (and she does, because she's very experienced with it) exactly where in the spec to make the change and what not to affect. There's a large barrier to entry to that style of protocol development, but there are benefits in that you don't have to go through the heavier process. It's just an interesting thing: because lightning is not a consensus protocol and different people can do different things across the network, we can have more experimentation, but at the same time the value of lightning is interoperability, so we want to maintain some kind of consensus. In lightning it's not always true that you need everyone to upgrade at the same time. Another interesting aspect of roasbeef's point is that there are a lot of things we're already simulating at the application layer; thinking about priorities, yeah, it would be nice to fix them properly, but you can already do it at the application layer anyway. Any thoughts on that? Any BOLT12 fans in the audience?

There were some people dumping on Lightning Labs about BOLT12... I think onion messages are a spam vector, and I haven't heard why they aren't. Well, probing today is kind of the same thing: you can send a payment that you know will fail, it will route through the network, and you don't have to pay any fees for it. There's still a cost, in that you have to encumber some funds or something...

You can make people pay for BOLT12 messages, and you can rate-limit them by making people route through your node. One way is that they purchase an authentication token from your node, and then when they send a message they have to include that token so you let their messages through. With this, it wouldn't be a DoS vector. Couldn't this be done with LSATs right now? Lightning service authentication tokens... they're macaroons that tie authentication to your lightning node so you can prove something was paid for by a specific lightning node. And with the TLV stuff, you can already encode messages in payments. So we're kind of already doing that, and if we wanted to, we could do it without the extra work in BOLT12. Maybe we shouldn't try to be Tor... maybe people disagree with that, but that's what the proposal is starting to approximate. That's my understanding.
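A hedged sketch of the rate-limiting idea floated here: a node only forwards messages that carry a token it previously sold (for example over a paid invoice). All names and the storage scheme below are made up; LSATs/macaroons would be one real-world way to implement such a token.

```python
# Sketch of "only forward paid-for messages"; illustrative, not a real protocol.
import hmac, hashlib, os

NODE_SECRET = os.urandom(32)          # per-node secret used to mint tokens

def mint_token(payment_hash: bytes) -> bytes:
    """Issued to a peer once they have paid; binds the token to that payment."""
    return hmac.new(NODE_SECRET, payment_hash, hashlib.sha256).digest()

def should_forward(onion_message: dict, spent_tokens: set) -> bool:
    token = onion_message.get("token")
    payment_hash = onion_message.get("payment_hash")
    if token is None or payment_hash is None:
        return False                                       # drop unpaid traffic
    if token in spent_tokens:
        return False                                       # crude replay / rate limit
    if not hmac.compare_digest(token, mint_token(payment_hash)):
        return False                                       # not a token we issued
    spent_tokens.add(token)
    return True
```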

Replace-by-fee

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html

We have like 10 minutes left; we can keep talking about this, but I want to move on to RBF. If the network looks like this in 10 years, then you could argue that bitcoin has failed: we need to be paying transaction fees, and the base layer needs to have utility. We need to figure out how to deal with this stuff. There were two debates on the mailing list recently that were super interesting. This is back to bitcoin layer 1, but it implicates lightning, because second-layer protocols rely on previously signed transactions, and paying for fees on those is really difficult...

Basically, gloria made a good thread, and I think jamesob had a good thread as well, just outlining: what do we want in an RBF policy, what are the requirements, what are the features we want, and what tradeoffs are we willing to make, before deciding we need this solution or that. Who knows what RBF is? Well, what's the actual goal here? RBF is replace-by-fee. You start with a transaction at 100 sats/byte, which might be too low, so you bump it to, say, 200 sats/byte; you can replace transactions this way. Who has had a transaction stuck in the mempool for more than two days? RBF is a way to replace that transaction. There's a lot of policy attached to whether something can be replaced or not.

Transaction replacement was actually in bitcoin when it was originally released, but it was disabled; RBF was later reintroduced as mempool policy. It's not consensus layer. Technically we could already replace transactions if we wanted to. You could have a lower-fee transaction out there that doesn't actually... I would recommend going to austinbitdevs.com and clicking the link to read it. The gist is better, actually; Gloria highlights what the current rules in Bitcoin Core are. There were a lot of problems brought up when RBF was being introduced again. The mempool is the set of transactions waiting to get mined, and nodes check that something is consensus-valid before relaying it to the rest of the network, but there are other policy questions, like: if you already saw a related transaction that spent the same inputs, are you going to relay another transaction spending those same utxos? It's not even a question of whether it can get mined. Well, what if it's a valid transaction and you just want to pay more fees? So we set up these policies that say nodes should be okay with re-relaying transactions that spend the same inputs, as long as they pay a higher fee and the originals haven't already been mined. There's a reason why lightning runs into this problem: when a transaction was previously signed and isn't using RBF, you run into fee issues; or pinning attacks, where, based on the currently agreed RBF policies, you can do something... I think it's in the list here in the motivation. Gloria's post describes, TL;DR, that if your transaction is... there are a bunch of problems and ways that pinning attacks can happen. Basically, a malicious actor in your channel can pin the transaction they prefer in the mempool and not let you replace it to rectify the problem. This is why two of the lightning implementations are opting to use anchor outputs.
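For reference, here's a simplified paraphrase of the BIP 125-style replacement checks Gloria's post walks through. Bitcoin Core's actual policy also reasons about descendants, new unconfirmed inputs, and the full conflict set; this only captures the core shape of "can transaction B replace transaction A?" and the constants are placeholders.

```python
# Simplified paraphrase of BIP 125-style RBF checks; not Bitcoin Core's real code.

MIN_RELAY_FEERATE = 1.0  # sat/vB, placeholder value

def can_replace(old_txs, new_tx):
    """old_txs: transactions to be evicted; new_tx: the proposed replacement."""
    # 1. The transactions being replaced must have signaled replaceability.
    if not all(tx["signals_rbf"] for tx in old_txs):
        return False
    # 2. The replacement must pay at least as much absolute fee as everything it evicts.
    old_total_fee = sum(tx["fee"] for tx in old_txs)
    if new_tx["fee"] < old_total_fee:
        return False
    # 3. ...plus enough extra to pay for its own relay bandwidth.
    if new_tx["fee"] - old_total_fee < MIN_RELAY_FEERATE * new_tx["vsize"]:
        return False
    # 4. It should also be a better deal per vbyte than what it replaces.
    if new_tx["fee"] / new_tx["vsize"] <= max(tx["fee"] / tx["vsize"] for tx in old_txs):
        return False
    # 5. Don't allow evicting an unbounded number of transactions.
    if len(old_txs) > 100:
        return False
    return True
```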

Say the mempool isn't full and there's less than a full block of transactions; now miners aren't going to be filling up the block, and the incentives are different, so the rules need to account for both situations. That's hard to reason about. If you replace a whole package of transactions... there are some RPC functions now to look at a set of transactions as a package, but relaying packages between nodes isn't as far along yet. The idea of package relay is to relay entire sets of transactions. If you're replacing one of these transactions by fee, there are situations where something that optimizes fees for miners as a package would get rejected based on the current policy rules. There's an update to the rulesets for package RBF that is different from the base one, and I believe it has been accepted and merged in. There's movement going on there. What this post is saying is that the system doesn't work as is: RBF was introduced before we had widely deployed lightning, before covenants, etc. If you commit to future shapes of transactions, then you need to be able to pay for fees on those transactions in the future somehow.

When was package relay put into Bitcoin Core? It's not fully in there. There are some RPC functions where instead of adding one transaction to your mempool you can add an entire package to your own mempool, but when your mempool relays it to someone else, they won't treat it as a package. There's a set of PRs around this. It's still useful; the eclair guys are building on top of it. For your own mempool, your node will have the correct transactions in it. It would suck if you were trying to broadcast a transaction and your own node was rejecting it because it thinks it violates policy or something.

Fee bumping

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019879.html

jamesob started a thread noting: wow, this is complex. Why are we spending so much effort throwing in hacks? The way he described it, CPFP and RBF are hacks because we're not dealing with fees in a sensible way for how we do transactions today. The foundational problem identified in all of these discussions is that right now the entity paying the fees is the same thing the fees are paying for: a transaction has to pay for its own fees. The proposition is that if you're setting a transaction in stone, for the lightning network for example, or for inheritance protocols, you don't know what the fee market is going to be in a few years; you might be locking in too high or too low a fee. jamesob gets into this: you should be able to pay separately from the thing you're paying for, so you can prepare the transaction ahead of time and say, okay, I've signed and committed to spending these utxos, and when I'm ready to broadcast it, that's when I decide how much to pay. This harkens back to transaction sponsors, where you publish another transaction that says I want to pay for this other transaction, and if it doesn't get in, then don't take my utxos to pay for it.

It helps solve the fundamental problem of committing to fees: today the thing that is trying to "get in" has to pay its own fees, whereas now you can have another transaction that pays for other transactions. It's a soft fork, and there are some privacy concerns, but it's an interesting proposal.
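A very rough sketch of the shape of the sponsors idea as discussed here. Every field name below is invented for illustration; the actual proposal specifies how a sponsoring transaction commits to the txid(s) it sponsors and the consensus rule tying their block inclusion together.

```python
# Structural sketch of transaction sponsors; names are hypothetical.

def make_sponsor_tx(sponsored_txid: bytes, my_utxo, fee_sat: int) -> dict:
    """A separate transaction that exists only to add fees for `sponsored_txid`."""
    return {
        "inputs": [my_utxo],            # the sponsor's own funds pay the fee
        "outputs": [],                  # change omitted for brevity
        "sponsors": [sponsored_txid],   # commitment to the tx being paid for
        "fee": fee_sat,
    }

def block_is_valid(block_txids: set, sponsor_tx: dict) -> bool:
    """The hypothetical rule: a sponsor is only valid in a block that also
    contains every transaction it sponsors, so the fee can't be collected
    unless the sponsored transaction actually gets mined."""
    return all(txid in block_txids for txid in sponsor_tx["sponsors"])
```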

Would the paying entity have to be online? That's an important question. One of the things this proposal introduces is that, right now, you're paying fees yourself, but this opens up the possibility that third parties could pay. If you're paying for someone else's transaction, someone has to be online to publish that sponsoring transaction. So it could be that you publish the transaction at 1 sat/byte, notice it's not getting in, come online, bump it, and then go offline. Compared with the PR eclair just put forward, where you turn on a configuration and it bumps for you by adding a chain of transactions, this might be cheaper and more compact.

Another use case: say there's an exchange trying to pay out to customers, or to much bigger organizations, not just retail customers, and you're transacting billions of dollars; the exchange would be willing to pay for that transaction. They want to pay for their customers. But right now the only way they can pay is by paying within the transaction themselves. Other companies like Unchained might also be willing to pay for transactions to settle, even though they don't have authorship control over the transaction.

Given that mining pools are typically concentrated, what about off-chain fee bumping? You can go to mining companies, pay them off-chain, pay by credit card. But we don't want that. There are proposals to do it over lightning, but hopefully that would be trustless. Maybe something with DLCs related to fee rates and mining pools. You don't have to do it trustlessly, but the whole point of doing it over lightning would be to get a certain amount of anonymity and trustlessness... you would want it to be atomic: you only want to pay if it gets in, which is what jamesob's proposal allows. Lisa from c-lightning put together a deliberately controversial proposal saying we should get rid of the mempool, and she was highlighting this exact problem. If we don't deal with these problems with the mempool, then you will have a bunch of centralized mining pools and people will send their transactions directly to one or a few of them in the hopes of getting into a block.

Trustlessness: we can't just assert it because we want it. It's a fundamental thing here: lightning is insecure if this is not trustless anymore. If we can't get transactions in, and the incentives around it can't be enforced with fees, then mining pools can screw you over. It's kind of a fundamental thing. If you have a channel open with the expectation that it will be open for a few years, and we don't solve this problem, then your channel is already broken today, because lightning's incentives are based on the penalty mechanism that kicks in when the channel is breached. If that is broken in some future where fees go up, then it's broken today already.

.... The problem with pre-signed transactions... one of the problems being solved with HTLCs and PTLCs is that you're exchanging signatures with a counterparty that might be trying to screw you over; it's not just about signing transactions with different fees. But with this one, say you have your closing transaction and it's stuck in the mempool; you could add a jamesob fee-sponsoring transaction... can you pre-sign the sponsoring transactions? You can definitely do that. We're getting cut off. Well, he can't screw you over, because you have to pre-sign everything, so don't co-sign if you don't agree. Well, the problem is that the fees might get away from you.