From: Antoine Riard
Date: Wed, 16 Mar 2022 22:02:30 -0400
To: Gloria Zhao, Bitcoin Protocol Discussion
Cc: Anthony Towns
Subject: Re: [bitcoin-dev] Improving RBF Policy

Hi Mempoololic Anonymous fellow,

> 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but only
> relay the original transaction.
If the goal of replacement staggering is to save on bandwidth, I'm not sure
it's going to be effective if you consider replacements done from a shared
utxo. E.g., Alice broadcasts a package to confirm her commitment, and relay
is staggered until T. At the same time, Bob broadcasts a package to confirm
his version of the commitment at a slightly better feerate, and relay is
staggered until T.

At T, package A gradually floods from Alice's peers and package B does the
same from Bob's peers. Where they intersect, B overrides A and starts to
replace package A in the network mempools nearest to Alice. I think those
peers won't see any bandwidth savings from adopting a replacement staggering
strategy. Or maybe you have something completely different in mind? I think
the staggering idea is worth more detail, to gauge whether it's robust
against all the replacement propagation patterns.

Though if we aim to save on replacement bandwidth, I wonder if a "diff-only"
strategy, assuming some new p2p mechanism, would be more interesting (as
discussed in the recent "Thoughts on fee bumping" thread).

> A lingering concern that I have about this idea is it would then be
> possible to impact the propagation of another person's transaction, i.e.,
> an attacker can censor somebody's transaction from ever being announced by
> a node if they send enough transactions to fill up the rate limit.
> Obviously this would be expensive since they're spending a lot on fees, but
> I imagine it could be profitable in some situations to spend a few thousand
> dollars to prevent anyone from hearing about a transaction for a few hours.
> This might be a non-issue in practice if the rate limit is generous and
> traffic isn't horrendous, but is this a problem?

I share the concern about an attacker exhausting a node's transaction relay
resources to prevent another person's transaction from propagating,
especially if the targeted transaction is an L2 time-sensitive one.
In that latter context, an attacker would aim to delay the relay of a
time-sensitive transaction (e.g., an HTLC-success) to the miners until the
timelock expires. The malicious delay period should swallow the go-to-chain
HTLC deadline ("the deadline for received HTLCs this node fulfilled" in
bolt 2 parlance), in the current example 18 blocks.

Let's say we allocate 10 MB of bandwidth per block period. Once the 10 MB
are exhausted, no more bandwidth is allocated until the next block is
issued. If the top mempool feerate is 1 sat/vb, such a naive design would
allow an attacker to buy the whole p2p network bandwidth for one period for
0.1 BTC. If an attacker aims to jam an HTLC transaction for the 18-block
period, the cost is 1.8 BTC. If the attacker is an LN counterparty to an
HTLC worth more than 1.8 BTC, the attack sounds economically profitable.

Worse, the p2p network bandwidth is a public resource while an HTLC is a
private, off-chain contract. An attacker could be counterparty to many
HTLCs, where each individual HTLC value is far inferior to the global p2p
bandwidth cost but the sum, known only to the attacker, exceeds it.
Therefore, it sounds to me that exhausting p2p network bandwidth might be
attractive if the thefts are batched.

Is the attacker scenario described credible? Are the numbers sketched out
realistic?

If yes, I think one design insight for eventual transaction relay rate
limits would be to make them "dynamic", and not naively fixed for a period.
By making them dynamic, an attacker would have to compete with the effective
feerate proposed by the victim transaction. E.g., if the HTLC-success
feerate is 10 sat/vb, an attacker would have to propose a stream of
malicious transactions of more than 10 sat/vb during the whole HTLC deadline
period for the transaction-relay jamming to be effective.

Further, the attack might be invisible from the victim's standpoint: the
malicious flow of feerate-competitive transactions can be hard to dissociate
from an honest one.
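As a quick aside, the arithmetic behind the 0.1 BTC / 1.8 BTC figures above
can be written down explicitly (a back-of-the-envelope sketch; the 10 MB
per-block allocation, the 1 sat/vb top feerate, and the 18-block deadline
are the stated assumptions, not protocol constants):

```python
# Cost for an attacker to saturate a naive, fixed per-block relay
# bandwidth allocation for the whole go-to-chain HTLC deadline.
BANDWIDTH_PER_BLOCK_VB = 10_000_000   # assumed 10 MB allocation, in vbytes
TOP_MEMPOOL_FEERATE_SAT_VB = 1        # assumed top mempool feerate
HTLC_DEADLINE_BLOCKS = 18             # deadline from the example above
SATS_PER_BTC = 100_000_000

per_block_sats = BANDWIDTH_PER_BLOCK_VB * TOP_MEMPOOL_FEERATE_SAT_VB
total_sats = per_block_sats * HTLC_DEADLINE_BLOCKS
print(per_block_sats / SATS_PER_BTC, total_sats / SATS_PER_BTC)  # 0.1 1.8
```

With a dynamic, feerate-based limit the same sketch scales linearly: if the
attacker must outbid a 10 sat/vb victim, the 18-block jam would cost 18 BTC
under the same bandwidth assumption.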
Thus, you can expect the HTLC transaction issuer to only slowly increase the
feerate at each block, and those moves to be anticipated by the attacker.
Even if the transaction issuer adopts a scorched-earth approach for the
latest blocks of the deadline, the absolute value of the HTLC burnt in fees
might still be less than the transaction relay bandwidth exhaustion paid by
the attacker, because the attack is batched by the attacker.

I'm not sure if this reasoning is correct. Though if yes, the issue sounds
really similar to the "flood&loot" attack affecting LN previously researched
in [0]. What worries me more with this "exhaust&loot" is that if we
introduce bounded transaction relay rate limits, they sound like a cheaper
public resource to buy than the mempool..

[0] https://arxiv.org/pdf/2006.08513.pdf

Anyway, I would say it's worth investigating more transaction relay rate
limiting designs and especially carefully weighing the implications for L2s.
They might have to adjust their fee-bumping and transaction rebroadcast
strategies accordingly.

> Suhas and Matt [proposed][0] adding a policy rule allowing users to specify
> descendant limits on their transactions. For example, some nth bit of
> nSequence with nVersion 3 means "this transaction won't have more than X
> vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It
> solves the pinning problem with package RBF where the attacker's package
> contains a very large and high-fee descendant.

Hey, what if the pinning transaction has a parent with a junk feerate?

Let's say you have a commitment tx for an HTLC of value 500000 sats, with a
top mempool feerate of 50 sat/vbyte. The commitment tx is pinned by a
malicious tx of size 1000 vbytes, matching the top mempool feerate. This
malicious tx has a second unconfirmed parent (in addition to the commitment)
of size MAX_STANDARD_TX_WEIGHT offering 1 sat/vb.
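The ancestor feerate in this scenario can be sanity-checked quickly (a
sketch; it assumes MAX_STANDARD_TX_WEIGHT = 400,000 weight units, i.e.
100,000 vbytes, and ignores the commitment transaction's own fee
contribution):

```python
# Ancestor feerate of the 1000-vb pinning child once the junk
# MAX_STANDARD_TX_WEIGHT parent at 1 sat/vb is attached.
JUNK_PARENT_VB = 400_000 // 4   # 400,000 WU == 100,000 vbytes

pin_fee = 1_000 * 50            # child: 1000 vb at the 50 sat/vb top feerate
parent_fee = JUNK_PARENT_VB * 1 # junk parent at 1 sat/vb

ancestor_feerate = (pin_fee + parent_fee) / (1_000 + JUNK_PARENT_VB)
print(f"{ancestor_feerate:.2f} sat/vb")  # ~1.49, far below the 50 sat/vb top
```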
I think the pinning transaction's ancestor score would be less than 2 sat/vb
and thus considered irrelevant for block template inclusion? At the same
time, as the pinning transaction itself carries a top mempool feerate, the
honest user wouldn't be able to replace it with a better-feerate proposal?
Unless adopting a scorched-earth approach, although economically I don't
think this fee-bumping strategy is safe in case of batch pinning.

It might be fixable if we add one additional requirement: "the child
transaction subject to the user-elected descendant limit must have only one
unconfirmed parent" (here the commitment transaction)? Though I'm not even
sure of the robustness of this fix. The commitment transaction itself could
be used as a junk parent to downgrade the pinning transaction's ancestor
score. E.g., using a revoked commitment transaction with
`max_accepted_htlcs` on both sides, pre-signed with a feerate of 1 sat/vb.
We might restrict the maximum number of pending HTLCs network-wide to make
the worst-case commitment transaction size reasonable, though I'm not sure
my LN colleagues are going to like the idea..

Is that reasoning correct and consistent with our ancestor-set-based mining
algorithm? Maybe more details are needed.

> Also, coming back to the idea of "we can't just use {individual, ancestor}
> feerate," I'm interested in soliciting feedback on adding a "mining score"
> calculator. I've implemented one [here][2] which takes the transaction in
> question, grabs all of the connected mempool transactions (including
> siblings, coparents, etc., as they wouldn't be in the ancestor nor
> descendant sets), and builds a "block template" using our current mining
> algorithm. The mining score of a transaction is the ancestor feerate at
> which it is included.
I don't have a strong opinion there yet, though if we make this "block
template" construction the default one, I would be really conservative to
avoid malicious child attachments on multi-party transactions downgrading
block inclusion efficiency.

Antoine

On Wed, Mar 9, 2022 at 10:37 AM Gloria Zhao via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi RBF friends,
>
> Posting a summary of RBF discussions at coredev (mostly on transaction
> relay rate-limiting), user-elected descendant limit as a short term
> solution to unblock package RBF, and mining score, all open for feedback:
>
> One big concept discussed was baking DoS protection into the p2p level
> rather than policy level. TLDR: The fees are not paid to the node operator,
> but to the miner. While we can use fees to reason about the cost of an
> attack, if we're ultimately interested in preventing resource exhaustion,
> maybe we want to "stop the bleeding" when it happens and bound the amount
> of resources used in general. There were two main ideas:
>
> 1. Transaction relay rate limiting (i.e. the one you proposed above or
> some variation) with a feerate-based priority queue
> 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but only
> relay the original transaction.
>
> Looking to solicit feedback on these ideas and the concept in general. Is
> it a good idea (separate from RBF) to add rate-limiting in transaction
> relay? And is it the right direction to think about RBF DoS protection this
> way?
>
> A lingering concern that I have about this idea is it would then be
> possible to impact the propagation of another person's transaction, i.e.,
> an attacker can censor somebody's transaction from ever being announced by
> a node if they send enough transactions to fill up the rate limit.
> Obviously this would be expensive since they're spending a lot on fees, but
> I imagine it could be profitable in some situations to spend a few thousand
> dollars to prevent anyone from hearing about a transaction for a few hours.
> This might be a non-issue in practice if the rate limit is generous and
> traffic isn't horrendous, but is this a problem?
>
> And if we don't require an increase in (i.e. addition of "new") absolute
> fees, users are essentially allowed to "recycle" fees. In the scenario
> where we prioritize relay based on feerate, users could potentially be
> placed higher in the queue, ahead of other users' transactions, multiple
> times, without ever adding more fees to the transaction. Again, maybe this
> isn't a huge deal in practice if we set the parameters right, but it seems…
> not great, in principle.
>
> ---------
>
> It's probably also a good idea to point out that there's been some
> discussion happening on the gist containing my original post on this thread
> (https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff).
>
> Suhas and Matt [proposed][0] adding a policy rule allowing users to
> specify descendant limits on their transactions. For example, some nth bit
> of nSequence with nVersion 3 means "this transaction won't have more than X
> vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It
> solves the pinning problem with package RBF where the attacker's package
> contains a very large and high-fee descendant.
>
> We could add this policy and deploy it with package RBF/package relay so
> that LN can use it by setting the user-elected descendant limit flag on
> commitment transactions. (Otherwise package RBF is blocked until we find a
> more comprehensive solution to the pinning attack).
>
> It's simple to [implement][1] as a mempool policy, but adds some
> complexity for wallets that use it, since it limits their use of UTXOs from
> transactions with this bit set.
>
> ---------
>
> Also, coming back to the idea of "we can't just use {individual, ancestor}
> feerate," I'm interested in soliciting feedback on adding a "mining score"
> calculator. I've implemented one [here][2] which takes the transaction in
> question, grabs all of the connected mempool transactions (including
> siblings, coparents, etc., as they wouldn't be in the ancestor nor
> descendant sets), and builds a "block template" using our current mining
> algorithm. The mining score of a transaction is the ancestor feerate at
> which it is included.
>
> This would be helpful for something like ancestor-aware funding and
> fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited
> priority queue for transaction relay, we'd want to use something like this
> as the priority value. And for RBF, we probably want to require that a
> replacement have a higher mining score than the original transactions. This
> could be computationally expensive to do all the time; it could be good to
> cache it but that could make mempool bookkeeping more complicated. Also, if
> we end up trying to switch to a candidate set-based algorithm for mining,
> we'd of course need a new calculator.
>
> [0]:
> https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4058140#gistcomment-4058140
> [1]: https://github.com/glozow/bitcoin/tree/2022-02-user-desclimit
> [2]: https://github.com/glozow/bitcoin/tree/2022-02-mining-score
> [3]: https://github.com/bitcoin/bitcoin/issues/9645
> [4]: https://github.com/bitcoin/bitcoin/issues/15553
>
> Best,
> Gloria
>
> On Tue, Feb 8, 2022 at 4:58 AM Anthony Towns wrote:
>
>> On Mon, Feb 07, 2022 at 11:16:26AM +0000, Gloria Zhao wrote:
>> > @aj:
>> > > I wonder sometimes if it could be sufficient to just have a relay rate
>> > > limit and prioritise by ancestor feerate though. Maybe something like:
>> > > - instead of adding txs to each peer's setInventoryTxToSend immediately,
>> > >   set a mempool flag "relayed=false"
>> > > - on a time delay, add the top N (by fee rate) "relayed=false" txs to
>> > >   each peer's setInventoryTxToSend and mark them as "relayed=true";
>> > >   calculate how much kB those txs were, and do this again after
>> > >   SIZE/RATELIMIT seconds
>> > > - don't include "relayed=false" txs when building blocks?
>>
>> The "?" was me not being sure that point is a good suggestion...
>>
>> Miners might reasonably decide to have no rate limit, and always relay,
>> and never exclude txs -- but the question then becomes is whether they
>> hear about the tx at all, so rate limiting behaviour could still be a
>> potential problem for whoever made the tx.
>>
>> > Wow cool! I think outbound tx relay size-based rate-limiting and
>> > prioritizing tx relay by feerate are great ideas for preventing spammers
>> > from wasting bandwidth network-wide. I agree, this would slow the low
>> > feerate spam down, preventing a huge network-wide bandwidth spike. And it
>> > would allow high feerate transactions to propagate as they should,
>> > regardless of how busy traffic is.
>> > Combined with inbound tx request rate-limiting, might this be
>> > sufficient to prevent DoS regardless of the fee-based replacement
>> > policies?
>>
>> I think you only want to do outbound rate limits, ie, how often you send
>> INV, GETDATA and TX messages? Once you receive any of those, I think
>> you have to immediately process / ignore it, you can't really sensibly
>> defer it (beyond the existing queues we have that just build up while
>> we're busy processing other things first)?
>>
>> > One point that I'm not 100% clear on: is it ok to prioritize the
>> > transactions by ancestor feerate in this scheme? As I described in the
>> > original post, this can be quite different from the actual feerate we would
>> > consider a transaction in a block for. The transaction could have a high
>> > feerate sibling bumping its ancestor.
>> > For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If
>> > we just received C, it would be incorrect to give it a priority equal to
>> > its ancestor feerate (3sat/vB) because if we constructed a block template
>> > now, B would bump A, and C's new ancestor feerate is 5sat/vB.
>> > Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
>> > also exclude C when building blocks, we're missing out on good fees.
>>
>> I think you're right that this would be ugly. It's something of a
>> special case:
>>
>> a) you really care about C getting into the next block; but
>> b) you're trusting B not being replaced by a higher fee tx that
>>    doesn't have A as a parent; and
>> c) there's a lot of txs bidding the floor of the next block up to a
>>    level in-between the ancestor fee rate of 3sat/vB and the tx fee
>>    rate of 5sat/vB
>>
>> Without (a), maybe you don't care about it getting to a miner quickly.
>> If your trust in (b) was misplaced, then your tx's effective fee rate
>> will drop and (because of (c)), you'll lose anyway.
>> And if the spam ends up outside of (c)'s range, either the rate limiting
>> won't take effect (spam's too cheap) and you'll be fine, or you'll miss
>> out on the block anyway (spam's paying more than your tx rate) and you
>> never had any hope of making it in.
>>
>> Note that we already rate limit via INVENTORY_BROADCAST_MAX /
>> *_INVENTORY_BROADCAST_INTERVAL; which gets to something like 10,500 txs
>> per 10 minutes for outbound connections. This would be a weight based
>> rate limit instead-of/in-addition-to that, I guess.
>>
>> As far as a non-ugly approach goes, I think you'd have to be smarter about
>> tracking the "effective fee rate" than the ancestor fee rate manages;
>> maybe that's something that could fall out of Murch and Clara's candidate
>> set blockbuilding ideas [0] ?
>>
>> Perhaps that same work would also make it possible to come up with
>> a better answer to "do I care that this replacement would invalidate
>> these descendants?"
>>
>> [0] https://github.com/Xekyo/blockbuilding
>>
>> > > - keep high-feerate evicted txs around for a while in case they get
>> > >   mined by someone else to improve compact block relay, a la the
>> > >   orphan pool?
>> > Replaced transactions are already added to vExtraTxnForCompact :D
>>
>> I guess I was thinking that it's just a 100 tx LRU cache, which might
>> not be good enough?
>>
>> Maybe it would be more on point to have a rate limit apply only to
>> replacement transactions?
>>
>> > For wallets, AJ's "All you need is for there to be *a* path that follows
>> > the new relay rules and gets from your node/wallet to perhaps 10% of
>> > hashpower" makes sense to me (which would be the former).
>>
>> Perhaps a corollary of that is that it's *better* to have the mempool
>> acceptance rule only consider economic incentives, and have the spam
>> prevention only be about "shall I tell my peers about this?"
>>
>> If you don't have that split; then the anti-spam rules can prevent you
>> from getting the tx in the mempool at all; whereas if you do have the
>> split, then even if the bitcoind anti-spam rules are blocking you at
>> every turn, you can still send your tx to miners by some other route,
>> and then they can add it to their mempool directly without any hassle.
>>
>> Cheers,
>> aj
>>
>> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Hi Mempoololic Anonymous fellow,

> 2. Stagg= ered broadcast of replacement transactions: within some time
> interv= al, maybe accept multiple replacements for the same prevout, but only
&g= t; relay the original transaction.

If the goal of replacement stagge= ring is to save on bandwidth, I'm not sure it's going to be effecti= ve if you consider replacement done from a shared-utxo. E.g, Alice broadcas= ts a package to confirm her commitment, relay is staggered until T. At the = same time, Bob broadcasts a package to confirm his version of the commitmen= t at a slightly better feerate, relay is staggered until T.

At T, pa= ckage A gradually floods from Alice's peers and package B does the same= from Bob's peers. When there is an intersection. B overrides A and sta= rts to replace package A in the network mempools nearest to Alice. I think = those peers won't have bandwidth saving from adopting a replacement sta= ggering strategy.

Or maybe that's something completely different= if you have in mind ? I think it's worth more staggering detail to gue= ss if it's robust against all the replacement propagations patterns.
Though if we aim to save on replacement bandwidth I wonder if a "= diff-only" strategy, assuming some new p2p mechanism, would be more in= teresting (as discussed in the recent "Thoughts on fee bumping thread&= quot;).

> A lingering concern that I have about this idea is it w= ould then be
> possible to impact the propagation of another person= =E2=80=99s transaction, i.e.,
> an attacker can censor somebody=E2=80= =99s transaction from ever being announced by
> a node if they send e= nough transactions to fill up the rate limit.
> Obviously this would = be expensive since they're spending a lot on fees, but
> I imagin= e it could be profitable in some situations to spend a few thousand
>= dollars to prevent anyone from hearing about a transaction for a few hours= .
> This might be a non-issue in practice if the rate limit is genero= us and
> traffic isn=E2=80=99t horrendous, but is this a problem?
=
I think I share the concern too about an attacker exhausting a node tra= nsaction relay ressources to prevent another person's transaction to pr= opagate, especially if the transaction targeted is a L2's time-sensitiv= e one. In that latter context, an attacker would aim to delay the relay of = a time-sensitive transaction (e.g a HTLC-success) to the miners, until the = timelock expires. The malicious delay period should swallow the go-to-chain= HTLC deadline ("the deadline for received HTLCs this node fulfilled&q= uot; in bolt 2 parlance), in that current example 18 blocks.

Let'= ;s say we allocate 10 MB of bandwidth per-block period. Once the 10 MB are = exhausted, there is no more bandwidth allocated until the next block is iss= ued. If the top mempool feerate is 1 sat/vb, such naive design would allow = an attacker to buy all the p2p network bandwidth period for 0.1 BTC. If an = attacker aims to jam a HTLC transaction for the 18 blocks period, the cost = is of 1,8 BTC. If the attacker is a LN counterparty to a HTLC worth more th= an 1.8 BTC, the attack sounds economically profitable.

Worst, the p2= p network bandwidth is a public resource while a HTLC is a private, off-cha= in contract. An attacker could be counterparty to many HTLCs, where each HT= LC individual value is far inferior to the global p2p bandwidth cost but th= e sum, only known to the attacker, is superior to. Therefore, it sounds to = me that p2p network bandwidth might be attractive if the stealing are batch= ed.

Is the attacker scenario described credible ? Are the numbers sk= etched out realistic ?

If yes, I think one design insight for eventu= al transaction relay rate limiting would be to make them "dynamic"= ;, and not naively fixed for a period. By making them dynamic, an attacker = would have to compete with the effective feerate proposed by the victim tra= nsaction. E.g, if the HTLC-success feerate is of 10 sat/vb, an attacker wou= ld have to propose a stream of malicious transaction of more than 10 sat/vb= during the whole HTLC deadline period for the transaction-relay jamming to= be effective.

Further, the attack might be invisible from the victi= m standpoint, the malicious flow of feerate competitive transactions can be= hard to dissociate from an honest one. Thus, you can expect the
HTLC tr= ansaction issuer to only slowly increase the feerate at each block, and tho= se moves to be anticipated by the attacker. Even if the transaction issuer = adopts a scorched-earth approach for the latest blocks of the deadline, the= absolute value of the HTLC burnt in fees might still be less than the tran= saction relay bandwidth exhaustion paid by the attacker because the attack = is batched by the attacker.

I'm not sure if this reasoning is co= rrect. Though if yes, the issue sounds really similar to "flood&lo= ot" attack affecting LN previously researched on [0]. What worries me = more with this "exhaust&loot" is that if we introduce bounded= transaction relay rate limiting, it sounds a cheaper public ressource to b= uy than the mempool..

[0] https://arxiv.org/pdf/2006.08513.pdf

Anyway, I would say = it's worthy to investigate more transaction relay rate limiting designs= and especially carefully weigh the implications for L2s. Those ones might = have to adjust their fee-bumping and transaction rebroadcast strategies in = consequence.

> Suhas and Matt [proposed][0] adding a policy rule = allowing users to specify
> descendant limits on their transactions. = For example, some nth bit of
> nSequence with nVersion 3 means "= this transaction won't have more than X
> vbytes of descendants&q= uot; where X =3D max(1000, vsizeof(tx)) or something. It
> solves the= pinning problem with package RBF where the attacker's package
> = contains a very large and high-fee descendant.

Hey, what if the pinn= ing transaction has a parent with a junk feerate ?

Let's say yo= u have commitment tx for a HTLC of value 500000 sats, with top mempool feer= ate of 50 sat/vbyte. The commitment tx is pinned by a malicious tx of size = 1000 vbytes, matching top mempool feerate. This malicious tx has a second u= nconfirmed parent (in addition to the commitment) of size MAX_STANDARD_TX_W= EIGHT offering a 1 sat/vb. I think the pinning transaction ancestor score w= ould be less than 2 sat/vb and thus considered irrelevant for block templat= e inclusion ? At the same time, as the pinning transaction is attached with= a top mempool feerate, the honest user wouldn't be able to replace it = with a better-feerate proposal ? Unless adopting a scorched-earth approach,= =C2=A0 although economically I don't think this fee-bumping strategy is= safe in case of batch-pinning.

It might be fixable if we make one a= dditional requirement "The child transaction subject to the user-elect= ed descendant limit must have only one unconfirmed parent" (here the c= ommitment
transaction) ? Though I'm not even sure of the robustness = of this fix. The commitment transaction itself could be used as a junk pare= nt to downgrade the pinning transaction ancestor score. E.g, using a revoke= d commitment transaction with `max_accepted_htlcs` on both sides, pre-signe= d with a feerate of 1 sat/vb. We might restrict the maximum number of pendi= ng HTLCs network-wise to make the worst commitment transaction size reasona= ble, though not sure if my LN colleagues are going to like the idea..
Is that reasoning correct and conform to our Ancestor Set Based algorithm= approach ? Maybe more details are needed.

> Also, coming back to= the idea of "we can't just use {individual, ancestor}
> fee= rate," I'm interested in soliciting feedback on adding a =E2=80=9C= mining score=E2=80=9D
> calculator. I've implemented one [here][2= ] which takes the transaction in
> question, grabs all of the connect= ed mempool transactions (including
> siblings, coparents, etc., as th= ey wouldn=E2=80=99t be in the ancestor nor
> descendant sets), and bu= ilds a =E2=80=9Cblock template=E2=80=9D using our current mining
> al= gorithm. The mining score of a transaction is the ancestor feerate at
&g= t; which it is included.

I don't have a strong opinion there yet= , though if we make this "block template" construction the defaul= t one, I would be really conservative to avoid malicious child attachment o= n multi-party transactions downgrading the block inclusion efficiency.
<= br>
Antoine

Le=C2=A0mer. 9 mars 2022 =C3=A0=C2=A010:37, Gloria Zhao= via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> a =C3=A9crit=C2=A0:
= Hi RBF friends,

Posting a summary of RBF discu= ssions at coredev (mostly on transaction relay rate-limiting), user-elected= descendant limit as a short term solution to unblock package RBF, and mini= ng score, all open for feedback:

One big concept discussed was baking DoS protection into the p2p level rather than the policy level. TL;DR: the fees are not paid to the node operator, but to the miner. While we can use fees to reason about the cost of an attack, if we're ultimately interested in preventing resource exhaustion, maybe we want to "stop the bleeding" when it happens and bound the amount of resources used in general. There were two main ideas:

1. Transaction relay rate limiting (i.e. the one you proposed above or some variation) with a feerate-based priority queue
2. Staggered broadcast of replacement transactions: within some time interval, maybe accept multiple replacements for the same prevout, but only relay the original transaction.

Looking to solicit feedback on these ideas and the concept in general. Is it a good idea (separate from RBF) to add rate-limiting in transaction relay? And is it the right direction to think about RBF DoS protection this way?

A lingering concern that I have about this idea is that it would then be possible to impact the propagation of another person's transaction, i.e., an attacker can censor somebody's transaction from ever being announced by a node if they send enough transactions to fill up the rate limit. Obviously this would be expensive since they're spending a lot on fees, but I imagine it could be profitable in some situations to spend a few thousand dollars to prevent anyone from hearing about a transaction for a few hours. This might be a non-issue in practice if the rate limit is generous and traffic isn't horrendous, but is this a problem?

And if we don't require an increase in (i.e. addition of "new") absolute fees, users are essentially allowed to "recycle" fees. In the scenario where we prioritize relay based on feerate, users could potentially be placed higher in the queue, ahead of other users' transactions, multiple times, without ever adding more fees to the transaction. Again, maybe this isn't a huge deal in practice if we set the parameters right, but it seems... not great, in principle.

---------

It's probably also a good idea to point out that there's been some discussion happening on the gist containing my original post on this thread (https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff).

Suhas and Matt [proposed][0] adding a policy rule allowing users to specify descendant limits on their transactions. For example, some nth bit of nSequence with nVersion 3 means "this transaction won't have more than X vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It solves the pinning problem with package RBF where the attacker's package contains a very large and high-fee descendant.
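A minimal sketch of what such a policy check might look like. Everything here is illustrative, not a settled spec: the signaling bit choice, the dict-based tx representation, and the function names are all made up for exposition.

```python
# Hypothetical user-elected descendant limit: a tx with nVersion 3 and a
# designated nSequence bit set caps its in-mempool descendants at
# X = max(1000, vsizeof(tx)) vbytes. The bit choice below is arbitrary.
ELECT_DESCENDANT_LIMIT_BIT = 1 << 29

def elects_limit(tx):
    """True if the tx signals the user-elected descendant limit."""
    return tx["version"] == 3 and any(
        seq & ELECT_DESCENDANT_LIMIT_BIT for seq in tx["input_sequences"])

def child_allowed(parent, existing_descendant_vbytes, child_vsize):
    """Policy check run when a new descendant of `parent` arrives:
    reject it if the signaled vbyte budget would be exceeded."""
    if not elects_limit(parent):
        return True  # no signal, normal descendant limits apply
    limit = max(1000, parent["vsize"])
    return existing_descendant_vbytes + child_vsize <= limit
```

With this shape, a commitment transaction signaling the bit leaves room for roughly one small CPFP child and nothing more, which is exactly what defeats the large-descendant pin.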

We could add this policy and deploy it with package RBF/package relay so that LN can use it by setting the user-elected descendant limit flag on commitment transactions. (Otherwise package RBF is blocked until we find a more comprehensive solution to the pinning attack.)

It's simple to [implement][1] as a mempool policy, but it adds some complexity for wallets that use it, since it limits their use of UTXOs from transactions with this bit set.

---------
Also, coming back to the idea of "we can't just use {individual, ancestor} feerate," I'm interested in soliciting feedback on adding a "mining score" calculator. I've implemented one [here][2] which takes the transaction in question, grabs all of the connected mempool transactions (including siblings, coparents, etc., as they wouldn't be in the ancestor nor descendant sets), and builds a "block template" using our current mining algorithm. The mining score of a transaction is the ancestor feerate at which it is included.
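To make the idea concrete, here is a toy version of that calculation (my own sketch, not the linked implementation): repeatedly pick the remaining package with the best ancestor feerate, as ancestor-set-based block building does, and record the feerate of the step at which the transaction of interest gets included.

```python
def mining_score(target, cluster):
    """cluster maps txid -> (fee_sats, vsize, parent_txids) for every
    mempool tx connected to `target` (ancestors, descendants, siblings).
    Returns the ancestor feerate at which `target` enters the template."""
    remaining = dict(cluster)

    def ancestor_set(txid):
        # All unconfirmed ancestors of txid that are still unselected.
        todo, anc = [txid], set()
        while todo:
            for p in remaining[todo.pop()][2]:
                if p in remaining and p not in anc:
                    anc.add(p)
                    todo.append(p)
        return anc

    while remaining:
        # Select the (tx + remaining ancestors) package with the best rate.
        best_pkg, best_rate = None, -1.0
        for txid in remaining:
            pkg = ancestor_set(txid) | {txid}
            rate = (sum(remaining[t][0] for t in pkg)
                    / sum(remaining[t][1] for t in pkg))
            if rate > best_rate:
                best_pkg, best_rate = pkg, rate
        for t in best_pkg:
            del remaining[t]
        if target in best_pkg:
            return best_rate
```

On the A/B/C example discussed later in the thread (A at 1 sat/vB with children B at 49 sat/vB and C at 5 sat/vB, all the same size), {A, B} is selected first at 25 sat/vB, and C's mining score comes out at its own 5 sat/vB rather than its 3 sat/vB ancestor feerate.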

This would be helpful for something like ancestor-aware funding and fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited priority queue for transaction relay, we'd want to use something like this as the priority value. And for RBF, we probably want to require that a replacement have a higher mining score than the original transactions. This could be computationally expensive to do all the time; it could be good to cache it, but that could make mempool bookkeeping more complicated. Also, if we end up trying to switch to a candidate set-based algorithm for mining, we'd of course need a new calculator.


Best,
Gloria

On Tue, Feb 8, 2022 at 4:58 AM Anthony Towns <aj@erisian.com.au> wrote:
On Mon, Feb 07, 2022 at 11:16:26AM +0000, Gloria Zhao wrote:
> @aj:
> > I wonder sometimes if it could be sufficient to just have a relay rate
> > limit and prioritise by ancestor feerate though. Maybe something like:
> > - instead of adding txs to each peer's setInventoryTxToSend immediately,
> >   set a mempool flag "relayed=false"
> > - on a time delay, add the top N (by fee rate) "relayed=false" txs to
> >   each peer's setInventoryTxToSend and mark them as "relayed=true";
> >   calculate how much kB those txs were, and do this again after
> >   SIZE/RATELIMIT seconds

> > - don't include "relayed=false" txs when building blocks?
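A toy sketch of one round of that scheme (the names, the single shared queue, and the RATELIMIT constant are all mine for illustration; Core's actual per-peer inventory structures look quite different):

```python
RATELIMIT = 1_000  # bytes per second of announcement budget -- made up

def relay_step(mempool, top_n=100):
    """One round of the delayed, feerate-prioritized announcement:
    announce the top-N "relayed=false" txs by feerate, flag them, and
    return (txids_to_announce, seconds_to_wait_before_next_round).
    mempool maps txid -> {"feerate": .., "vsize": .., "relayed": bool}."""
    pending = [(txid, tx) for txid, tx in mempool.items()
               if not tx["relayed"]]
    batch = sorted(pending, key=lambda kv: kv[1]["feerate"],
                   reverse=True)[:top_n]
    for _, tx in batch:
        tx["relayed"] = True  # would go into setInventoryTxToSend here
    # Wait proportionally to how many bytes we just announced (SIZE/RATELIMIT).
    size = sum(tx["vsize"] for _, tx in batch)
    return [txid for txid, _ in batch], size / RATELIMIT
```

The key property is that low-feerate spam sits in the queue while high-feerate txs keep flowing, and the wait between rounds scales with the bytes actually announced.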

The "?" was me not being sure that point is a good suggestion...
Miners might reasonably decide to have no rate limit, and always relay,
and never exclude txs -- but the question then becomes whether they
hear about the tx at all, so rate-limiting behaviour could still be a
potential problem for whoever made the tx.

> Wow cool! I think outbound tx relay size-based rate-limiting and
> prioritizing tx relay by feerate are great ideas for preventing spammers
> from wasting bandwidth network-wide. I agree, this would slow the low
> feerate spam down, preventing a huge network-wide bandwidth spike. And it
> would allow high feerate transactions to propagate as they should,
> regardless of how busy traffic is. Combined with inbound tx request
> rate-limiting, might this be sufficient to prevent DoS regardless of the
> fee-based replacement policies?

I think you only want to do outbound rate limits, ie, how often you send
INV, GETDATA and TX messages? Once you receive any of those, I think
you have to immediately process / ignore it, you can't really sensibly
defer it (beyond the existing queues we have that just build up while
we're busy processing other things first)?

> One point that I'm not 100% clear on: is it ok to prioritize the
> transactions by ancestor feerate in this scheme? As I described in the
> original post, this can be quite different from the actual feerate we would
> consider a transaction in a block for. The transaction could have a high
> feerate sibling bumping its ancestor.
> For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If
> we just received C, it would be incorrect to give it a priority equal to
> its ancestor feerate (3sat/vB) because if we constructed a block template
> now, B would bump A, and C's new ancestor feerate is 5sat/vB.
> Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
> also exclude C when building blocks, we're missing out on good fees.
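The arithmetic in that example, spelled out (the uniform 100 vB sizes are an assumption on my part; the quoted email only gives feerates, but these sizes make the stated numbers fall out exactly):

```python
# A (1 sat/vB) with two children: B (49 sat/vB) and C (5 sat/vB).
A = {"fee": 100, "vsize": 100}
B = {"fee": 4900, "vsize": 100}
C = {"fee": 500, "vsize": 100}

# C's ancestor feerate counts A, giving the 3 sat/vB in the example:
assert (A["fee"] + C["fee"]) / (A["vsize"] + C["vsize"]) == 3.0

# But a block template built now takes {A, B} first, at 25 sat/vB...
assert (A["fee"] + B["fee"]) / (A["vsize"] + B["vsize"]) == 25.0

# ...after which C stands alone at its own 5 sat/vB:
assert C["fee"] / C["vsize"] == 5.0
```

So ranking C by its 3 sat/vB ancestor feerate understates the rate at which it would actually be mined.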

I think you're right that this would be ugly. It's something of a special case:

 a) you really care about C getting into the next block; but
 b) you're trusting B not being replaced by a higher fee tx that
    doesn't have A as a parent; and
 c) there's a lot of txs bidding the floor of the next block up to a
    level in-between the ancestor fee rate of 3sat/vB and the tx fee
    rate of 5sat/vB

Without (a), maybe you don't care about it getting to a miner quickly.
If your trust in (b) was misplaced, then your tx's effective fee rate
will drop and (because of (c)), you'll lose anyway. And if the spam ends
up outside of (c)'s range, either the rate limiting won't take effect
(spam's too cheap) and you'll be fine, or you'll miss out on the block
anyway (spam's paying more than your tx rate) and you never had any hope
of making it in.

Note that we already rate limit via INVENTORY_BROADCAST_MAX /
*_INVENTORY_BROADCAST_INTERVAL, which gets to something like 10,500 txs
per 10 minutes for outbound connections. This would be a weight based
rate limit instead-of/in-addition-to that, I guess.

As far as a non-ugly approach goes, I think you'd have to be smarter about
tracking the "effective fee rate" than the ancestor fee rate manages;
maybe that's something that could fall out of Murch and Clara's candidate
set blockbuilding ideas [0]?

Perhaps that same work would also make it possible to come up with
a better answer to "do I care that this replacement would invalidate
these descendants?"

[0] https://github.com/Xekyo/blockbuilding

> > - keep high-feerate evicted txs around for a while in case they get
> >   mined by someone else to improve compact block relay, a la the
> >   orphan pool?
> Replaced transactions are already added to vExtraTxnForCompact :D

I guess I was thinking that it's just a 100 tx LRU cache, which might not be good enough?

Maybe it would be more on point to have a rate limit apply only to
replacement transactions?

> For wallets, AJ's "All you need is for there to be *a* path that follows
> the new relay rules and gets from your node/wallet to perhaps 10% of
> hashpower" makes sense to me (which would be the former).

Perhaps a corollary of that is that it's *better* to have the mempool
acceptance rule only consider economic incentives, and have the spam
prevention only be about "shall I tell my peers about this?"

If you don't have that split, then the anti-spam rules can prevent you
from getting the tx in the mempool at all; whereas if you do have the
split, then even if the bitcoind anti-spam rules are blocking you at
every turn, you can still send your tx to miners by some other route,
and then they can add it to their mempool directly without any hassle.

Cheers,
aj

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev