From: Antoine Riard
Date: Thu, 23 Sep 2021 00:29:39 -0400
To: Gloria Zhao
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool.
> If our mempool reaches capacity, we evict the lowest descendant feerate
> transactions, which are B+C in this case. This gives us the same resulting
> mempool, with A and not B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in `MempoolAccept::Finalize` with a higher feerate than B+C, while those evicted transactions are the most compelling for block construction.

I thought at first that missing this acceptance requirement would break a fee-bumping scheme like "parent-pays-for-child", where a high-fee parent is attached to a child signed with SIGHASH_ANYONECANPAY and the child fee captures the parent value. I can't think of other fee-bumping schemes potentially affected, and if they do exist I would say they're wrong in their design assumptions.

> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.

IIUC, you have package A+B; during the dedup phase early in `AcceptMultipleTransactions`, if you observe same-txid-different-wtxid A' and A' has a higher feerate than A, you trim A and replace it with A'? I think this approach is safe. The one that appears unsafe to me is when A' has a _lower_ feerate, even if A' is already accepted by our mempool; in that case, IIRC, that would be a pinning.

Good to see progress on witness replacement before we see usage of Taproot trees in the context of multi-party protocols, where a malicious counterparty inflates its witness to jam an honest spending.
(Note: the commit linked currently points nowhere :))

> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the individual check on A, the honest package should replace the pinning attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning multi-commitment-broadcast-as-one-package, as a malicious A' might have an absolute fee higher than E. It sounds uneconomical for an attacker, but I think it's not when you consider that you can "batch" the attack against multiple honest counterparties. E.g., Mallory broadcasts A' + B' + C' + D', where A' conflicts with Alice's honest package P1, B' conflicts with Bob's honest package P2, and C' conflicts with Caroll's honest package P3, while D' is a high-fee child of A' + B' + C'. If D' pays a higher fee than P1 or P2 or P3 individually, but less than the sum of the HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have time-sensitive outputs encumbering your commitment transactions.
For the reasons mentioned above, I think that's unsafe otherwise. What I'm worried about is L2 developers, potentially not aware of all the mempool subtleties, blurring the difference and always batching their broadcasts by default. IMO it's a good thing that by restraining to 1 parent + 1 child we artificially constrain the L2 design space for now and minimize the risks of unsafe usage of the package API :) I think that's a point where it would be relevant to have the opinion of more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?

Yes, there was a misunderstanding. I think this approach is correct; it's more a question of performance. Do we assume that broadcast packages are "honest" by default, i.e. that the parent(s) always need the child to pass the fee checks, thereby saving the processing of individual transactions which are expected to fail in 99% of cases or more, or do we expect ad hoc composition of packages at relay? I think this point is quite dependent on the p2p package format/logic we'll end up with, and that we should feel free to revisit it later?

> What problem are you trying to solve by the package feerate *after* dedup
> rule ?
> My understanding is that an in-package transaction might be already in the
> mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded lowering the cost of package
> RBF.

> I'm proposing that, when a transaction has already been submitted to
> mempool, we would ignore both its fees and vsize when calculating package
> feerate.
Yes, if you receive A+B and A is already in-mempool, I agree you can discard its feerate, as B should pay for all fees checked on its own. Where I'm unclear is when you have in-mempool A+B and receive A+B'. Should B' have a fee high enough to cover the bandwidth replacement penalty (`PaysForRBF`, 2nd check) of both A+B', or only B'? If you have a second layer like current Lightning, you might have a counterparty commitment to replace and should always expect to have to pay for parent replacement bandwidth. Where a potential discount sounds interesting is when you have an unequivocal state on the first stage of transactions, e.g. a DLC's funding transaction, which might be CPFP'd by any participant IIRC.

> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.

Ah, once again I think it's a misunderstanding without the code under my eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e. mark D for potential eviction and don't consider it for future conflicts in the rest of the package, I think D' `PreChecks` should be good?

> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well?

If you mean the case where we have replaceable A+B, and then A'+B' tries to replace it with a higher feerate? I think that's exactly the case we need for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/

> I'm not sure what you mean? Let's say we have a package of parent A + child
> B, where A is supposed to replace a mempool transaction A'. Are you saying
> that counterparties are able to malleate the package child B, or a child of
> A'?

The second option, a child of A'. In the LN case, I think the CPFP is attached to one's anchor output.
I think it's good if we assume the solve-conflicts-after-parent's-`PreChecks` approach mentioned above, or fixing inherited signaling, or full-RBF?

> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?

After thinking about it, the relaxation of the "new" unconfirmed input rule is not linked to trimming, but I would say more to the multi-parent support. Let's say you have A+B trying to replace C+D, where B is also spending already-in-mempool E. To succeed, you need to waive the no-new-unconfirmed-input rule, as D isn't spending E. So good, I think we agree on the problem description here.

> I am in agreement with your calculations but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?

Well, what sounds odd to me is that, in my example, we fail D even if it has a higher fee than B: A+B's absolute fees are 2000 sats while A+C+D's absolute fees are 3500 sats. Is this compatible with a model where a miner prioritizes absolute fees over ancestor score, in the case that mempools aren't full enough to fill a block?

Let me know if I can clarify a point.

Antoine

On Mon, Sep 20, 2021 at 11:10 AM, Gloria Zhao wrote:

> Hi Antoine,
>
> First of all, thank you for the thorough review. I appreciate your insight
> on LN requirements.
>
> > IIUC, you have a package A+B+C submitted for acceptance and A is already
> in your mempool. You trim out A from the package and then evaluate B+C.
>
> > I think this might be an issue if A is the higher-fee element of the ABC
> package. B+C package fees might be under the mempool min fee and will be
> rejected, potentially breaking the acceptance expectations of the package
> issuer ?
>
> Correct, if B+C is too low feerate to be accepted, we will reject it.
> I prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.
>
> > Further, I think the dedup should be done on wtxid, as you might have
> multiple valid witnesses. Though with varying vsizes and as such offering
> different feerates.
>
> I agree that variations of the same package with different witnesses is a
> case that must be handled. I consider witness replacement to be a project
> that can be done in parallel to package mempool acceptance, because being
> able to accept packages does not worsen the problem of a
> same-txid-different-witness "pinning" attack.
>
> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.
>
> See the #22290 "handle package transactions already in mempool" commit (
> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
> which handles the case of same-txid-different-witness by simply using the
> transaction in the mempool for now, with TODOs for what I just described.
>
> > I'm not clearly understanding the accepted topologies. By "parent and
> child to share a parent", do you mean the set of transactions A, B, C,
> where B is spending A and C is spending A and B would be correct ?
>
> Yes, that is what I meant.
> Yes, that would be a valid package under these rules.
>
> > If yes, is there a width-limit introduced or we fallback on
> MAX_PACKAGE_COUNT=25 ?
>
> No, there is no limit on connectivity other than "child with all
> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and the child's
> in-mempool + in-package ancestor limits.
>
> > Considering the current Core's mempool acceptance rules, I think CPFP
> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
> jamming successful on one channel commitment transaction would contaminate
> the remaining commitments sharing the same package.
>
> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
> transactions and E a shared CPFP. If a malicious A' transaction has a
> better feerate than A, the whole package acceptance will fail. Even if A'
> confirms in the following block, the propagation and confirmation of B+C+D
> have been delayed. This could carry on a loss of funds.
>
> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.
>
> > IMHO, I'm leaning towards deploying during a first phase
> 1-parent/1-child. I think it's the most conservative step still improving
> second-layer safety.
>
> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
> That being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred.
> I am very interested in hearing feedback on that approach.
>
> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
> fails. For this reason I think the individual RBF should be bypassed and
> only the package RBF apply ?
>
> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?
>
> > What problem are you trying to solve by the package feerate *after*
> dedup rule ?
>
> > My understanding is that an in-package transaction might be already in
> the mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded lowering the cost of package
> RBF.
>
> I'm proposing that, when a transaction has already been submitted to
> mempool, we would ignore both its fees and vsize when calculating package
> feerate. In example G2, we shouldn't count M1 fees after its submission to
> mempool, since M1's fees have already been used to pay for its individual
> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
> We also shouldn't count its vsize, since it has already been paid for.
>
> > I think this is a footgunish API, as if a package issuer send the
> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
> Then try to broadcast the higher-feerate C'+D' package, it should be
> rejected. So it's breaking the naive broadcaster assumption that a
> higher-feerate/higher-fee package always replaces ?
>
> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.
> Implicitly, D' would not be in conflict with D.
> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well? This can be
> implemented with a bit of additional logic.
>
> > I think this is unsafe for L2s if counterparties have malleability of
> the child transaction. They can block your package replacement by
> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
> ability.
>
> I'm not sure what you mean? Let's say we have a package of parent A +
> child B, where A is supposed to replace a mempool transaction A'. Are you
> saying that counterparties are able to malleate the package child B, or a
> child of A'? If they can malleate a child of A', that shouldn't matter as
> long as A' is signaling replacement. This would be handled identically with
> full RBF and what Core currently implements.
>
> > I think this is an issue brought by the trimming during the dedup phase.
> If we preserve the package integrity, only re-using the tx-level checks
> results of already in-mempool transactions to gain in CPU time we won't
> have this issue. Package childs can add unconfirmed inputs as long as
> they're in-package, the bip125 rule2 is only evaluated against parents ?
>
> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?
>
> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
> spending both A and C where D pays 15 sat/vb for 100 vbytes and C pays 1
> sat/vb for 1000 vbytes.
>
> > Package A + B ancestor score is 10 sat/vb.
>
> > D has a higher feerate/absolute fee than B.
>
> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
> 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>
> I am in agreement with your calculations but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10 sat/vb and D has an
> ancestor score of ~2.9 sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?
>
> Thank you for linking to projects that might be interested in package
> relay :)
>
> Thanks,
> Gloria
>
> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard wrote:
>
>> Hi Gloria,
>>
>> > A package may contain transactions that are already in the mempool. We
>> > remove ("deduplicate") those transactions from the package for the
>> > purposes of package mempool acceptance. If a package is empty after
>> > deduplication, we do nothing.
>>
>> IIUC, you have a package A+B+C submitted for acceptance and A is already
>> in your mempool. You trim out A from the package and then evaluate B+C.
>>
>> I think this might be an issue if A is the higher-fee element of the ABC
>> package. B+C package fees might be under the mempool min fee and will be
>> rejected, potentially breaking the acceptance expectations of the package
>> issuer ?
>>
>> Further, I think the dedup should be done on wtxid, as you might have
>> multiple valid witnesses. Though with varying vsizes and as such offering
>> different feerates.
>>
>> E.g you're going to evaluate the package A+B and A' is already in your
>> mempool with a bigger valid witness. You trim A based on txid, then you
>> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
>> have been a success.
>>
>> AFAICT, the dedup rationale would be to save on CPU time/IO disk, to
>> avoid repeated signature verifications and parent UTXO fetches ?
>> Can we achieve the same goal by bypassing tx-level checks for already-in
>> txn while conserving the package integrity for package-level checks ?
>>
>> > Note that it's possible for the parents to be indirect
>> > descendants/ancestors of one another, or for parent and child to share
>> > a parent, so we cannot make any other topology assumptions.
>>
>> I'm not clearly understanding the accepted topologies. By "parent and
>> child to share a parent", do you mean the set of transactions A, B, C,
>> where B is spending A and C is spending A and B would be correct ?
>>
>> If yes, is there a width-limit introduced or we fallback on
>> MAX_PACKAGE_COUNT=25 ?
>>
>> IIRC, one rationale to come with this topology limitation was to lower
>> the DoS risks when potentially deploying p2p packages.
>>
>> Considering the current Core's mempool acceptance rules, I think CPFP
>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>> jamming successful on one channel commitment transaction would
>> contaminate the remaining commitments sharing the same package.
>>
>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>> transactions and E a shared CPFP. If a malicious A' transaction has a
>> better feerate than A, the whole package acceptance will fail. Even if A'
>> confirms in the following block, the propagation and confirmation of
>> B+C+D have been delayed. This could carry on a loss of funds.
>>
>> That said, if you're broadcasting commitment transactions without
>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>> saving as you don't have to duplicate the CPFP.
>>
>> IMHO, I'm leaning towards deploying during a first phase
>> 1-parent/1-child. I think it's the most conservative step still improving
>> second-layer safety.
>>
>> > *Rationale*: It would be incorrect to use the fees of transactions
>> > that are already in the mempool, as we do not want a transaction's fees
>> > to be double-counted for both its individual RBF and package RBF.
>>
>> I'm unsure about the logical order of the checks proposed.
>>
>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>> fails. For this reason I think the individual RBF should be bypassed and
>> only the package RBF apply ?
>>
>> Note this situation is plausible, with current LN design, your
>> counterparty can have a commitment transaction with a better fee just by
>> selecting a higher `dust_limit_satoshis` than yours.
>>
>> > Examples F and G [14] show the same package, but P1 is submitted
>> > individually before the package in example G. In example F, we can see
>> > that the 300vB package pays an additional 200sat in fees, which is not
>> > enough to pay for its own bandwidth (BIP125#4). In example G, we can
>> > see that P1 pays enough to replace M1, but using P1's fees again during
>> > package submission would make it look like a 300sat increase for a
>> > 200vB package. Even including its fees and size would not be sufficient
>> > in this example, since the 300sat looks like enough for the 300vB
>> > package. The calculation after deduplication is 100sat increase for a
>> > package of size 200vB, which correctly fails BIP125#4. Assume all
>> > transactions have a size of 100vB.
>>
>> What problem are you trying to solve by the package feerate *after* dedup
>> rule ?
>>
>> My understanding is that an in-package transaction might be already in
>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>> vsize of this transaction could be discarded lowering the cost of package
>> RBF.
>>
>> If we keep a "safe" dedup mechanism (see my point above), I think this
>> discount is justified, as the validation cost of node operators is paid
>> for ?
>>
>> > The child cannot replace mempool transactions.
>>
>> Let's say you issue package A+B, then package C+B', where B' is a child
>> of both A and C. This rule fails the acceptance of C+B' ?
>>
>> I think this is a footgunish API, as if a package issuer send the
>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>> Then try to broadcast the higher-feerate C'+D' package, it should be
>> rejected. So it's breaking the naive broadcaster assumption that a
>> higher-feerate/higher-fee package always replaces ? And it might be
>> unsafe in protocols where states are symmetric. E.g a malicious
>> counterparty broadcasts first S+A, then you honestly broadcast S+B, where
>> B pays better fees.
>>
>> > All mempool transactions to be replaced must signal replaceability.
>>
>> I think this is unsafe for L2s if counterparties have malleability of the
>> child transaction. They can block your package replacement by opting-out
>> from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>>
>> I think it's better to either fix inherited signaling or move towards
>> full-rbf.
>>
>> > if a package parent has already been submitted, it would look
>> > like the child is spending a "new" unconfirmed input.
>>
>> I think this is an issue brought by the trimming during the dedup phase.
>> If we preserve the package integrity, only re-using the tx-level checks
>> results of already in-mempool transactions to gain in CPU time we won't
>> have this issue. Package childs can add unconfirmed inputs as long as
>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>
>> > However, we still achieve the same goal of requiring the replacement
>> > transactions to have an ancestor score at least as high as the original
>> > ones.
>>
>> I'm not sure if this holds...
>>
>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
>> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
>> spending both A and C where D pays 15 sat/vb for 100 vbytes and C pays 1
>> sat/vb for 1000 vbytes.
>>
>> Package A + B ancestor score is 10 sat/vb.
>>
>> D has a higher feerate/absolute fee than B.
>>
>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000
>> sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>
>> Overall, this is a review through the lenses of LN requirements. I think
>> other L2 protocols/applications could be candidates to using package
>> accept/relay such as:
>> * https://github.com/lightninglabs/pool
>> * https://github.com/discreetlogcontracts/dlcspecs
>> * https://github.com/bitcoin-teleport/teleport-transactions/
>> * https://github.com/sapio-lang/sapio
>> * https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>> * https://github.com/revault/practical-revault
>>
>> Thanks for rolling forward the ball on this subject.
>>
>> Antoine
>>
>> On Thu, Sep 16, 2021 at 03:55, Gloria Zhao via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi there,
>>>
>>> I'm writing to propose a set of mempool policy changes to enable package
>>> validation (in preparation for package relay) in Bitcoin Core. These
>>> would not be consensus or P2P protocol changes. However, since mempool
>>> policy significantly affects transaction propagation, I believe this is
>>> relevant for the mailing list.
>>>
>>> My proposal enables packages consisting of multiple parents and 1 child.
>>> If you develop software that relies on specific transaction relay
>>> assumptions and/or are interested in using package relay in the future,
>>> I'm very interested to hear your feedback on the utility or
>>> restrictiveness of these package policies for your use cases.
>>>
>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>> PR#22290][1].
>>>
>>> An illustrated version of this post can be found at
>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>> I have also linked the images below.
>>>
>>> ## Background
>>>
>>> Feel free to skip this section if you are already familiar with mempool
>>> policy and package relay terminology.
>>>
>>> ### Terminology Clarifications
>>>
>>> * Package = an ordered list of related transactions, representable by a
>>>   Directed Acyclic Graph.
>>> * Package Feerate = the total modified fees divided by the total virtual
>>>   size of all transactions in the package.
>>>   - Modified fees = a transaction's base fees + fee delta applied by the
>>>     user with `prioritisetransaction`. As such, we expect this to vary
>>>     across mempools.
>>>   - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>>>     virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>     Core][3].
>>>   - Note that feerate is not necessarily based on the base fees and
>>>     serialized size.
>>>
>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>   incentives to boost a transaction's candidacy for inclusion in a
>>>   block, including Child Pays for Parent (CPFP) and [BIP125][12]
>>>   Replace-by-Fee (RBF). Our intention in mempool policy is to recognize
>>>   when the new transaction is more economical to mine than the original
>>>   one(s) but not open DoS vectors, so there are some limitations.
>>>
>>> ### Policy
>>>
>>> The purpose of the mempool is to store the best (to be most
>>> incentive-compatible with miners, highest feerate) candidates for
>>> inclusion in a block. Miners use the mempool to build block templates.
>>> The mempool is also useful as a cache for boosting block relay and
>>> validation performance, aiding transaction relay, and generating feerate
>>> estimations.
>>>
>>> Ideally, all consensus-valid transactions paying reasonable fees should
>>> make it to miners through normal transaction relay, without any special
>>> connectivity or relationships with miners. On the other hand, nodes do
>>> not have unlimited resources, and a P2P network designed to let any
>>> honest node broadcast their transactions also exposes the transaction
>>> validation engine to DoS attacks from malicious peers.
>>>
>>> As such, for unconfirmed transactions we are considering for our
>>> mempool, we apply a set of validation rules in addition to consensus,
>>> primarily to protect us from resource exhaustion and aid our efforts to
>>> keep the highest fee transactions. We call this mempool _policy_: a set
>>> of (configurable, node-specific) rules that transactions must abide by
>>> in order to be accepted into our mempool. Transaction "standardness"
>>> rules and mempool restrictions such as "too-long-mempool-chain" are both
>>> examples of policy.
>>>
>>> ### Package Relay and Package Mempool Accept
>>>
>>> In transaction relay, we currently consider transactions one at a time
>>> for submission to the mempool. This creates a limitation in the node's
>>> ability to determine which transactions have the highest feerates, since
>>> we cannot take into account descendants (i.e. cannot use CPFP) until all
>>> the transactions are in the mempool. Similarly, we cannot use a
>>> transaction's descendants when considering it for RBF. When an
>>> individual transaction does not meet the mempool minimum feerate and the
>>> user isn't able to create a replacement transaction directly, it will
>>> not be accepted by mempools.
>>>
>>> This limitation presents a security issue for applications and users
>>> relying on time-sensitive transactions.
>>> For example, Lightning and other protocols create UTXOs with multiple
>>> spending paths, where one counterparty's spending path opens up after a
>>> timelock, and users are protected from cheating scenarios as long as
>>> they redeem on-chain in time. A key security assumption is that all
>>> parties' transactions will propagate and confirm in a timely manner.
>>> This assumption can be broken if fee-bumping does not work as intended.
>>>
>>> The end goal for Package Relay is to consider multiple transactions at
>>> the same time, e.g. a transaction with its high-fee child. This may help
>>> us better determine whether transactions should be accepted to our
>>> mempool, especially if they don't meet fee requirements individually or
>>> are better RBF candidates as a package. A combination of changes to
>>> mempool validation logic, policy, and transaction relay allows us to
>>> better propagate the transactions with the highest package feerates to
>>> miners, and makes fee-bumping tools more powerful for users.
>>>
>>> The "relay" part of Package Relay suggests P2P messaging changes, but a
>>> large part of the changes are in the mempool's package validation logic.
>>> We call this *Package Mempool Accept*.
>>>
>>> ### Previous Work
>>>
>>> * Given that mempool validation is DoS-sensitive and complex, it would
>>> be dangerous to haphazardly tack on package validation logic. Many
>>> efforts have been made to make mempool validation less opaque (see
>>> [#16400][4], [#21062][5], [#22675][6], [#22796][7]).
>>> * [#20833][8] added basic capabilities for package validation, test
>>> accepts only (no submission to mempool).
>>> * [#21800][9] implemented package ancestor/descendant limit checks for
>>> arbitrary packages. Still test accepts only.
>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>
>>> ### Existing Package Rules
>>>
>>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>>> consider them as "given" in the rest of this document, though they can
>>> be changed, since package validation is test-accept only right now.
>>>
>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> `MAX_PACKAGE_SIZE=101KvB` total size. [8]
>>>
>>> *Rationale*: This is already enforced as mempool ancestor/descendant
>>> limits. Presumably, transactions in a package are all related, so
>>> exceeding this limit would mean that the package can either be split up
>>> or it wouldn't pass this mempool policy.
>>>
>>> 2. Packages must be topologically sorted: if any dependencies exist
>>> between transactions, parents must appear somewhere before children. [8]
>>>
>>> 3. A package cannot have conflicting transactions, i.e. none of them can
>>> spend the same inputs. This also means there cannot be duplicate
>>> transactions. [8]
>>>
>>> 4. When packages are evaluated against ancestor/descendant limits in a
>>> test accept, the union of all of their descendants and ancestors is
>>> considered. This is essentially a "worst case" heuristic where every
>>> transaction in the package is treated as each other's ancestor and
>>> descendant. [8]
>>> Packages for which ancestor/descendant limits are accurately captured by
>>> this heuristic: [19]
>>>
>>> There are also limitations such as the fact that CPFP carve out is not
>>> applied to package transactions. #20833 also disables RBF in package
>>> validation; this proposal overrides that to allow packages to use RBF.
>>>
>>> ## Proposed Changes
>>>
>>> The next step in the Package Mempool Accept project is to implement
>>> submission to mempool, initially through RPC only. This allows us to
>>> test the submission logic before exposing it on P2P.
>>>
>>> ### Summary
>>>
>>> - Packages may contain already-in-mempool transactions.
>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>> - Fee-related checks use the package feerate. This means that wallets
>>> can create a package that utilizes CPFP.
>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>> similar to BIP125. This enables a combination of CPFP and RBF, where a
>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>
>>> There is a draft implementation in [#22290][1]. It is WIP, but feedback
>>> is always welcome.
>>>
>>> ### Details
>>>
>>> #### Packages May Contain Already-in-Mempool Transactions
>>>
>>> A package may contain transactions that are already in the mempool. We
>>> remove ("deduplicate") those transactions from the package for the
>>> purposes of package mempool acceptance. If a package is empty after
>>> deduplication, we do nothing.
>>>
>>> *Rationale*: Mempools vary across the network. It's possible for a
>>> parent to be accepted to the mempool of a peer on its own due to
>>> differences in policy and fee market fluctuations. We should not reject
>>> or penalize the entire package for an individual transaction, as that
>>> could be a censorship vector.
>>>
>>> #### Packages Are Multi-Parent-1-Child
>>>
>>> Only packages of a specific topology are permitted. Namely, a package is
>>> exactly 1 child with all of its unconfirmed parents. After
>>> deduplication, the package may be exactly the same, empty, 1 child, 1
>>> child with just some of its unconfirmed parents, etc. Note that it's
>>> possible for the parents to be indirect descendants/ancestors of one
>>> another, or for parent and child to share a parent, so we cannot make
>>> any other topology assumptions.
>>>
>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>> parents makes it possible to fee-bump a batch of transactions.
>>> Restricting packages to a defined topology is also easier to reason
>>> about and simplifies the validation logic greatly. Multi-parent-1-child
>>> allows us to think of the package as one big transaction, where:
>>>
>>> - Inputs = all the inputs of parents + inputs of the child that come
>>> from confirmed UTXOs
>>> - Outputs = all the outputs of the child + all outputs of the parents
>>> that aren't spent by other transactions in the package
>>>
>>> Examples of packages that follow this rule (variations of example A show
>>> some possibilities after deduplication): ![image][15]
>>>
>>> #### Fee-Related Checks Use Package Feerate
>>>
>>> Package Feerate = the total modified fees divided by the total virtual
>>> size of all transactions in the package.
>>>
>>> To meet the two feerate requirements of a mempool, i.e., the
>>> pre-configured minimum relay feerate (`minRelayTxFee`) and the dynamic
>>> mempool minimum feerate, the total package feerate is used instead of
>>> the individual feerate. The individual transactions are allowed to be
>>> below the feerate requirements if the package meets the feerate
>>> requirements. For example, the parent(s) in the package can have 0 fees
>>> but be paid for by the child.
>>>
>>> *Rationale*: This can be thought of as "CPFP within a package," solving
>>> the issue of a parent not meeting minimum fees on its own. This allows
>>> L2 applications to adjust their fees at broadcast time instead of
>>> overshooting or risking getting stuck/pinned.
>>>
>>> We use the package feerate of the package *after deduplication*.
>>>
>>> *Rationale*: It would be incorrect to use the fees of transactions that
>>> are already in the mempool, as we do not want a transaction's fees to be
>>> double-counted for both its individual RBF and package RBF.
>>>
>>> Examples F and G [14] show the same package, but P1 is submitted
>>> individually before the package in example G.
>>> Assume all transactions have a size of 100vB. In example F, we can see
>>> that the 300vB package pays an additional 200sat in fees, which is not
>>> enough to pay for its own bandwidth (BIP125#4). In example G, we can see
>>> that P1 pays enough to replace M1, but using P1's fees again during
>>> package submission would make it look like a 300sat increase for a 200vB
>>> package. Even including its fees and size would not be sufficient in
>>> this example, since the 300sat looks like enough for the 300vB package.
>>> The calculation after deduplication is a 100sat increase for a package
>>> of size 200vB, which correctly fails BIP125#4.
>>>
>>> #### Package RBF
>>>
>>> If a package meets feerate requirements as a package, the parents in the
>>> transaction are allowed to replace-by-fee mempool transactions. The
>>> child cannot replace mempool transactions. Multiple transactions can
>>> replace the same transaction, but in order to be valid, none of the
>>> transactions can try to replace an ancestor of another transaction in
>>> the same package (which would thus make its inputs unavailable).
>>>
>>> *Rationale*: Even if we are using package feerate, a package will not
>>> propagate as intended if RBF still requires each individual transaction
>>> to meet the feerate requirements.
>>>
>>> We use a set of rules slightly modified from BIP125 as follows:
>>>
>>> ##### Signaling (Rule #1)
>>>
>>> All mempool transactions to be replaced must signal replaceability.
>>>
>>> *Rationale*: Replacement signaling logic should be the same for package
>>> RBF and single transaction acceptance. This would be updated if single
>>> transaction validation moves to full RBF.
>>>
>>> ##### New Unconfirmed Inputs (Rule #2)
>>>
>>> A package may include new unconfirmed inputs, but the ancestor feerate
>>> of the child must be at least as high as the ancestor feerates of every
>>> transaction being replaced.
>>> This is contrary to BIP125#2, which states "The replacement transaction
>>> may only include an unconfirmed input if that input was included in one
>>> of the original transactions. (An unconfirmed input spends an output
>>> from a currently-unconfirmed transaction.)"
>>>
>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>> transaction has a higher ancestor score than the original
>>> transaction(s) (see [comment][13]). Example H [16] shows how adding a
>>> new unconfirmed input can lower the ancestor score of the replacement
>>> transaction. P1 is trying to replace M1, and spends an unconfirmed
>>> output of M2. P1 pays 800sat, M1 pays 600sat, and M2 pays 100sat.
>>> Assume all transactions have a size of 100vB. While, in isolation, P1
>>> looks like a better mining candidate than M1, it must be mined with M2,
>>> so its ancestor feerate is actually 4.5sat/vB. This is lower than M1's
>>> ancestor feerate, which is 6sat/vB.
>>>
>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>> transactions in the package can spend new unconfirmed inputs." Example J
>>> [17] shows why, if any of the package transactions have ancestors,
>>> package feerate is no longer accurate. Even though M2 and M3 are not
>>> ancestors of P1 (which is the replacement transaction in an RBF), we're
>>> actually interested in the entire package. A miner should mine M1, which
>>> is 5sat/vB, instead of M2, M3, P1, P2, and P3, which together are only
>>> 4sat/vB. The Package RBF rule cannot be loosened to only allow the child
>>> to have new unconfirmed inputs, either, because it can still cause us to
>>> overestimate the package's ancestor score.
>>>
>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>> Package RBF less useful, but would also break Package RBF for packages
>>> with parents already in the mempool: if a package parent has already
>>> been submitted, it would look like the child is spending a "new"
>>> unconfirmed input. In example K [18], we're looking to replace M1 with
>>> the entire package including P1, P2, and P3. We must consider the case
>>> where one of the parents is already in the mempool (in this case, P2),
>>> which means we must allow P3 to have new unconfirmed inputs. However,
>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>> replace M1 with this package.
>>>
>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>> strict than BIP125#2. However, we still achieve the same goal of
>>> requiring the replacement transactions to have an ancestor score at
>>> least as high as the original ones. As a result, the entire package is
>>> required to be a higher feerate mining candidate than each of the
>>> replaced transactions.
>>>
>>> Another note: the [comment][13] above the BIP125#2 code in the original
>>> RBF implementation suggests that the rule was intended to be temporary.
>>>
>>> ##### Absolute Fee (Rule #3)
>>>
>>> The package must increase the absolute fee of the mempool, i.e. the
>>> total fees of the package must be higher than the absolute fees of the
>>> mempool transactions it replaces. Combined with the CPFP rule above,
>>> this differs from BIP125 Rule #3 - an individual transaction in the
>>> package may have lower fees than the transaction(s) it is replacing. In
>>> fact, it may have 0 fees, and the child pays for RBF.
>>>
>>> ##### Feerate (Rule #4)
>>>
>>> The package must pay for its own bandwidth; the package feerate must be
>>> higher than the replaced transactions by at least the minimum relay
>>> feerate (`incrementalRelayFee`).
>>> Combined with the CPFP rule above, this differs from BIP125 Rule #4 -
>>> an individual transaction in the package can have a lower feerate than
>>> the transaction(s) it is replacing. In fact, it may have 0 fees, and the
>>> child pays for RBF.
>>>
>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>
>>> The package cannot replace more than 100 mempool transactions. This is
>>> identical to BIP125 Rule #5.
>>>
>>> ### Expected FAQs
>>>
>>> 1. Is it possible for only some of the package to make it into the
>>> mempool?
>>>
>>> Yes, it is. However, since we evict transactions from the mempool by
>>> descendant score and the package child is supposed to be sponsoring the
>>> fees of its parents, the most common scenario would be all-or-nothing.
>>> This is incentive-compatible. In fact, to be conservative, package
>>> validation should begin by trying to submit all of the transactions
>>> individually, and only use the package mempool acceptance logic if the
>>> parents fail due to low feerate.
>>>
>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>
>>> No, for practical reasons. In mempool validation, we actually aren't
>>> able to tell with 100% confidence if we are looking at a transaction
>>> that has already confirmed, because we look up inputs using a UTXO set.
>>> If we have historical block data, it's possible to look for it, but
>>> this is inefficient, not always possible for pruned nodes, and
>>> unnecessary because we're not going to do anything with the transaction
>>> anyway. As such, we already have the expectation that transaction relay
>>> is somewhat "stateful", i.e. nobody should be relaying transactions
>>> that have already been confirmed. Similarly, we shouldn't be relaying
>>> packages that contain already-confirmed transactions.
>>>
>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>> [2]: https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>> [3]: https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>> [13]: https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>> [14]: https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>> [15]: https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>> [16]: https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>> [17]: https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>> [18]: https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>> [19]: https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>> [20]: https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in `MempoolAccept::Finalize` with a higher feerate than B+C, while those evicted transactions are the most compelling for block construction.
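To make the eviction argument concrete, here is a small sketch (illustrative Python with made-up fee/size numbers; a simplification of Core's actual eviction logic) of "evict by descendant feerate" on the A/B+C example:

```python
# Toy mempool: A is a decent-feerate parent, B is a zero-fee parent of C,
# and C is a low-fee child. Fees in sats, sizes in vbytes (all invented).
txs = {
    "A": {"fee": 3000, "vsize": 100, "children": ["B"]},
    "B": {"fee": 0,    "vsize": 100, "children": ["C"]},
    "C": {"fee": 200,  "vsize": 100, "children": []},
}

def descendant_score(name):
    """Total fees / total vsize of a transaction plus all its descendants."""
    fee, vsize = 0, 0
    stack, seen = [name], set()
    while stack:
        tx = stack.pop()
        if tx in seen:
            continue
        seen.add(tx)
        fee += txs[tx]["fee"]
        vsize += txs[tx]["vsize"]
        stack.extend(txs[tx]["children"])
    return fee / vsize

scores = {name: descendant_score(name) for name in txs}
# At capacity, the lowest descendant-score entry (here B, dragging C with
# it) goes first, leaving A -- the same mempool as rejecting B+C upfront.
evict_first = min(scores, key=scores.get)
```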

At first I thought missing this acceptance requirement would break a fee-bumping scheme like Parent-Pays-For-Child, where a high-fee parent is attached to a child signed with SIGHASH_ANYONECANPAY, but in this case the child fee is capturing the parent value. I can't think of other fee-bumping schemes potentially affected. If they do exist, I would say they're wrong in their design assumptions.

> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.

IIUC, with package A+B, during the dedup phase early in `AcceptMultipleTransactions`, if you observe a same-txid-different-wtxid A' and A' is higher feerate than A, you trim A and replace it with A'?

I think this approach is safe. The case that appears unsafe to me is when A' has a _lower_ feerate, even if A' is already accepted by our mempool. In that case, IIRC, that would be a pinning.

Good to see progress on witness replacement before we see usage of Taproot trees in multi-party contexts, where a malicious counterparty inflates its witness to jam an honest spending.

(Note: the commit linked currently points nowhere :))


> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the individual check on A, the honest package should replace the pinning attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning multi-commitment-broadcast-as-one-package, as a malicious A' might have an absolute fee higher than E. It sounds uneconomical for an attacker, but I think it's not when you consider that you can "batch" the attack against multiple honest counterparties. E.g., Mallory broadcasts A' + B' + C' + D', where A' conflicts with Alice's honest package P1, B' conflicts with Bob's honest package P2, and C' conflicts with Caroll's honest package P3. And D' is a high-fee child of A' + B' + C'.

If D' is higher-fee than P1 or P2 or P3 individually, but inferior to the sum of the HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?
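To put rough numbers on this batched-pinning worry (all values invented for illustration; only the inequality matters):

```python
# HTLC value at risk on each honest package if its confirmation is delayed
# past the relevant timelocks (sats; made-up figures).
honest_htlc_value = {"P1": 50_000, "P2": 40_000, "P3": 60_000}

# Cost to Mallory: the fees burned on A'+B'+C'+D' if they confirm.
# D' only needs to outbid each honest package *individually*.
mallory_fees = 30_000

# Value extractable across all jammed channels at once.
at_risk = sum(honest_htlc_value.values())

# The batched attack is lucrative whenever the fees burned are lower than
# the aggregate value extracted -- even if outbidding any single honest
# package in isolation would be uneconomical.
attack_profitable = mallory_fees < at_risk
```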

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have time-sensitive outputs encumbering your commitment transactions. For the reasons mentioned above, I think it's unsafe otherwise.

What I'm worried about is L2 developers, potentially not aware of all the mempool subtleties, blurring the difference and always batching their broadcasts by default.

IMO it's a good thing: by restricting to 1 parent + 1 child, we artificially constrain the L2 design space for now and minimize the risks of unsafe usage of the package API :)

I think that's a point where it would be relevant to have the opinion of more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?

Yes, there was a misunderstanding; I think this approach is correct. It's more a question of performance: do we assume that broadcast packages are "honest" by default and that the parent(s) always need the child to pass the fee checks, thereby saving the processing of individual transactions which are expected to fail in 99% of cases or more, or do we allow more ad hoc composition of packages at relay?

I think this point is quite dependent on the p2p package format/logic we'll end up with, and we should feel free to revisit it later?
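For concreteness, here is a control-flow sketch (illustrative Python, not the actual `MempoolAccept` code; the result strings and helper names are invented) of the try-individual-first approach discussed above:

```python
# try_individual(tx) -> "accepted" | "low-fee" | "invalid"  (stand-in for
# single-transaction validation); try_package(txs) -> bool (stand-in for
# package validation with package feerate / package RBF).
def submit_package(package, try_individual, try_package):
    low_fee = []
    for tx in package:
        result = try_individual(tx)
        if result == "invalid":
            return False          # e.g. bad signature: stop, skip the rest
        if result == "low-fee":
            low_fee.append(tx)    # candidate for package feerate evaluation
    if not low_fee:
        return True               # everything made it in individually
    # Only the fee-failed subset falls through to package logic.
    return try_package(low_fee)

# Example: parent A fails on fees alone, child B pays for both.
results = {"A": "low-fee", "B": "accepted"}
ok = submit_package(["A", "B"], lambda tx: results[tx], lambda txs: True)
```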


> What problem are you trying to solve by the package feerate *after* dedup
> rule?
> My understanding is that an in-package transaction might be already in
> the mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded, lowering the cost of package
> RBF.

> I'm proposing that, when a transaction has already been submitted to
> mempool, we would ignore both its fees and vsize when calculating package
> feerate.

Yes, if you receive A+B, and A is already in-mempool, I agree you can discard its feerate as B should pay for all fees checked on its own. Where I'm unclear is when you have in-mempool A+B and receive A+B'. Should B' have a fee high enough to cover the bandwidth penalty replacement (`PaysForRBF`, 2nd check) of both A+B', or only B'?
If you have a second layer like current Lightning, you might have a counterparty commitment to replace and should always expect to have to pay for parent replacement bandwidth.
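To make the two readings of the bandwidth-penalty question concrete, here is the arithmetic under both interpretations (illustrative numbers; the `PaysForRBF` details are assumed, with a 1 sat/vB incremental relay feerate):

```python
# In-mempool A+B is replaced by A+B'. A BIP125#4-style rule demands that
# additional fees cover incrementalRelayFee times some replacement size --
# the open question is *which* size.
incremental_relay_feerate = 1  # sat/vB (default-ish, assumed)

vsize = {"A": 100, "B_prime": 100}
fees_replaced = 500            # total fees of the evicted B (made up)

# Interpretation 1: B' pays for the bandwidth of the whole package A+B'.
required_full = fees_replaced + incremental_relay_feerate * (
    vsize["A"] + vsize["B_prime"])

# Interpretation 2: A deduplicates, so B' pays only for its own bandwidth.
required_dedup = fees_replaced + incremental_relay_feerate * vsize["B_prime"]
```

The gap between the two (here 100 sats) is exactly the "discount" discussed below.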

Where a potential discount sounds interesting is when you have an unequivocal state on the first stage of transactions, e.g. a DLC's funding transaction, which might be CPFPed by any participant IIRC.

> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.

Ah, once again I think it's a misunderstanding without the code under my eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e. mark D for potential eviction and don't consider it for future conflicts in the rest of the package, I think D' `PreChecks` should be good?

> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well?

If we mean the case where you have replaceable A+B, and then A'+B' try to replace it with a higher feerate? I think that's exactly the case we need for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/

> I'm not sure what you mean? Let's say we have a package of parent A + child
> B, where A is supposed to replace a mempool transaction A'. Are you saying
> that counterparties are able to malleate the package child B, or a child of
> A'?

The second option, a child of A'. In the LN case, I think the CPFP is attached on one's anchor output.
I think it's good if we assume the solve-conflicts-after-parent's-`PreChecks` approach mentioned above, or fixing inherited signaling, or full-RBF?

> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?

After thinking about it, the relaxation of the "new" unconfirmed input rule is not linked to trimming, but I would say more to the multi-parent support.

Let's say you have A+B trying to replace C+D, where B is also spending already-in-mempool E. To succeed, you need to waive the no-new-unconfirmed-input rule, as D isn't spending E.

So good, I think we agree on the = problem description here.

> I am in agreement with your calculations but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?

Well, what sounds odd to me is that, in my example, we fail D even if it has a higher fee than B: A+B absolute fees are 2000 sats, while A+C+D absolute fees are 3500 sats?

Is this compatible with a model where a miner prioritizes absolute fees over ancestor score, in the case that mempools aren't full enough to fill a block?
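For reference, the arithmetic behind this trade-off, using the fee/size numbers from the A/B/C/D example earlier in the thread (a sketch; "ancestor score" here is total ancestor-set fees over total ancestor-set vsize):

```python
# (fee in sats, vsize in vbytes) from the earlier example: A and B pay
# 10 sat/vB, C pays 1 sat/vB over 1000 vB, D pays 15 sat/vB.
txs = {
    "A": (1000, 100),
    "B": (1000, 100),
    "C": (1000, 1000),
    "D": (1500, 100),
}

def totals(names):
    fee = sum(txs[n][0] for n in names)
    vsize = sum(txs[n][1] for n in names)
    return fee, vsize, fee / vsize

# B's ancestor set is {A, B}; D's ancestor set is {A, C, D}.
b_fee, b_vsize, b_score = totals(["A", "B"])       # 2000 sats at 10 sat/vB
d_fee, d_vsize, d_score = totals(["A", "C", "D"])  # 3500 sats at ~2.9 sat/vB

# D brings more absolute fees than B but a much worse ancestor feerate:
# a rule keyed on ancestor score rejects it, while a miner with spare
# block space might still prefer the extra absolute fees.
```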

Let me know if I can clarify a point.

Antoine
On Mon, Sep 20, 2021 at 11:10, Gloria Zhao <gloriajzhao@gmail.com> wrote:

Hi Antoine,

First of all, thank you for the thorough review. I appreciate your insight on LN requirements.

> IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.

> I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer?

Correct, if B+C is too low feerate to be accepted, we will reject it. I prefer this because it is incentive compatible: A can be mined by itself, so there's no reason to prefer A+B+C instead of A.
As another way of looking at this, consider the case where we do accept A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches capacity, we evict the lowest descendant feerate transactions, which are B+C in this case. This gives us the same resulting mempool, with A and not B+C.


> Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses, though with varying vsizes and as such offering different feerates.

I agree that variations of the same package with different witnesses is a case that must be handled. I consider witness replacement to be a project that can be done in parallel to package mempool acceptance, because being able to accept packages does not worsen the problem of a same-txid-different-witness "pinning" attack.
If or when we have witness replacement, the logic is: if the individual transaction is enough to replace the mempool one, the replacement will happen during the preceding individual transaction acceptance, and deduplication logic will work. Otherwise, we will try to deduplicate by wtxid, see that we need a package witness replacement, and use the package feerate to evaluate whether this is economically rational.

See the #22290 "handle package transactions already in mempool" commit (https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff), which handles the case of same-txid-different-witness by simply using the transaction in the mempool for now, with TODOs for what I just described.
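A rough sketch of what that deduplication step could look like (assumed logic for illustration, not the actual PR #22290 code):

```python
# package / mempool are maps of txid -> wtxid. A package entry can be
# genuinely new, an exact duplicate of a mempool entry, or the same txid
# with a different witness (the witness-replacement case).
def deduplicate(package, mempool):
    to_validate = {}
    witness_conflicts = []
    for txid, wtxid in package.items():
        if txid not in mempool:
            to_validate[txid] = wtxid       # new transaction: validate it
        elif mempool[txid] == wtxid:
            continue                        # exact duplicate: drop from package
        else:
            witness_conflicts.append(txid)  # same txid, different witness:
                                            # candidate for witness replacement
    return to_validate, witness_conflicts

new, conflicts = deduplicate(
    {"a1": "wa1", "b1": "wb2"},  # package: a1 is new, b1 differs by witness
    {"b1": "wb1"},               # mempool already has b1 with witness wb1
)
```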


> I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean the set of transactions A, B, C, where B is spending A and C is spending A and B, would be correct?
=
Yes, that is what I meant. Yes, that would be a valid package under these rules.

> If yes, is there a width-limit introduced, or do we fall back on MAX_PACKAGE_COUNT=25?

No, there is no limit on connectivity other than "child with all unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and the child's in-mempool + in-package ancestor limits.
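As a quick illustration of those package-level limits (the constants are from the proposal; the rejection strings and helper are invented for this sketch):

```python
MAX_PACKAGE_COUNT = 25         # max transactions per package
MAX_PACKAGE_SIZE_KVB = 101     # max total package vsize, in KvB

def check_package_limits(package_vsizes):
    """package_vsizes: per-transaction virtual sizes in vbytes."""
    if len(package_vsizes) > MAX_PACKAGE_COUNT:
        return False, "package-too-many-transactions"
    if sum(package_vsizes) > MAX_PACKAGE_SIZE_KVB * 1000:
        return False, "package-too-large"
    return True, ""

ok, _ = check_package_limits([100] * 25)          # 25 txns, 2.5 KvB: fine
too_many, err = check_package_limits([100] * 26)  # 26 txns: rejected
```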


> Considering the current Core mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closures. A malicious tx-relay jamming successful against one channel commitment transaction would contaminate the remaining commitments sharing the same package.

> E.g., you broadcast the package A+B+C+D+E where A, B, C, D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could carry a loss of funds.

Please not= e that A may replace A' even if A' has higher fees than A individua= lly, because the proposed package RBF utilizes the fees and size of the ent= ire package. This just requires E to pay enough fees, although this can be = pretty high if there are also potential B' and C' competing commitm= ent transactions that we don't know about.


> IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.

So far, my understanding is that multi-parent-1-child is desired for batched fee-bumping (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and I've also seen your response which I have less context on (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That being said, I am happy to create a new proposal for 1 parent + 1 child (which would be slightly simpler) and plan for moving to multi-parent-1-child later if that is preferred. I am very interested in hearing feedback on that approach.

> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF apply ?

I think there is a misunderstanding here - let me describe what I'm proposing we'd do in this situation: we'll try individual submission for A and see that it fails due to "insufficient fees." Then, we'll try package validation for A+B and use package RBF. If A+B pays enough, it can still replace A'. If A fails for a bad signature, we won't look at B or A+B. Does this meet your expectations?
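The ordering described here can be sketched as follows. This is an illustrative sketch, not Bitcoin Core's actual interface: the `submit_individual`/`submit_package` callables and their result strings are hypothetical stand-ins for individual and package validation.

```python
# Hypothetical sketch of the submission order described above. The callables
# and result strings are illustrative stand-ins, not Bitcoin Core's API.

def try_accept_package(package, submit_individual, submit_package):
    """submit_individual(tx) returns "accepted", "low-fee", or "invalid".
    submit_package(txs) runs package validation, including package RBF."""
    low_fee = []
    for tx in package:
        result = submit_individual(tx)
        if result == "invalid":
            return False          # e.g. bad signature: don't look at the rest
        if result == "low-fee":
            low_fee.append(tx)    # "insufficient fees": retry as a package
    if not low_fee:
        return True               # everything made it in individually
    # Package feerate and package RBF can still get the low-fee txns accepted.
    return submit_package(package)
```

So in the A+B example, A's individual "insufficient fees" failure only defers it to the package step, while a consensus-invalid A aborts the whole attempt.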


> What problem are you trying to solve by the package feerate *after* dedup rule ?
> My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded lowering the cost of package RBF.

I'm proposing that, when a transaction has already been submitted to the mempool, we would ignore both its fees and vsize when calculating package feerate. In example G2, we shouldn't count M1's fees after its submission to the mempool, since M1's fees have already been used to pay for its individual bandwidth, and they shouldn't be used again to pay for P2 and P3's bandwidth. We also shouldn't count its vsize, since it has already been paid for.
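A small sketch of this accounting (the dict-based transaction model and the fee/vsize numbers are hypothetical, chosen only to show the arithmetic):

```python
# Package feerate after deduplication: fees *and* vsize of transactions
# already in the mempool are both excluded. Numbers are hypothetical.

def package_feerate_after_dedup(package, in_mempool_txids):
    new_txs = [tx for tx in package if tx["txid"] not in in_mempool_txids]
    total_fee = sum(tx["fee"] for tx in new_txs)
    total_vsize = sum(tx["vsize"] for tx in new_txs)
    return total_fee / total_vsize  # sat/vB

package = [
    {"txid": "M1", "fee": 300, "vsize": 100},  # already in the mempool
    {"txid": "P2", "fee": 50,  "vsize": 100},
    {"txid": "P3", "fee": 150, "vsize": 100},
]
# Counting M1 would yield 500 sat / 300 vB ~= 1.67 sat/vB; after dedup only
# P2 and P3 count: 200 sat / 200 vB = 1.0 sat/vB.
feerate = package_feerate_after_dedup(package, {"M1"})
```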


> I think this is a footgunish API, as if a package issuer send the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C. Then try to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ?

Note that, if C' conflicts with C, it also conflicts with D, since D is a descendant of C and would thus need to be evicted along with it. Implicitly, D' would not be in conflict with D.
More generally, this example is surprising to me because I didn't think packages would be used to fee-bump replaceable transactions. Do we want the child to be able to replace mempool transactions as well? This can be implemented with a bit of additional logic.

> I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I'm not sure what you mean? Let's say we have a package of parent A + child B, where A is supposed to replace a mempool transaction A'. Are you saying that counterparties are able to malleate the package child B, or a child of A'? If they can malleate a child of A', that shouldn't matter as long as A' is signaling replacement. This would be handled identically with full RBF and what Core currently implements.

> I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package; the bip125 rule2 is only evaluated against parents ?

Sorry, I don't understand what you mean by "preserve the package integrity?" Could you elaborate?

> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.

> Package A + B ancestor score is 10 sat/vb.

> D has a higher feerate/absolute fee than B.

> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))

I am in agreement with your calculations but unsure if we disagree on the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's, it fails the proposed package RBF Rule #2, so this package would be rejected. Does this meet your expectations?
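Spelling out the arithmetic (values taken from the example above, in sats and vbytes):

```python
# Ancestor score = (tx fees + unconfirmed ancestor fees) / (combined vsize).
def ancestor_score(fees_sats, vsizes_vb):
    return sum(fees_sats) / sum(vsizes_vb)

# B (1000 sats, 100 vB) with its ancestor A (1000 sats, 100 vB):
score_b = ancestor_score([1000, 1000], [100, 100])
# D (1500 sats, 100 vB) with ancestors A (1000, 100) and C (1000, 1000):
score_d = ancestor_score([1500, 1000, 1000], [100, 100, 1000])
# score_b is 10 sat/vB; score_d is 3500/1200 ~= 2.9 sat/vB, so D fails Rule #2.
```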

Thank you for linking to projects that might be interested in package relay :)

Thanks,
Gloria

On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail.com> wrote:

Hi Gloria,

> A package may contain transactions that are already in the mempool. We
> remove ("deduplicate") those transactions from the package for the
> purposes of package mempool acceptance. If a package is empty after
> deduplication, we do nothing.

IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer ?

Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses. Though with varying vsizes and as such offering different feerates.

E.g you're going to evaluate the package A+B and A' is already in your mempool with a bigger valid witness. You trim A based on txid, then you evaluate A'+B, which fails the fee checks. However, evaluating A+B would have been a success.
AFAICT, the dedup rationale would be to save on CPU time/IO disk, to avoid repeated signature verifications and parent UTXO fetches ? Can we achieve the same goal by bypassing tx-level checks for already-in txn while conserving the package integrity for package-level checks ?

> Note that it's possible for the parents to be indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent, so we cannot make any other topology assumptions.

I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean the set of transactions A, B, C, where B is spending A and C is spending A and B would be correct ?

If yes, is there a width-limit introduced or we fallback on MAX_PACKAGE_COUNT=25 ?

IIRC, one rationale for coming up with this topology limitation was to lower the DoS risks when potentially deploying p2p packages.

Considering the current Core's mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay jamming successful on one channel commitment transaction would contaminate the remaining commitments sharing the same package.

E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.

That said, if you're broadcasting commitment transactions without time-sensitive HTLC outputs, I think the batching is effectively a fee saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.

> *Rationale*: It would be incorrect to use the fees of transactions that are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.

I'm unsure about the logical order of the checks proposed.

If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF apply ?

Note this situation is plausible: with current LN design, your counterparty can have a commitment transaction with a better fee just by selecting a higher `dust_limit_satoshis` than yours.

> Examples = F and G [14] show the same package, but P1 is submitted
> individuall= y before
> the package in example G. In example F, we can see that th= e 300vB package
> pays
> an additional 200sat in fees, which is= not enough to pay for its own
> bandwidth
> (BIP125#4). In exa= mple G, we can see that P1 pays enough to replace M1, but
> using P1&= #39;s fees again during package submission would make it look like a
>= ; 300sat
> increase for a 200vB package. Even including its fees and = size would not be
> sufficient in this example, since the 300sat look= s like enough for the 300vB
> package. The calculcation after dedupli= cation is 100sat increase for a
> package
> of size 200vB, whic= h correctly fails BIP125#4. Assume all transactions have
> a
> = size of 100vB.

What problem are you trying to solve by the package f= eerate *after* dedup rule ?

My understanding is that an in-package t= ransaction might be already in the mempool. Therefore, to compute a correct= RBF penalty replacement, the vsize of this transaction could be discarded = lowering the cost of package RBF.

If we keep a "safe" dedu= p mechanism (see my point above), I think this discount is justified, as th= e validation cost of node operators is paid for ?

> The child cannot replace mempool transactions.

Let's say you issue package A+B, then package C+B', where B' is a child of both A and C. This rule fails the acceptance of C+B' ?

I think this is a footgunish API: if a package issuer sends the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C, then tries to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ? And it might be unsafe in protocols where states are symmetric. E.g a malicious counterparty broadcasts first S+A, then you honestly broadcast S+B, where B pays better fees.

> All mempool transactions to be replaced must signal replaceability.

I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I think it's better to either fix inherited signaling or move towards full-rbf.
> if a package parent has already been submitted, it would look
> like the child is spending a "new" unconfirmed input.

I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package; the bip125 rule2 is only evaluated against parents ?
=
> However, we still achieve the same goal of requiring the
> replacement transactions to have an ancestor score at least as high as
> the original ones.

I'm not sure if this holds...

Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.

Package A + B ancestor score is 10 sat/vb.

D has a higher feerate/absolute fee than B.

Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
=
Overall, this is a review through the lenses of LN requirements. I think other L2 protocols/applications could be candidates for using package accept/relay, such as:
* https://github.com/lightninglabs/pool
* https://github.com/discreetlogcontracts/dlcspecs
* https://github.com/bitcoin-teleport/teleport-transactions/
* https://github.com/sapio-lang/sapio
* https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
* https://github.com/revault/practical-revault

Thanks for moving the ball forward on this subject.

Antoine

On Thu, Sep 16, 2021 at 03:55, Gloria Zhao via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Hi there,

I'm writing to propose a set of mempool policy changes to enable package
validation (in preparation for package relay) in Bitcoin Core. These would not
be consensus or P2P protocol changes. However, since mempool policy
significantly affects transaction propagation, I believe this is relevant for
the mailing list.

My proposal enables packages consisting of multiple parents and 1 child. If you
develop software that relies on specific transaction relay assumptions and/or
are interested in using package relay in the future, I'm very interested to hear
your feedback on the utility or restrictiveness of these package policies for
your use cases.

A draft implementation of this proposal can be found in [Bitcoin Core
PR#22290][1].

An illustrated version of this post can be found at
I have also linked the images below.

## Background

Feel free to skip this section if you are already familiar with mempool policy
and package relay terminology.

### Terminology Clarifications

* Package = an ordered list of related transactions, representable by a Directed
  Acyclic Graph.
* Package Feerate = the total modified fees divided by the total virtual size of
  all transactions in the package.
    - Modified fees = a transaction's base fees + fee delta applied by the user
      with `prioritisetransaction`. As such, we expect this to vary across
      mempools.
    - Virtual Size = the maximum of virtual sizes calculated using [BIP141
      virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
    - Note that feerate is not necessarily based on the base fees and serialized
      size.

* Fee-Bumping = user/wallet actions that take advantage of miner incentives to
  boost a transaction's candidacy for inclusion in a block, including Child Pays
  for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
  mempool policy is to recognize when the new transaction is more economical to
  mine than the original one(s) but not open DoS vectors, so there are some
  limitations.
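As a rough sketch of these feerate definitions (the field names and the sigop-size input are illustrative; Bitcoin Core derives the sigop-adjusted size from the transaction's sigop count):

```python
# Sketch of the feerate terminology above; field names are illustrative.

def modified_fee(base_fee_sats, prioritise_delta_sats=0):
    # base fees plus any delta from `prioritisetransaction`
    return base_fee_sats + prioritise_delta_sats

def virtual_size(bip141_vsize, sigop_adjusted_vsize):
    # the maximum of BIP141 virtual size and the sigop-based size
    return max(bip141_vsize, sigop_adjusted_vsize)

def package_feerate(txs):
    total_fee = sum(modified_fee(t["fee"], t.get("delta", 0)) for t in txs)
    total_vsize = sum(virtual_size(t["vsize"], t.get("sigop_vsize", 0))
                      for t in txs)
    return total_fee / total_vsize  # sat/vB
```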

### Policy

The purpose of the mempool is to store the best (to be most incentive-compatible
with miners, highest feerate) candidates for inclusion in a block. Miners use
the mempool to build block templates. The mempool is also useful as a cache for
boosting block relay and validation performance, aiding transaction relay, and
generating feerate estimations.

Ideally, all consensus-valid transactions paying reasonable fees should make it
to miners through normal transaction relay, without any special connectivity or
relationships with miners. On the other hand, nodes do not have unlimited
resources, and a P2P network designed to let any honest node broadcast their
transactions also exposes the transaction validation engine to DoS attacks from
malicious peers.

As such, for unconfirmed transactions we are considering for our mempool, we
apply a set of validation rules in addition to consensus, primarily to protect
us from resource exhaustion and aid our efforts to keep the highest fee
transactions. We call this mempool _policy_: a set of (configurable,
node-specific) rules that transactions must abide by in order to be accepted
into our mempool. Transaction "Standardness" rules and mempool restrictions such
as "too-long-mempool-chain" are both examples of policy.

### Package Relay and Package Mempool Accept

In transaction relay, we currently consider transactions one at a time for
submission to the mempool. This creates a limitation in the node's ability to
determine which transactions have the highest feerates, since we cannot take
into account descendants (i.e. cannot use CPFP) until all the transactions are
in the mempool. Similarly, we cannot use a transaction's descendants when
considering it for RBF. When an individual transaction does not meet the mempool
minimum feerate and the user isn't able to create a replacement transaction
directly, it will not be accepted by mempools.

This limitation presents a security issue for applications and users relying on
time-sensitive transactions. For example, Lightning and other protocols create
UTXOs with multiple spending paths, where one counterparty's spending path opens
up after a timelock, and users are protected from cheating scenarios as long as
they redeem on-chain in time. A key security assumption is that all parties'
transactions will propagate and confirm in a timely manner. This assumption can
be broken if fee-bumping does not work as intended.

The end goal for Package Relay is to consider multiple transactions at the same
time, e.g. a transaction with its high-fee child. This may help us better
determine whether transactions should be accepted to our mempool, especially if
they don't meet fee requirements individually or are better RBF candidates as a
package. A combination of changes to mempool validation logic, policy, and
transaction relay allows us to better propagate the transactions with the
highest package feerates to miners, and makes fee-bumping tools more powerful
for users.

The "relay" part of Package Relay suggests P2P messaging changes, but a large
part of the changes are in the mempool's package validation logic. We call this
*Package Mempool Accept*.

### Previous Work

* Given that mempool validation is DoS-sensitive and complex, it would be
  dangerous to haphazardly tack on package validation logic. Many efforts have
  been made to make mempool validation less opaque (see [#16400][4],
  [#21062][5], [#22675][6], [#22796][7]).
* [#20833][8] Added basic capabilities for package validation, test accepts only
  (no submission to mempool).
* [#21800][9] Implemented package ancestor/descendant limit checks for arbitrary
  packages. Still test accepts only.
* Previous package relay proposals (see [#16401][10], [#19621][11]).
### Existing Package Rules

These are in master as introduced in [#20833][8] and [#21800][9]. I'll consider
them as "given" in the rest of this document, though they can be changed, since
package validation is test-accept only right now.

1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
   `MAX_PACKAGE_SIZE=101KvB` total size [8]

   *Rationale*: This is already enforced as mempool ancestor/descendant limits.
   Presumably, transactions in a package are all related, so exceeding this
   limit would mean that the package can either be split up or it wouldn't pass
   this mempool policy.

2. Packages must be topologically sorted: if any dependencies exist between
   transactions, parents must appear somewhere before children. [8]

3. A package cannot have conflicting transactions, i.e. none of them can spend
   the same inputs. This also means there cannot be duplicate transactions. [8]

4. When packages are evaluated against ancestor/descendant limits in a test
   accept, the union of all of their descendants and ancestors is considered.
   This is essentially a "worst case" heuristic where every transaction in the
   package is treated as each other's ancestor and descendant. [8]
   Packages for which ancestor/descendant limits are accurately captured by this
   heuristic: [19]

There are also limitations such as the fact that CPFP carve out is not applied
to package transactions. #20833 also disables RBF in package validation; this
proposal overrides that to allow packages to use RBF.
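Under an illustrative transaction model (sets of txids and outpoints rather than real serialized transactions; "parents" here means in-package parents only), rules 1-3 can be condensed into one check:

```python
# Condensed sketch of existing package rules 1-3: count/size limits,
# topological order, and no conflicting spends. The dict model is illustrative.
MAX_PACKAGE_COUNT = 25
MAX_PACKAGE_SIZE_KVB = 101

def check_package(txs):
    if len(txs) > MAX_PACKAGE_COUNT:
        return False
    if sum(t["vsize"] for t in txs) > MAX_PACKAGE_SIZE_KVB * 1000:
        return False
    seen, spent = set(), set()
    for t in txs:
        if not t["parents"] <= seen:   # rule 2: parents before children
            return False
        if t["inputs"] & spent:        # rule 3: no two txs spend the same input
            return False
        seen.add(t["txid"])
        spent |= t["inputs"]
    return True
```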
## Proposed Changes

The next step in the Package Mempool Accept project is to implement submission
to mempool, initially through RPC only. This allows us to test the submission
logic before exposing it on P2P.

### Summary

- Packages may contain already-in-mempool transactions.
- Packages are 2 generations, Multi-Parent-1-Child.
- Fee-related checks use the package feerate. This means that wallets can
  create a package that utilizes CPFP.
- Parents are allowed to RBF mempool transactions with a set of rules similar
  to BIP125. This enables a combination of CPFP and RBF, where a
  transaction's descendant fees pay for replacing mempool conflicts.

There is a draft implementation in [#22290][1]. It is WIP, but feedback is
always welcome.

### Details

#### Packages May Contain Already-in-Mempool Transactions

A package may contain transactions that are already in the mempool. We remove
("deduplicate") those transactions from the package for the purposes of package
mempool acceptance. If a package is empty after deduplication, we do nothing.

*Rationale*: Mempools vary across the network. It's possible for a parent to be
accepted to the mempool of a peer on its own due to differences in policy and
fee market fluctuations. We should not reject or penalize the entire package for
an individual transaction as that could be a censorship vector.

#### Packages = Are Multi-Parent-1-Child

Only packages of a specific topology are pe= rmitted. Namely, a package is exactly
1 child with all of its unconfirme= d parents. After deduplication, the package
may be exactly the same, emp= ty, 1 child, 1 child with just some of its
unconfirmed parents, etc. Not= e that it's possible for the parents to be indirect
descendants/ance= stors of one another, or for parent and child to share a parent,
so we c= annot make any other topology assumptions.

*Rationale*: This allows = for fee-bumping by CPFP. Allowing multiple parents
makes it possible to = fee-bump a batch of transactions. Restricting packages to a
defined topo= logy is also easier to reason about and simplifies the validation
logic = greatly. Multi-parent-1-child allows us to think of the package as one big<= br>transaction, where:

- Inputs =3D all the inputs of parents + inpu= ts of the child that come from
=C2=A0 confirmed UTXOs
- Outputs =3D a= ll the outputs of the child + all outputs of the parents that
=C2=A0 are= n't spent by other transactions in the package

Examples of packa= ges that follow this rule (variations of example A show some
possibiliti= es after deduplication): ![image][15]
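The shape check can be sketched as follows (illustrative data model: the last transaction in the sorted package is taken as the child, and "spends" lists the in-package txids it spends directly):

```python
# Sketch of the multi-parent-1-child topology check: a package is exactly one
# child preceded by its parents, so every earlier transaction must be spent
# directly by the last one. Data model is illustrative.

def is_child_with_parents(txs):
    """Each tx: {"txid": str, "spends": set of txids it spends}."""
    if len(txs) < 2:
        return False
    child, parents = txs[-1], txs[:-1]
    # every non-child transaction must be a direct parent of the child
    return {p["txid"] for p in parents} <= child["spends"]
```

Note this permits parents that also spend each other (indirect ancestry), as the text above allows, since only the "spent by the child" relation is checked.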

#### Fee-Related Checks Use Package Feerate

Package Feerate = the total modified fees divided by the total virtual size of
all transactions in the package.

To meet the two feerate requirements of a mempool, i.e., the pre-configured
minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum feerate, the
total package feerate is used instead of the individual feerate. The individual
transactions are allowed to be below feerate requirements if the package meets
the feerate requirements. For example, the parent(s) in the package can have 0
fees but be paid for by the child.

*Rationale*: This can be thought of as "CPFP within a package," solving the
issue of a parent not meeting minimum fees on its own. This allows L2
applications to adjust their fees at broadcast time instead of overshooting or
risking getting stuck/pinned.
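As a sketch of this check (hypothetical numbers; feerates in sat/vB):

```python
# "CPFP within a package": both mempool feerate floors are checked against
# the package feerate, so a 0-fee parent can ride on its child's fees.

def package_meets_feerates(txs, min_relay_feerate, mempool_min_feerate):
    feerate = sum(t["fee"] for t in txs) / sum(t["vsize"] for t in txs)
    return feerate >= max(min_relay_feerate, mempool_min_feerate)

parent = {"fee": 0, "vsize": 100}    # would fail any floor on its own
child = {"fee": 500, "vsize": 100}   # pays for both: 500 sat / 200 vB
ok = package_meets_feerates([parent, child], 1.0, 2.0)
```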

We use the package feerate of the package *after deduplication*.

*Rationale*: It would be incorrect to use the fees of transactions that are
already in the mempool, as we do not want a transaction's fees to be
double-counted for both its individual RBF and package RBF.

Examples F and G [14] show the same package, but P1 is submitted individually
before the package in example G. In example F, we can see that the 300vB package
pays an additional 200sat in fees, which is not enough to pay for its own
bandwidth (BIP125#4). In example G, we can see that P1 pays enough to replace
M1, but using P1's fees again during package submission would make it look like
a 300sat increase for a 200vB package. Even including its fees and size would
not be sufficient in this example, since the 300sat looks like enough for the
300vB package. The calculation after deduplication is 100sat increase for a
package of size 200vB, which correctly fails BIP125#4. Assume all transactions
have a size of 100vB.

#### Package RBF

If a package meets feerate requirements as a package, the parents in the
transaction are allowed to replace-by-fee mempool transactions. The child cannot
replace mempool transactions. Multiple transactions can replace the same
transaction, but in order to be valid, none of the transactions can try to
replace an ancestor of another transaction in the same package (which would thus
make its inputs unavailable).

*Rationale*: Even if we are using package feerate, a package will not propagate
as intended if RBF still requires each individual transaction to meet the
feerate requirements.

We use a set of rules slightly modified from BIP125 as follows:

##### Signaling (Rule #1)

All mempool transactions to be replaced must signal replaceability.

*Rationale*: Package RBF signaling logic should be the same for package RBF and
single transaction acceptance. This would be updated if single transaction
validation moves to full RBF.

##### New Unconfirmed Inputs (Rule #2)

A package may include new unconfirmed inputs, but the ancestor feerate of the
child must be at least as high as the ancestor feerates of every transaction
being replaced. This is contrary to BIP125#2, which states "The replacement
transaction may only include an unconfirmed input if that input was included in
one of the original transactions. (An unconfirmed input spends an output from a
currently-unconfirmed transaction.)"

*Rationale*: The purpose of BIP125#2 is to ensure that the replacement
transaction has a higher ancestor score than the original transaction(s) (see
[comment][13]). Example H [16] shows how adding a new unconfirmed input can
lower the ancestor score of the replacement transaction. P1 is trying to replace
M1, and spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and
M2 pays 100sat. Assume all transactions have a size of 100vB. While, in
isolation, P1 looks like a better mining candidate than M1, it must be mined
with M2, so its ancestor feerate is actually 4.5sat/vB. This is lower than M1's
ancestor feerate, which is 6sat/vB.
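Checking example H's numbers:

```python
# Example H's arithmetic: P1 spends an unconfirmed output of M2, so M2 is in
# P1's ancestor set. All sizes 100 vB; fees in sats as stated above.

def ancestor_feerate(fees_sats, vsizes_vb):
    return sum(fees_sats) / sum(vsizes_vb)

p1_with_m2 = ancestor_feerate([800, 100], [100, 100])  # P1 plus ancestor M2
m1_alone = ancestor_feerate([600], [100])  # M1 has no unconfirmed ancestors
# 4.5 sat/vB < 6 sat/vB, so P1 must not be allowed to replace M1.
```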

In package RBF, the rule analogous to BIP125#2 would be "none of the
transactions in the package can spend new unconfirmed inputs." Example J [17]
shows why, if any of the package transactions have ancestors, package feerate is
no longer accurate. Even though M2 and M3 are not ancestors of P1 (which is the
replacement transaction in an RBF), we're actually interested in the entire
package. A miner should mine M1, which is 5sat/vB, instead of M2, M3, P1, P2,
and P3, which together are only 4sat/vB. The Package RBF rule cannot be loosened
to only allow the child to have new unconfirmed inputs, either, because it can
still cause us to overestimate the package's ancestor score.

However, enforcing a rule analogous to BIP125#2 would not only make Package RBF
less useful, but would also break Package RBF for packages with parents already
in the mempool: if a package parent has already been submitted, it would look
like the child is spending a "new" unconfirmed input. In example K [18], we're
looking to replace M1 with the entire package including P1, P2, and P3. We must
consider the case where one of the parents is already in the mempool (in this
case, P2), which means we must allow P3 to have new unconfirmed inputs. However,
M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace M1
with this package.

Thus, the package RBF rule regarding new unconfirmed inputs is less strict than
BIP125#2. However, we still achieve the same goal of requiring the replacement
transactions to have an ancestor score at least as high as the original ones. As
a result, the entire package is required to be a higher feerate mining candidate
than each of the replaced transactions.

Another note: the [comment][13] above the BIP125#2 code in the original RBF
implementation suggests that the rule was intended to be temporary.

##### Absolute Fee (Rule #3)

The package must increase the absolute fee of the mempool, i.e. the total fees
of the package must be higher than the absolute fees of the mempool transactions
it replaces. Combined with the CPFP rule above, this differs from BIP125 Rule
#3 - an individual transaction in the package may have lower fees than the
transaction(s) it is replacing. In fact, it may have 0 fees, and the child pays
for RBF.

##### Feerate (Rule #4)

The package must pay for its own bandwidth; the package feerate must be higher
than the replaced transactions by at least the minimum relay feerate
(`incrementalRelayFee`). Combined with the CPFP rule above, this differs from
BIP125 Rule #4 - an individual transaction in the package can have a lower
feerate than the transaction(s) it is replacing. In fact, it may have 0 fees,
and the child pays for RBF.
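Rules #3 and #4 can be sketched together; the function and field names are illustrative, and `incremental_relay_feerate` stands in for `incrementalRelayFee` in sat/vB:

```python
# Package versions of BIP125 Rules #3 and #4: fees are summed over the whole
# package and compared against everything being replaced.

def package_rbf_fee_checks(package, replaced, incremental_relay_feerate=1.0):
    package_fee = sum(t["fee"] for t in package)
    package_vsize = sum(t["vsize"] for t in package)
    replaced_fee = sum(t["fee"] for t in replaced)
    # Rule #3: the mempool's absolute fees must increase.
    if package_fee <= replaced_fee:
        return False
    # Rule #4: the fee increase must pay for the package's own bandwidth.
    return package_fee - replaced_fee >= incremental_relay_feerate * package_vsize

# A 0-fee parent is fine as long as the child covers the difference:
package = [{"fee": 0, "vsize": 100}, {"fee": 500, "vsize": 100}]
replaced = [{"fee": 250, "vsize": 100}]
ok = package_rbf_fee_checks(package, replaced)
```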

##### Total Number of Replaced Transactions (Rule #5)

The package cannot replace more than 100 mempool transactions. This is identical
to BIP125 Rule #5.

### Expected FAQs

1. Is it possible for only some of the pac= kage to make it into the mempool?

=C2=A0 =C2=A0Yes, it is. However, = since we evict transactions from the mempool by
descendant score and the= package child is supposed to be sponsoring the fees of
its parents, the= most common scenario would be all-or-nothing. This is
incentive-compati= ble. In fact, to be conservative, package validation should
begin by try= ing to submit all of the transactions individually, and only use the
pac= kage mempool acceptance logic if the parents fail due to low feerate.
2. Should we allow packages to contain already-confirmed transactions?

   No, for practical reasons. In mempool validation, we actually aren't able to
tell with 100% confidence if we are looking at a transaction that has already
confirmed, because we look up inputs using a UTXO set. If we have historical
block data, it's possible to look for it, but this is inefficient, not always
possible for pruned nodes, and unnecessary because we're not going to do
anything with the transaction anyway. As such, we already have the expectation
that transaction relay is somewhat "stateful", i.e. nobody should be relaying
transactions that have already been confirmed. Similarly, we shouldn't be
relaying packages that contain already-confirmed transactions.
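The conservative validation order described in FAQ #1 (submit each transaction individually first, and fall back to package logic only for low-feerate failures) could be sketched like this; the callback names and the `"low-feerate"` reason string are hypothetical stand-ins for the real validation interface:

```python
def submit_package(txs, try_accept_single, try_accept_package):
    """Try each transaction on its own; collect those rejected purely for
    low feerate and retry them via package (CPFP-aware) acceptance.
    Any other failure rejects the remainder outright."""
    accepted, low_feerate = [], []
    for tx in txs:
        ok, reason = try_accept_single(tx)
        if ok:
            accepted.append(tx)
        elif reason == "low-feerate":
            low_feerate.append(tx)
        else:
            return accepted, False  # hard failure, e.g. invalid script
    package_ok = try_accept_package(low_feerate) if low_feerate else True
    return accepted, package_ok
```

This keeps the package-acceptance code path off the hot path: packages whose members all pass individually never invoke it.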
[1]: https://github.com/bitcoin/bitcoin/pull/22290
[2]: https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
[3]: https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
[4]: https://github.com/bitcoin/bitcoin/pull/16400
[5]: https://github.com/bitcoin/bitcoin/pull/21062
[6]: https://github.com/bitcoin/bitcoin/pull/22675
[7]: https://github.com/bitcoin/bitcoin/pull/22796
[8]: https://github.com/bitcoin/bitcoin/pull/20833
[9]: https://github.com/bitcoin/bitcoin/pull/21800
[10]: https://github.com/bitcoin/bitcoin/pull/16401
[11]: https://github.com/bitcoin/bitcoin/pull/19621
[12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
[13]: https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
[14]: https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
[15]: https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
[16]: https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
[17]: https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
[18]: https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
[19]: https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
[20]: https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev