From: Antoine Riard
Date: Tue, 28 Sep 2021 18:59:11 -0400
To: Bastien TEINTURIER
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

Hi Bastien,

> In the case of LN, an attacker can game this and heavily restrict your
> RBF attempts if you're only allowed to use confirmed inputs and have many
> channels (and a limited number of confirmed inputs). Otherwise you'll need
> node operators to pre-emptively split their utxos into many small utxos
> just for fee bumping, which is inefficient...
I share the concern about splitting utxos into smaller ones. IIRC, the
carve-out tolerance is only 2 txn / 10,000 vB. If one of your counterparties
attaches a junk branch to her own anchor output, are you still allowed to
chain your self-owned unconfirmed CPFP? I'm thinking about the "Chained
CPFPs" topology exposed here:
https://github.com/rust-bitcoin/rust-lightning/issues/989. Or do you have
another L2 broadcast topology which could be safe w.r.t. our current mempool
logic :) ?

On Mon, Sep 27, 2021 at 03:15, Bastien TEINTURIER wrote:

>> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit this point later? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already confirmed. Or are there
>> currently-deployed use-cases which would benefit from your proposed Rule #2?
>
> I think constraining package acceptance to only confirmed inputs
> is very limiting and quite dangerous for L2 protocols.
>
> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...
>
> Bastien
>
> On Mon, Sep 27, 2021 at 00:27, Antoine Riard via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Gloria,
>>
>> Thanks for your answers,
>>
>> > In summary, it seems that the decisions that might still need
>> > attention/input from devs on this mailing list are:
>> > 1. Whether we should start with multiple-parent-1-child or
>> > 1-parent-1-child.
>> > 2. Whether it's ok to require that the child not have conflicts with
>> > mempool transactions.
>>
>> Yes, on 1) it would be good to have input from more potential users of
>> package acceptance.
>> And 2) I think it's more a matter of clearer wording of the proposal.
>>
>> However, see my final point on the relaxation around "unconfirmed inputs",
>> which might in fact alter our current block construction strategy.
>>
>> > Right, the fact that we essentially always choose the first-seen
>> > witness is an unfortunate limitation that exists already. Adding package
>> > mempool accept doesn't worsen this, but the procedure in the future is
>> > to replace the witness when it makes sense economically. We can also add
>> > logic to allow package feerate to pay for witness replacements as well.
>> > This is pretty far into the future, though.
>>
>> Yes, I agree package mempool accept doesn't worsen this. And it's not an
>> issue for current LN, as you can't significantly inflate a spending
>> witness for the 2-of-2 funding output.
>> However, it might be an issue for multi-party protocols where the spending
>> script has alternative branches with asymmetric valid witness weights.
>> Taproot should ease that kind of script, so hopefully we would deploy
>> wtxid-replacement not too far in the future.
>>
>> > I could be misunderstanding, but an attacker wouldn't be able to
>> > batch-attack like this. Alice's package only conflicts with A' + D', not
>> > A' + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>
>> Yeah, I can be clearer: I think you have 2 pinning attack scenarios to
>> consider.
>>
>> In LN, if you're trying to confirm a commitment transaction to time-out or
>> claim on-chain an HTLC and the timelock is near expiration, you should be
>> ready to pay in commitment + 2nd-stage HTLC transaction fees as much as
>> the value offered by the HTLC.
>>
>> Following this security assumption, an attacker can exploit it by
>> targeting commitment transactions from different channels together,
>> blocking them under a high-fee child whose fee value is equal to the
>> top-value HTLC + 1.
>> Victims' fee-bumping logic won't overbid, as it's not worth offering fees
>> beyond the value of their contested HTLCs. Apart from observing mempool
>> state, victims can't learn they're targeted by the same attacker.
>>
>> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
>> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, and
>> C' conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
>> HTLC of the set. If D' pays a fee higher than P1 + 1, it won't be rational
>> for Alice or Bob or Caroll to keep offering competing feerates. Mallory
>> will be at a loss on stealing P1, as she has paid more in fees, but will
>> realize a gain on P2 + P3.
>>
>> In this model, Alice is allowed to evict those 2 transactions (A' + D'),
>> but as she is economically bounded she won't succeed.
>>
>> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
>> 1st pinning scenario is correct and "lucrative" when you sum the global
>> gain/loss.
>>
>> There is a 2nd attack scenario with A + B + C + D, where D is the child of
>> A, B, C. All those transactions are honestly issued by Alice. Once A + B +
>> C + D are propagated in network mempools, Mallory is able to replace A + D
>> with A' + D', where D' is paying a higher fee. This package A' + D' will
>> confirm soon if D's feerate was compelling, but Mallory succeeds in
>> delaying the confirmation of B + C for one or more blocks. As B + C are
>> pre-signed commitments with a low feerate, they won't confirm without
>> Alice issuing a new child E. Mallory can repeat the same trick by
>> broadcasting B' + E' and delay again the confirmation of C.
>>
>> If the remaining package's pending HTLC has a higher value than all the
>> malicious fees over-bid, Mallory should realize a gain. With this 2nd
>> pinning attack, the malicious entity buys confirmation delay of your
>> packaged-together commitments.
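To make the arithmetic of the first (batched) pinning scenario concrete,
here is a minimal sketch; all satoshi amounts are invented for illustration,
they do not come from the thread:

```python
# Batched-pinning economics from the scenario above, with assumed values.
# Mallory pins Alice's P1, Bob's P2 and Caroll's P3 with A' + B' + C' + D'.

top_htlc = 100_000               # HTLC secured by Alice's P1 (sats, assumed)
other_htlcs = [80_000, 70_000]   # HTLCs secured by Bob's P2 and Caroll's P3

# The single child D' must outbid every victim's rational fee-bumping
# budget: a victim won't pay more in fees than the HTLC it is securing.
d_prime_fee = top_htlc + 1

victims_give_up = all(d_prime_fee > h for h in [top_htlc] + other_htlcs)

# Taken alone, pinning P1 is a (tiny) loss for Mallory...
loss_on_p1 = top_htlc - d_prime_fee
# ...but one child D' pins all three channels at once, so summing the
# global gain/loss the attack is lucrative.
attacker_net = top_htlc + sum(other_htlcs) - d_prime_fee

print(victims_give_up, loss_on_p1, attacker_net)  # True -1 149999
```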
>>
>> Assuming those attacks are correct, I'm leaning towards being conservative
>> with the LDK broadcast backend. Though once again, other L2 devs likely
>> have other use-cases and opinions :)
>>
>> > B' only needs to pay for itself in this case.
>>
>> Yes, I think it's a nice discount when the UTXO is single-owned. In the
>> context of a shared-owned UTXO (e.g. LN), you might not benefit from it if
>> there is an in-mempool package already spending the UTXO, and you have to
>> assume the worst-case scenario, i.e. have B' commit enough fee to pay for
>> the A' replacement bandwidth. I think we can't do that much for this
>> case...
>>
>> > "If a package meets feerate requirements as a package, the parents in
>> > the transaction are allowed to replace-by-fee mempool transactions. The
>> > child cannot replace mempool transactions."
>>
>> I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts
>> A+B' to replace A+B because the first broadcast isn't satisfying anymore
>> due to mempool spikes? Assuming B' pays enough fee, I read that case as
>> child B' replacing the in-mempool transaction B, which I understand goes
>> against "The child cannot replace mempool transactions".
>>
>> Maybe the wording could be a bit clearer?
>>
>> > While it would be nice to have full RBF, malleability of the child won't
>> > block RBF here. If we're trying to replace A', we only require that A'
>> > signals replaceability, and don't mind if its child doesn't.
>>
>> Yes, that sounds good.
>>
>> > Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
>> > miner should prefer to utilize their block space more effectively.
>>
>> If your mempool is empty and only composed of A+C+D or A+B, isn't taking
>> A+C+D the most efficient block construction you can come up with as a
>> miner?
>>
>> > No, because we don't use that model.
>>
>> Can you describe what miner model we are using?
>> Like the block construction strategy implemented by `addPackageTxs`, or
>> also encompassing our current mempool acceptance policy, which I think
>> relies on absolute fee over ancestor score in the case of a replacement?
>>
>> I think this point is worth discussing, as otherwise we might downgrade
>> the efficiency of our current block construction strategy in periods of
>> near-empty mempools. Knowledge which could be discreetly leveraged by a
>> miner to gain an advantage over the rest of the mining ecosystem.
>>
>> Note, I think we *might* have to go in this direction if we want to
>> replace replace-by-fee by replace-by-feerate or replace-by-ancestor and
>> solve in-depth pinning attacks. Though if we do so, IMO we would need
>> more thought.
>>
>> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit this point later? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already confirmed. Or are there
>> currently-deployed use-cases which would benefit from your proposed Rule
>> #2?
>>
>> Antoine
>>
>> On Thu, Sep 23, 2021 at 11:36, Gloria Zhao wrote:
>>
>>> Hi Antoine,
>>>
>>> Thanks as always for your input. I'm glad we agree on so much!
>>>
>>> In summary, it seems that the decisions that might still need
>>> attention/input from devs on this mailing list are:
>>> 1. Whether we should start with multiple-parent-1-child or
>>> 1-parent-1-child.
>>> 2. Whether it's ok to require that the child not have conflicts with
>>> mempool transactions.
>>>
>>> Responding to your comments...
>>>
>>> > IIUC, you have package A+B; during the dedup phase early in
>>> > `AcceptMultipleTransactions`, if you observe same-txid-different-wtxid
>>> > A' and A' is higher feerate than A, you trim A and replace by A'?
>>>
>>> > I think this approach is safe; the one that appears unsafe to me is
>>> > when A' has a _lower_ feerate, even if A' is already accepted by our
>>> > mempool?
>>> > In that case iirc that would be a pinning.
>>>
>>> Right, the fact that we essentially always choose the first-seen witness
>>> is an unfortunate limitation that exists already. Adding package mempool
>>> accept doesn't worsen this, but the procedure in the future is to replace
>>> the witness when it makes sense economically. We can also add logic to
>>> allow package feerate to pay for witness replacements as well. This is
>>> pretty far into the future, though.
>>>
>>> > It sounds uneconomical for an attacker, but I think it's not when you
>>> > consider that you can "batch" an attack against multiple honest
>>> > counterparties. E.g., Mallory broadcasts A' + B' + C' + D', where A'
>>> > conflicts with Alice's honest package P1, B' conflicts with Bob's
>>> > honest package P2, and C' conflicts with Caroll's honest package P3.
>>> > And D' is a high-fee child of A' + B' + C'.
>>>
>>> > If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of the
>>> > HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?
>>>
>>> I could be misunderstanding, but an attacker wouldn't be able to
>>> batch-attack like this. Alice's package only conflicts with A' + D', not
>>> A' + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>>
>>> > Do we assume that broadcasted packages are "honest" by default and that
>>> > the parent(s) always need the child to pass the fee checks, that way
>>> > saving the processing of individual transactions which are expected to
>>> > fail in 99% of cases, or more ad hoc composition of packages at relay?
>>> > I think this point is quite dependent on the p2p package format/logic
>>> > we'll end up on and that we should feel free to revisit it later?
>>>
>>> I think it's the opposite; there's no way for us to assume that p2p
>>> packages will be "honest."
>>> I'd like to have two things before we expose this on P2P: (1) ensure
>>> that the amount of resources potentially allocated for package validation
>>> isn't disproportionately higher than that of single transaction
>>> validation, and (2) only use package validation when we're unsatisfied
>>> with the single validation result, e.g. we might get better fees.
>>> Yes, let's revisit this later :)
>>>
>>> > Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>> > discard its feerate as B should pay for all fees checked on its own.
>>> > Where I'm unclear is when you have in-mempool A+B and receive A+B'.
>>> > Should B' have a fee high enough to cover the bandwidth penalty
>>> > replacement (`PaysForRBF`, 2nd check) of both A+B', or only B'?
>>>
>>> B' only needs to pay for itself in this case.
>>>
>>> > > Do we want the child to be able to replace mempool transactions as
>>> > > well?
>>>
>>> > If we mean when you have replaceable A+B, then A'+B' try to replace
>>> > with a higher feerate? I think that's exactly the case we need for
>>> > Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>
>>> Let me clarify this because I can see that my wording was ambiguous, and
>>> then please let me know if it fits Lightning's needs?
>>>
>>> In my proposal, I wrote "If a package meets feerate requirements as a
>>> package, the parents in the transaction are allowed to replace-by-fee
>>> mempool transactions. The child cannot replace mempool transactions."
>>> What I meant was: the package can replace mempool transactions if any of
>>> the parents conflict with mempool transactions. The child cannot conflict
>>> with any mempool transactions.
>>> The Lightning use case this attempts to address is: Alice and Mallory are
>>> LN counterparties, and have packages A+B and A'+B', respectively. A and
>>> A' are their commitment transactions and conflict with each other; they
>>> have shared inputs and different txids.
>>> B spends Alice's anchor output from A. B' spends Mallory's anchor output
>>> from A'. Thus, B and B' do not conflict with each other.
>>> Alice can broadcast her package, A+B, to replace Mallory's package,
>>> A'+B', since B doesn't conflict with the mempool.
>>>
>>> Would this be ok?
>>>
>>> > The second option, a child of A'. In the LN case I think the CPFP is
>>> > attached to one's anchor output.
>>>
>>> While it would be nice to have full RBF, malleability of the child won't
>>> block RBF here. If we're trying to replace A', we only require that A'
>>> signals replaceability, and don't mind if its child doesn't.
>>>
>>> > > B has an ancestor score of 10sat/vB and D has an ancestor score of
>>> > > ~2.9sat/vB. Since D's ancestor score is lower than B's, it fails the
>>> > > proposed package RBF Rule #2, so this package would be rejected.
>>> > > Does this meet your expectations?
>>>
>>> > Well, what sounds odd to me in my example is that we fail D even if it
>>> > has a higher fee than B. Like A+B absolute fees are 2000 sats and
>>> > A+C+D absolute fees are 4500 sats?
>>>
>>> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
>>> miner should prefer to utilize their block space more effectively.
>>>
>>> > Is this compatible with a model where a miner prioritizes absolute fees
>>> > over ancestor score, in the case that mempools aren't full enough to
>>> > fill a block?
>>>
>>> No, because we don't use that model.
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard wrote:
>>>
>>>> > Correct, if B+C is too low feerate to be accepted, we will reject it.
>>>> > I prefer this because it is incentive compatible: A can be mined by
>>>> > itself, so there's no reason to prefer A+B+C instead of A.
>>>> > As another way of looking at this, consider the case where we do
>>>> > accept A+B+C and it sits at the "bottom" of our mempool.
>>>> > If our mempool reaches capacity, we evict the lowest descendant
>>>> > feerate transactions, which are B+C in this case. This gives us the
>>>> > same resulting mempool, with A and not B+C.
>>>>
>>>> I agree here. Doing otherwise, we might evict other mempool transactions
>>>> in `MempoolAccept::Finalize` with a higher feerate than B+C, while those
>>>> evicted transactions are the most compelling for block construction.
>>>>
>>>> I thought at first that missing this acceptance requirement would break
>>>> a fee-bumping scheme like Parent-Pays-For-Child, where a high-fee parent
>>>> is attached to a child signed with SIGHASH_ANYONECANPAY, but in this
>>>> case the child fee is capturing the parent value. I can't think of other
>>>> fee-bumping schemes potentially affected. If they do exist, I would say
>>>> they're wrong in their design assumptions.
>>>>
>>>> > If or when we have witness replacement, the logic is: if the
>>>> > individual transaction is enough to replace the mempool one, the
>>>> > replacement will happen during the preceding individual transaction
>>>> > acceptance, and deduplication logic will work. Otherwise, we will try
>>>> > to deduplicate by wtxid, see that we need a package witness
>>>> > replacement, and use the package feerate to evaluate whether this is
>>>> > economically rational.
>>>>
>>>> IIUC, you have package A+B; during the dedup phase early in
>>>> `AcceptMultipleTransactions`, if you observe same-txid-different-wtxid
>>>> A' and A' is higher feerate than A, you trim A and replace by A'?
>>>>
>>>> I think this approach is safe; the one that appears unsafe to me is when
>>>> A' has a _lower_ feerate, even if A' is already accepted by our mempool?
>>>> In that case iirc that would be a pinning.
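A rough sketch of the wtxid-aware dedup decision being discussed, with
hypothetical field names (this is not Bitcoin Core's actual data structure
or logic, just an illustration of the two cases):

```python
# Hypothetical sketch of deduplication for a package transaction vs. a
# mempool entry sharing the same txid but possibly a different witness.

def dedup_action(pkg_tx: dict, mempool_tx: dict) -> str:
    """Each dict carries assumed 'txid', 'wtxid', 'fee' (sat), 'vsize' (vB)."""
    if pkg_tx["txid"] != mempool_tx["txid"]:
        return "unrelated"
    if pkg_tx["wtxid"] == mempool_tx["wtxid"]:
        return "dedup"  # identical transaction already in the mempool: skip
    # Same txid, different witness. Today the first-seen witness is kept;
    # an economically rational policy would swap in only a better feerate.
    if pkg_tx["fee"] / pkg_tx["vsize"] > mempool_tx["fee"] / mempool_tx["vsize"]:
        return "witness-replace"
    return "keep-mempool"  # a lower-feerate A' staying put is the pinning case

a = {"txid": "t1", "wtxid": "w1", "fee": 100, "vsize": 100}
a_prime = {"txid": "t1", "wtxid": "w2", "fee": 100, "vsize": 150}
print(dedup_action(a, a_prime))  # witness-replace
```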
>>>>
>>>> Good to see progress on witness replacement before we see usage of the
>>>> Taproot tree in a multi-party context, where a malicious counterparty
>>>> inflates its witness to jam an honest spending.
>>>>
>>>> (Note, the commit linked currently points nowhere :))
>>>>
>>>> > Please note that A may replace A' even if A' has higher fees than A
>>>> > individually, because the proposed package RBF utilizes the fees and
>>>> > size of the entire package. This just requires E to pay enough fees,
>>>> > although this can be pretty high if there are also potential B' and
>>>> > C' competing commitment transactions that we don't know about.
>>>>
>>>> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for
>>>> the individual check on A, the honest package should replace the pinning
>>>> attempt. I've not fully parsed the proposed implementation yet.
>>>>
>>>> Though note, I think it's still unsafe for a Lightning
>>>> multi-commitment-broadcast-as-one-package, as a malicious A' might have
>>>> an absolute fee higher than E. It sounds uneconomical for an attacker,
>>>> but I think it's not when you consider that you can "batch" an attack
>>>> against multiple honest counterparties. E.g., Mallory broadcasts A' +
>>>> B' + C' + D', where A' conflicts with Alice's honest package P1, B'
>>>> conflicts with Bob's honest package P2, and C' conflicts with Caroll's
>>>> honest package P3. And D' is a high-fee child of A' + B' + C'.
>>>>
>>>> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of the
>>>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?
>>>>
>>>> > So far, my understanding is that multi-parent-1-child is desired for
>>>> > batched fee-bumping (
>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>> > and I've also seen your response which I have less context on (
>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>> > That being said, I am happy to create a new proposal for 1 parent + 1
>>>> > child (which would be slightly simpler) and plan for moving to
>>>> > multi-parent-1-child later if that is preferred. I am very interested
>>>> > in hearing feedback on that approach.
>>>>
>>>> I think batched fee-bumping is okay as long as you don't have
>>>> time-sensitive outputs encumbering your commitment transactions. For the
>>>> reasons mentioned above, I think that's unsafe.
>>>>
>>>> What I'm worried about is L2 developers potentially not aware of all the
>>>> mempool subtleties blurring the difference and always batching their
>>>> broadcasts by default.
>>>>
>>>> IMO it's a good thing: by restraining to 1 parent + 1 child, we
>>>> artificially constrain the L2 design space for now and minimize the
>>>> risks of unsafe usage of the package API :)
>>>>
>>>> I think that's a point where it would be relevant to have the opinion of
>>>> more L2 devs.
>>>>
>>>> > I think there is a misunderstanding here - let me describe what I'm
>>>> > proposing we'd do in this situation: we'll try individual submission
>>>> > for A, see that it fails due to "insufficient fees." Then, we'll try
>>>> > package validation for A+B and use package RBF. If A+B pays enough,
>>>> > it can still replace A'. If A fails for a bad signature, we won't
>>>> > look at B or A+B. Does this meet your expectations?
>>>>
>>>> Yes, there was a misunderstanding; I think this approach is correct,
>>>> it's more a question of performance. Do we assume that broadcasted
>>>> packages are "honest" by default and that the parent(s) always need the
>>>> child to pass the fee checks, that way saving the processing of
>>>> individual transactions which are expected to fail in 99% of cases, or
>>>> more ad hoc composition of packages at relay?
>>>>
>>>> I think this point is quite dependent on the p2p package format/logic
>>>> we'll end up on and that we should feel free to revisit it later?
>>>>
>>>> > What problem are you trying to solve by the package feerate *after*
>>>> > dedup rule?
>>>> > My understanding is that an in-package transaction might already be
>>>> > in the mempool. Therefore, to compute a correct RBF penalty
>>>> > replacement, the vsize of this transaction could be discarded,
>>>> > lowering the cost of package RBF.
>>>>
>>>> > I'm proposing that, when a transaction has already been submitted to
>>>> > mempool, we would ignore both its fees and vsize when calculating
>>>> > package feerate.
>>>>
>>>> Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>>> discard its feerate as B should pay for all fees checked on its own.
>>>> Where I'm unclear is when you have in-mempool A+B and receive A+B'.
>>>> Should B' have a fee high enough to cover the bandwidth penalty
>>>> replacement (`PaysForRBF`, 2nd check) of both A+B', or only B'?
>>>>
>>>> If you have a second layer like current Lightning, you might have a
>>>> counterparty commitment to replace and should always expect to have to
>>>> pay for parent replacement bandwidth.
>>>>
>>>> Where a potential discount sounds interesting is when you have an
>>>> unequivocal state on the first stage of transactions, e.g. a DLC's
>>>> funding transaction, which might be CPFP'd by any participant iirc.
>>>>
>>>> > Note that, if C' conflicts with C, it also conflicts with D, since D
>>>> > is a descendant of C and would thus need to be evicted along with it.
>>>>
>>>> Ah, once again I think it's a misunderstanding without the code under my
>>>> eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e.
>>>> mark D for potential eviction and don't consider it for future conflicts
>>>> in the rest of the package, I think D' `PreChecks` should be good?
>>>>
>>>> > More generally, this example is surprising to me because I didn't
>>>> > think packages would be used to fee-bump replaceable transactions. Do
>>>> > we want the child to be able to replace mempool transactions as well?
>>>>
>>>> If we mean, when you have replaceable A+B, then A'+B' try to replace
>>>> with a higher feerate? I think that's exactly the case we need for
>>>> Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>>
>>>> > I'm not sure what you mean? Let's say we have a package of parent A +
>>>> > child B, where A is supposed to replace a mempool transaction A'. Are
>>>> > you saying that counterparties are able to malleate the package child
>>>> > B, or a child of A'?
>>>>
>>>> The second option, a child of A'. In the LN case I think the CPFP is
>>>> attached to one's anchor output.
>>>>
>>>> I think it's good if we assume the solve-conflicts-after-parent's
>>>> `PreChecks` mentioned above, or fixing inherited signaling, or full-rbf?
>>>>
>>>> > Sorry, I don't understand what you mean by "preserve the package
>>>> > integrity?" Could you elaborate?
>>>>
>>>> After thinking about it, the relaxation around the "new" unconfirmed
>>>> input is not linked to trimming, but I would say more to the
>>>> multi-parent support.
>>>>
>>>> Let's say you have A+B trying to replace C+D, where B is also spending
>>>> the already in-mempool E. To succeed, you need to waive the
>>>> no-new-unconfirmed-input rule, as D isn't spending E.
>>>>
>>>> So good, I think we agree on the problem description here.
>>>>
>>>> > I am in agreement with your calculations but unsure if we disagree on
>>>> > the expected outcome. Yes, B has an ancestor score of 10sat/vB and D
>>>> > has an ancestor score of ~2.9sat/vB. Since D's ancestor score is
>>>> > lower than B's, it fails the proposed package RBF Rule #2, so this
>>>> > package would be rejected. Does this meet your expectations?
>>>>
>>>> Well, what sounds odd to me in my example is that we fail D even if it
>>>> has a higher fee than B. Like A+B absolute fees are 2000 sats and A+C+D
>>>> absolute fees are 4500 sats?
>>>>
>>>> Is this compatible with a model where a miner prioritizes absolute fees
>>>> over ancestor score, in the case that mempools aren't full enough to
>>>> fill a block?
>>>>
>>>> Let me know if I can clarify a point.
>>>>
>>>> Antoine
>>>>
>>>> On Mon, Sep 20, 2021 at 11:10, Gloria Zhao wrote:
>>>>
>>>>> Hi Antoine,
>>>>>
>>>>> First of all, thank you for the thorough review. I appreciate your
>>>>> insight on LN requirements.
>>>>>
>>>>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>> > already in your mempool. You trim out A from the package and then
>>>>> > evaluate B+C.
>>>>>
>>>>> > I think this might be an issue if A is the higher-fee element of the
>>>>> > ABC package. B+C package fees might be under the mempool min fee and
>>>>> > will be rejected, potentially breaking the acceptance expectations
>>>>> > of the package issuer?
>>>>>
>>>>> Correct, if B+C is too low feerate to be accepted, we will reject it. I
>>>>> prefer this because it is incentive compatible: A can be mined by
>>>>> itself, so there's no reason to prefer A+B+C instead of A.
>>>>> As another way of looking at this, consider the case where we do accept
>>>>> A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>>>> reaches capacity, we evict the lowest descendant feerate transactions,
>>>>> which are B+C in this case. This gives us the same resulting mempool,
>>>>> with A and not B+C.
>>>>>
>>>>> > Further, I think the dedup should be done on wtxid, as you might
>>>>> > have multiple valid witnesses, though with varying vsizes and as
>>>>> > such offering different feerates.
>>>>>
>>>>> I agree that variations of the same package with different witnesses
>>>>> are a case that must be handled.
>>>>> I consider witness replacement to be a project that can be done in
>>>>> parallel to package mempool acceptance, because being able to accept
>>>>> packages does not worsen the problem of a same-txid-different-witness
>>>>> "pinning" attack.
>>>>>
>>>>> If or when we have witness replacement, the logic is: if the individual
>>>>> transaction is enough to replace the mempool one, the replacement will
>>>>> happen during the preceding individual transaction acceptance, and
>>>>> deduplication logic will work. Otherwise, we will try to deduplicate by
>>>>> wtxid, see that we need a package witness replacement, and use the
>>>>> package feerate to evaluate whether this is economically rational.
>>>>>
>>>>> See the #22290 "handle package transactions already in mempool" commit
>>>>> (https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>>>>> which handles the case of same-txid-different-witness by simply using
>>>>> the transaction in the mempool for now, with TODOs for what I just
>>>>> described.
>>>>>
>>>>> > I'm not clearly understanding the accepted topologies. By "parent
>>>>> > and child to share a parent", do you mean the set of transactions A,
>>>>> > B, C, where B is spending A and C is spending A and B, would be
>>>>> > correct?
>>>>>
>>>>> Yes, that is what I meant. Yes, that would be a valid package under
>>>>> these rules.
>>>>>
>>>>> > If yes, is there a width-limit introduced or do we fall back on
>>>>> > MAX_PACKAGE_COUNT=25?
>>>>>
>>>>> No, there is no limit on connectivity other than "child with all
>>>>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and the
>>>>> child's in-mempool + in-package ancestor limits.
>>>>>
>>>>> > Considering the current Core mempool acceptance rules, I think CPFP
>>>>> > batching is unsafe for LN time-sensitive closures.
A malicious tx-relay jamming successful on one channel commitment transaction would contaminate the remaining commitments sharing the same package.
>>>>>
>>>>> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.
>>>>>
>>>>> Please note that A may replace A' even if A' has higher fees than A individually, because the proposed package RBF utilizes the fees and size of the entire package. This just requires E to pay enough fees, although this can be pretty high if there are also potential B' and C' competing commitment transactions that we don't know about.
>>>>>
>>>>> > IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.
>>>>>
>>>>> So far, my understanding is that multi-parent-1-child is desired for batched fee-bumping (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and I've also seen your response which I have less context on (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That being said, I am happy to create a new proposal for 1 parent + 1 child (which would be slightly simpler) and plan for moving to multi-parent-1-child later if that is preferred. I am very interested in hearing feedback on that approach.
>>>>>
>>>>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats, and we apply the individual RBF on A, A+B acceptance fails.
For this reason I think the individual RBF should be bypassed and only the package RBF apply ?
>>>>>
>>>>> I think there is a misunderstanding here - let me describe what I'm proposing we'd do in this situation: we'll try individual submission for A, see that it fails due to "insufficient fees." Then, we'll try package validation for A+B and use package RBF. If A+B pays enough, it can still replace A'. If A fails for a bad signature, we won't look at B or A+B. Does this meet your expectations?
>>>>>
>>>>> > What problem are you trying to solve by the package feerate *after* dedup rule ?
>>>>>
>>>>> > My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.
>>>>>
>>>>> I'm proposing that, when a transaction has already been submitted to mempool, we would ignore both its fees and vsize when calculating package feerate. In example G2, we shouldn't count M1 fees after its submission to mempool, since M1's fees have already been used to pay for its individual bandwidth, and they shouldn't be used again to pay for P2 and P3's bandwidth. We also shouldn't count its vsize, since it has already been paid for.
>>>>>
>>>>> > I think this is a footgunish API, as if a package issuer sends the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C, then tries to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ?
>>>>>
>>>>> Note that, if C' conflicts with C, it also conflicts with D, since D is a descendant of C and would thus need to be evicted along with it. Implicitly, D' would not be in conflict with D.
>>>>> More generally, this example is surprising to me because I didn't think packages would be used to fee-bump replaceable transactions. Do we want the child to be able to replace mempool transactions as well? This can be implemented with a bit of additional logic.
>>>>>
>>>>> > I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>>>>>
>>>>> I'm not sure what you mean? Let's say we have a package of parent A + child B, where A is supposed to replace a mempool transaction A'. Are you saying that counterparties are able to malleate the package child B, or a child of A'? If they can malleate a child of A', that shouldn't matter as long as A' is signaling replacement. This would be handled identically with full RBF and what Core currently implements.
>>>>>
>>>>> > I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>>
>>>>> Sorry, I don't understand what you mean by "preserve the package integrity?" Could you elaborate?
>>>>>
>>>>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15 sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.
>>>>>
>>>>> > Package A + B ancestor score is 10 sat/vb.
>>>>>
>>>>> > D has a higher feerate/absolute fee than B.
>>>>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>>>
>>>>> I am in agreement with your calculations but unsure if we disagree on the expected outcome. Yes, B has an ancestor score of 10 sat/vb and D has an ancestor score of ~2.9 sat/vb. Since D's ancestor score is lower than B's, it fails the proposed package RBF Rule #2, so this package would be rejected. Does this meet your expectations?
>>>>>
>>>>> Thank you for linking to projects that might be interested in package relay :)
>>>>>
>>>>> Thanks,
>>>>> Gloria
>>>>>
>>>>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail.com> wrote:
>>>>>
>>>>>> Hi Gloria,
>>>>>>
>>>>>> > A package may contain transactions that are already in the mempool. We remove ("deduplicate") those transactions from the package for the purposes of package mempool acceptance. If a package is empty after deduplication, we do nothing.
>>>>>>
>>>>>> IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.
>>>>>>
>>>>>> I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer ?
>>>>>>
>>>>>> Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses. Though with varying vsizes and as such offering different feerates.
>>>>>>
>>>>>> E.g you're going to evaluate the package A+B and A' is already in your mempool with a bigger valid witness. You trim A based on txid, then you evaluate A'+B, which fails the fee checks.
However, evaluating A+B would have been a success.
>>>>>>
>>>>>> AFAICT, the dedup rationale would be to save on CPU time/disk IO, to avoid repeated signature verification and parent UTXO fetches ? Can we achieve the same goal by bypassing tx-level checks for already-in txn while conserving the package integrity for package-level checks ?
>>>>>>
>>>>>> > Note that it's possible for the parents to be indirect descendants/ancestors of one another, or for parent and child to share a parent, so we cannot make any other topology assumptions.
>>>>>>
>>>>>> I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean the set of transactions A, B, C, where B is spending A and C is spending A and B would be correct ?
>>>>>>
>>>>>> If yes, is there a width-limit introduced or do we fall back on MAX_PACKAGE_COUNT=25 ?
>>>>>>
>>>>>> IIRC, one rationale to come with this topology limitation was to lower the DoS risks when potentially deploying p2p packages.
>>>>>>
>>>>>> Considering the current Core's mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay jamming successful on one channel commitment transaction would contaminate the remaining commitments sharing the same package.
>>>>>>
>>>>>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.
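The txid-based trimming questioned earlier in this exchange can be sketched as follows. This is a minimal illustration with an assumed dictionary data model, not Bitcoin Core code; it shows why deduplicating by txid silently substitutes a same-txid-different-witness mempool entry for the package's version:

```python
# Sketch of package deduplication keyed by txid (assumed data model).
def deduplicate(package, mempool_txids):
    """Drop package members already in the mempool, keyed by txid.
    Note: keying by txid means a same-txid-different-witness mempool
    entry (e.g. A' with a larger, lower-feerate witness) silently
    stands in for the package's own version of that transaction."""
    return [tx for tx in package if tx["txid"] not in mempool_txids]

package = [{"txid": "A", "wtxid": "A1"}, {"txid": "B", "wtxid": "B1"}]

# A' (same txid as A, different wtxid) is already in the mempool,
# so A is trimmed and only B is evaluated against A':
assert deduplicate(package, {"A"}) == [{"txid": "B", "wtxid": "B1"}]
# With an empty mempool, nothing is trimmed:
assert deduplicate(package, set()) == package
```

Deduplicating by wtxid instead, as suggested above, would keep A in the package when only A' is present, allowing the A+B feerate to be evaluated.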
>>>>>> That said, if you're broadcasting commitment transactions without time-sensitive HTLC outputs, I think the batching is effectively a fee saving as you don't have to duplicate the CPFP.
>>>>>>
>>>>>> IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.
>>>>>>
>>>>>> > *Rationale*: It would be incorrect to use the fees of transactions that are already in the mempool, as we do not want a transaction's fees to be double-counted for both its individual RBF and package RBF.
>>>>>>
>>>>>> I'm unsure about the logical order of the checks proposed.
>>>>>>
>>>>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats, and we apply the individual RBF on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF apply ?
>>>>>>
>>>>>> Note this situation is plausible, with current LN design, your counterparty can have a commitment transaction with a better fee just by selecting a higher `dust_limit_satoshis` than yours.
>>>>>>
>>>>>> > Examples F and G [14] show the same package, but P1 is submitted individually before the package in example G. In example F, we can see that the 300vB package pays an additional 200sat in fees, which is not enough to pay for its own bandwidth (BIP125#4). In example G, we can see that P1 pays enough to replace M1, but using P1's fees again during package submission would make it look like a 300sat increase for a 200vB package. Even including its fees and size would not be sufficient in this example, since the 300sat looks like enough for the 300vB package.
The calculation after deduplication is a 100sat increase for a package of size 200vB, which correctly fails BIP125#4. Assume all transactions have a size of 100vB.
>>>>>>
>>>>>> What problem are you trying to solve by the package feerate *after* dedup rule ?
>>>>>>
>>>>>> My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.
>>>>>>
>>>>>> If we keep a "safe" dedup mechanism (see my point above), I think this discount is justified, as the validation cost of node operators is paid for ?
>>>>>>
>>>>>> > The child cannot replace mempool transactions.
>>>>>>
>>>>>> Let's say you issue package A+B, then package C+B', where B' is a child of both A and C. This rule fails the acceptance of C+B' ?
>>>>>>
>>>>>> I think this is a footgunish API, as if a package issuer sends the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C, then tries to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ? And it might be unsafe in protocols where states are symmetric. E.g a malicious counterparty broadcasts first S+A, then you honestly broadcast S+B, where B pays better fees.
>>>>>>
>>>>>> > All mempool transactions to be replaced must signal replaceability.
>>>>>>
>>>>>> I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>>>>>>
>>>>>> I think it's better to either fix inherited signaling or move towards full-rbf.
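The Example F/G arithmetic discussed above can be sketched as a BIP125#4 check. This is a minimal sketch, assuming Bitcoin Core's default incremental relay feerate of 1 sat/vB; the helper name is illustrative, not an actual Bitcoin Core function:

```python
# Sketch of the BIP125#4 "pay for your own bandwidth" arithmetic
# from Examples F and G. Fees in sats, sizes in vbytes.
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, Bitcoin Core default (assumed)

def pays_for_own_bandwidth(fee_delta, package_vsize):
    """BIP125#4: the fee increase over the replaced transactions must
    cover relay of the replacement at the incremental relay feerate."""
    return fee_delta >= package_vsize * INCREMENTAL_RELAY_FEERATE

# Example F: the 300 vB package pays only 200 sat more than it replaces.
assert not pays_for_own_bandwidth(200, 300)

# Example G, double-counting P1's fees and size: 300 sat over 300 vB
# would look sufficient...
assert pays_for_own_bandwidth(300, 300)
# ...but after deduplication the true increase is 100 sat over 200 vB,
# which correctly fails.
assert not pays_for_own_bandwidth(100, 200)
```

This is why the proposal computes package feerate strictly after deduplication: counting an already-submitted parent's fees again would make the replacement look like it pays for bandwidth it does not.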
>>>>>> > if a package parent has already been submitted, it would look like the child is spending a "new" unconfirmed input.
>>>>>>
>>>>>> I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>>>
>>>>>> > However, we still achieve the same goal of requiring the replacement transactions to have an ancestor score at least as high as the original ones.
>>>>>>
>>>>>> I'm not sure if this holds...
>>>>>>
>>>>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15 sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.
>>>>>>
>>>>>> Package A + B ancestor score is 10 sat/vb.
>>>>>>
>>>>>> D has a higher feerate/absolute fee than B.
>>>>>>
>>>>>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>>>>
>>>>>> Overall, this is a review through the lenses of LN requirements. I think other L2 protocols/applications could be candidates to using package accept/relay such as:
>>>>>> * https://github.com/lightninglabs/pool
>>>>>> * https://github.com/discreetlogcontracts/dlcspecs
>>>>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>>>>> * https://github.com/sapio-lang/sapio
>>>>>> * https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>>>>> * https://github.com/revault/practical-revault
>>>>>>
>>>>>> Thanks for rolling forward the ball on this subject.
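The ancestor-score comparison in the A/B/C/D example above can be checked with a few lines of arithmetic. This is a sketch with an assumed helper, not Bitcoin Core code; fees are in sats and sizes in vbytes, using the figures from the example:

```python
# Ancestor feerate arithmetic for the A/B/C/D example above.
def ancestor_feerate(txs):
    """Total fees of a transaction and its ancestors divided by their
    total virtual size. txs: list of (fee_sats, vsize_vb) pairs."""
    return sum(fee for fee, _ in txs) / sum(vsize for _, vsize in txs)

A = (1000, 100)   # 10 sat/vB, in mempool
B = (1000, 100)   # 10 sat/vB, in mempool
C = (1000, 1000)  # 1 sat/vB
D = (1500, 100)   # 15 sat/vB, spends A and C

score_B = ancestor_feerate([A, B])     # 10.0 sat/vB
score_D = ancestor_feerate([A, C, D])  # 3500/1200, ~2.9 sat/vB

# Per the proposed package RBF Rule #2, the replacement's ancestor
# feerate must be at least as high as that of what it replaces, so
# D's ~2.9 sat/vB loses to B's 10 sat/vB and the package is rejected.
assert score_D < score_B
```

As Gloria's reply above confirms, D individually looks attractive (15 sat/vB) yet fails once C's 1000 vB at 1 sat/vB is dragged in, which is exactly what the ancestor-score comparison catches.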
>>>>>> Antoine
>>>>>>
>>>>>> On Thu, Sep 16, 2021 at 03:55, Gloria Zhao via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>>>>
>>>>>>> Hi there,
>>>>>>>
>>>>>>> I'm writing to propose a set of mempool policy changes to enable package validation (in preparation for package relay) in Bitcoin Core. These would not be consensus or P2P protocol changes. However, since mempool policy significantly affects transaction propagation, I believe this is relevant for the mailing list.
>>>>>>>
>>>>>>> My proposal enables packages consisting of multiple parents and 1 child. If you develop software that relies on specific transaction relay assumptions and/or are interested in using package relay in the future, I'm very interested to hear your feedback on the utility or restrictiveness of these package policies for your use cases.
>>>>>>>
>>>>>>> A draft implementation of this proposal can be found in [Bitcoin Core PR#22290][1].
>>>>>>>
>>>>>>> An illustrated version of this post can be found at https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a. I have also linked the images below.
>>>>>>>
>>>>>>> ## Background
>>>>>>>
>>>>>>> Feel free to skip this section if you are already familiar with mempool policy and package relay terminology.
>>>>>>>
>>>>>>> ### Terminology Clarifications
>>>>>>>
>>>>>>> * Package = an ordered list of related transactions, representable by a Directed Acyclic Graph.
>>>>>>> * Package Feerate = the total modified fees divided by the total virtual size of all transactions in the package.
>>>>>>>   - Modified fees = a transaction's base fees + fee delta applied by the user with `prioritisetransaction`. As such, we expect this to vary across mempools.
>>>>>>>   - Virtual Size = the maximum of virtual sizes calculated using [BIP141 virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
>>>>>>>   - Note that feerate is not necessarily based on the base fees and serialized size.
>>>>>>>
>>>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner incentives to boost a transaction's candidacy for inclusion in a block, including Child Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in mempool policy is to recognize when the new transaction is more economical to mine than the original one(s) but not open DoS vectors, so there are some limitations.
>>>>>>>
>>>>>>> ### Policy
>>>>>>>
>>>>>>> The purpose of the mempool is to store the best (to be most incentive-compatible with miners, highest feerate) candidates for inclusion in a block. Miners use the mempool to build block templates. The mempool is also useful as a cache for boosting block relay and validation performance, aiding transaction relay, and generating feerate estimations.
>>>>>>>
>>>>>>> Ideally, all consensus-valid transactions paying reasonable fees should make it to miners through normal transaction relay, without any special connectivity or relationships with miners. On the other hand, nodes do not have unlimited resources, and a P2P network designed to let any honest node broadcast their transactions also exposes the transaction validation engine to DoS attacks from malicious peers.
>>>>>>> As such, for unconfirmed transactions we are considering for our mempool, we apply a set of validation rules in addition to consensus, primarily to protect us from resource exhaustion and aid our efforts to keep the highest fee transactions. We call this mempool _policy_: a set of (configurable, node-specific) rules that transactions must abide by in order to be accepted into our mempool. Transaction "Standardness" rules and mempool restrictions such as "too-long-mempool-chain" are both examples of policy.
>>>>>>>
>>>>>>> ### Package Relay and Package Mempool Accept
>>>>>>>
>>>>>>> In transaction relay, we currently consider transactions one at a time for submission to the mempool. This creates a limitation in the node's ability to determine which transactions have the highest feerates, since we cannot take into account descendants (i.e. cannot use CPFP) until all the transactions are in the mempool. Similarly, we cannot use a transaction's descendants when considering it for RBF. When an individual transaction does not meet the mempool minimum feerate and the user isn't able to create a replacement transaction directly, it will not be accepted by mempools.
>>>>>>>
>>>>>>> This limitation presents a security issue for applications and users relying on time-sensitive transactions. For example, Lightning and other protocols create UTXOs with multiple spending paths, where one counterparty's spending path opens up after a timelock, and users are protected from cheating scenarios as long as they redeem on-chain in time. A key security assumption is that all parties' transactions will propagate and confirm in a timely manner.
This assumption can be broken if fee-bumping does not work as intended.
>>>>>>>
>>>>>>> The end goal for Package Relay is to consider multiple transactions at the same time, e.g. a transaction with its high-fee child. This may help us better determine whether transactions should be accepted to our mempool, especially if they don't meet fee requirements individually or are better RBF candidates as a package. A combination of changes to mempool validation logic, policy, and transaction relay allows us to better propagate the transactions with the highest package feerates to miners, and makes fee-bumping tools more powerful for users.
>>>>>>>
>>>>>>> The "relay" part of Package Relay suggests P2P messaging changes, but a large part of the changes are in the mempool's package validation logic. We call this *Package Mempool Accept*.
>>>>>>>
>>>>>>> ### Previous Work
>>>>>>>
>>>>>>> * Given that mempool validation is DoS-sensitive and complex, it would be dangerous to haphazardly tack on package validation logic. Many efforts have been made to make mempool validation less opaque (see [#16400][4], [#21062][5], [#22675][6], [#22796][7]).
>>>>>>> * [#20833][8] Added basic capabilities for package validation, test accepts only (no submission to mempool).
>>>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks for arbitrary packages. Still test accepts only.
>>>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>>>
>>>>>>> ### Existing Package Rules
>>>>>>>
>>>>>>> These are in master as introduced in [#20833][8] and [#21800][9].
>>>>>>> I'll consider them as "given" in the rest of this document, though they can be changed, since package validation is test-accept only right now.
>>>>>>>
>>>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>>>
>>>>>>> *Rationale*: This is already enforced as mempool ancestor/descendant limits. Presumably, transactions in a package are all related, so exceeding this limit would mean that the package can either be split up or it wouldn't pass this mempool policy.
>>>>>>>
>>>>>>> 2. Packages must be topologically sorted: if any dependencies exist between transactions, parents must appear somewhere before children. [8]
>>>>>>>
>>>>>>> 3. A package cannot have conflicting transactions, i.e. none of them can spend the same inputs. This also means there cannot be duplicate transactions. [8]
>>>>>>>
>>>>>>> 4. When packages are evaluated against ancestor/descendant limits in a test accept, the union of all of their descendants and ancestors is considered. This is essentially a "worst case" heuristic where every transaction in the package is treated as each other's ancestor and descendant. [8]
>>>>>>> Packages for which ancestor/descendant limits are accurately captured by this heuristic: [19]
>>>>>>>
>>>>>>> There are also limitations such as the fact that CPFP carve out is not applied to package transactions. #20833 also disables RBF in package validation; this proposal overrides that to allow packages to use RBF.
>>>>>>>
>>>>>>> ## Proposed Changes
>>>>>>>
>>>>>>> The next step in the Package Mempool Accept project is to implement submission to mempool, initially through RPC only. This allows us to test the submission logic before exposing it on P2P.
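Rules 1 through 3 above can be sketched as a single pass over the package. This is a simplified illustration with an assumed data model (dicts with `txid`, `vsize`, and `inputs` as (txid, vout) outpoints), not the actual C++ validation code:

```python
# Sketch of existing package sanity checks (rules 1-3 above).
MAX_PACKAGE_COUNT = 25
MAX_PACKAGE_SIZE_KVB = 101  # total package size limit, in KvB

def check_package_sanity(package):
    if len(package) > MAX_PACKAGE_COUNT:
        return False  # rule 1: transaction count limit
    if sum(tx["vsize"] for tx in package) > MAX_PACKAGE_SIZE_KVB * 1000:
        return False  # rule 1: total virtual size limit
    package_txids = {tx["txid"] for tx in package}
    seen, spent = set(), set()
    for tx in package:
        for outpoint in tx["inputs"]:
            if outpoint[0] in package_txids and outpoint[0] not in seen:
                return False  # rule 2: parent must appear before child
            if outpoint in spent:
                return False  # rule 3: conflicting (or duplicate) spend
        spent.update(tx["inputs"])
        seen.add(tx["txid"])
    return True

parent = {"txid": "P", "vsize": 100, "inputs": [("confirmed", 0)]}
child = {"txid": "C", "vsize": 100, "inputs": [("P", 0)]}
assert check_package_sanity([parent, child])
assert not check_package_sanity([child, parent])  # not topologically sorted
```

A conflicting sibling that also spends ("P", 0) would fail the rule-3 check the same way.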
>>>>>>> ### Summary
>>>>>>>
>>>>>>> - Packages may contain already-in-mempool transactions.
>>>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>>>> - Fee-related checks use the package feerate. This means that wallets can create a package that utilizes CPFP.
>>>>>>> - Parents are allowed to RBF mempool transactions with a set of rules similar to BIP125. This enables a combination of CPFP and RBF, where a transaction's descendant fees pay for replacing mempool conflicts.
>>>>>>>
>>>>>>> There is a draft implementation in [#22290][1]. It is WIP, but feedback is always welcome.
>>>>>>>
>>>>>>> ### Details
>>>>>>>
>>>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>>>
>>>>>>> A package may contain transactions that are already in the mempool. We remove ("deduplicate") those transactions from the package for the purposes of package mempool acceptance. If a package is empty after deduplication, we do nothing.
>>>>>>>
>>>>>>> *Rationale*: Mempools vary across the network. It's possible for a parent to be accepted to the mempool of a peer on its own due to differences in policy and fee market fluctuations. We should not reject or penalize the entire package for an individual transaction, as that could be a censorship vector.
>>>>>>>
>>>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>>>
>>>>>>> Only packages of a specific topology are permitted. Namely, a package is exactly 1 child with all of its unconfirmed parents. After deduplication, the package may be exactly the same, empty, 1 child, 1 child with just some of its unconfirmed parents, etc.
Note that it's possible for the parents to be indirect descendants/ancestors of one another, or for parent and child to share a parent, so we cannot make any other topology assumptions.
>>>>>>>
>>>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple parents makes it possible to fee-bump a batch of transactions. Restricting packages to a defined topology is also easier to reason about and simplifies the validation logic greatly. Multi-parent-1-child allows us to think of the package as one big transaction, where:
>>>>>>>
>>>>>>> - Inputs = all the inputs of parents + inputs of the child that come from confirmed UTXOs
>>>>>>> - Outputs = all the outputs of the child + all outputs of the parents that aren't spent by other transactions in the package
>>>>>>>
>>>>>>> Examples of packages that follow this rule (variations of example A show some possibilities after deduplication): ![image][15]
>>>>>>>
>>>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>>>
>>>>>>> Package Feerate = the total modified fees divided by the total virtual size of all transactions in the package.
>>>>>>>
>>>>>>> To meet the two feerate requirements of a mempool, i.e., the pre-configured minimum relay feerate (`minRelayTxFee`) and the dynamic mempool minimum feerate, the total package feerate is used instead of the individual feerate. The individual transactions are allowed to be below feerate requirements if the package meets the feerate requirements. For example, the parent(s) in the package can have 0 fees but be paid for by the child.
>>>>>>>
>>>>>>> *Rationale*: This can be thought of as "CPFP within a package," solving the issue of a parent not meeting minimum fees on its own.
This allows L2 applications to adjust their fees at broadcast time instead of overshooting or risking getting stuck/pinned.
>>>>>>>
>>>>>>> We use the package feerate of the package *after deduplication*.
>>>>>>>
>>>>>>> *Rationale*: It would be incorrect to use the fees of transactions that are already in the mempool, as we do not want a transaction's fees to be double-counted for both its individual RBF and package RBF.
>>>>>>>
>>>>>>> Examples F and G [14] show the same package, but P1 is submitted individually before the package in example G. In example F, we can see that the 300vB package pays an additional 200sat in fees, which is not enough to pay for its own bandwidth (BIP125#4). In example G, we can see that P1 pays enough to replace M1, but using P1's fees again during package submission would make it look like a 300sat increase for a 200vB package. Even including its fees and size would not be sufficient in this example, since the 300sat looks like enough for the 300vB package. The calculation after deduplication is a 100sat increase for a package of size 200vB, which correctly fails BIP125#4. Assume all transactions have a size of 100vB.
>>>>>>>
>>>>>>> #### Package RBF
>>>>>>>
>>>>>>> If a package meets feerate requirements as a package, the parents in the transaction are allowed to replace-by-fee mempool transactions. The child cannot replace mempool transactions. Multiple transactions can replace the same transaction, but in order to be valid, none of the transactions can try to replace an ancestor of another transaction in the same package (which would thus make its inputs unavailable).
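The multi-parent-1-child "one big transaction" view defined earlier (inputs = parent inputs plus the child's confirmed inputs; outputs = child outputs plus unspent parent outputs) can be sketched as follows. The data model (dicts with `txid`, `inputs` as (txid, vout) outpoints, and `n_outputs`) is an assumption for illustration, not Bitcoin Core's representation:

```python
# Sketch: collapse a multi-parent-1-child package into one "big" tx.
def package_as_one_tx(parents, child):
    in_package = {tx["txid"] for tx in parents} | {child["txid"]}
    # Inputs: all parent inputs, plus child inputs that do NOT spend an
    # in-package parent (i.e. they come from confirmed/external UTXOs).
    inputs = [i for p in parents for i in p["inputs"]]
    inputs += [i for i in child["inputs"] if i[0] not in in_package]
    # Outputs: parent outputs not spent within the package, plus all
    # outputs of the child.
    spent = {i for tx in parents + [child] for i in tx["inputs"]}
    outputs = [(tx["txid"], n) for tx in parents
               for n in range(tx["n_outputs"]) if (tx["txid"], n) not in spent]
    outputs += [(child["txid"], n) for n in range(child["n_outputs"])]
    return inputs, outputs

parent = {"txid": "A", "inputs": [("coinbase", 0)], "n_outputs": 2}
child = {"txid": "C", "inputs": [("A", 0), ("confirmed", 1)], "n_outputs": 1}
big_inputs, big_outputs = package_as_one_tx([parent], child)
assert big_inputs == [("coinbase", 0), ("confirmed", 1)]
assert big_outputs == [("A", 1), ("C", 0)]
```

Viewed this way, package feerate is simply the feerate of this one collapsed transaction, which is why the CPFP-style checks below compose cleanly.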
>>>>>>> *Rationale*: Even if we are using package feerate, a package will not propagate as intended if RBF still requires each individual transaction to meet the feerate requirements.
>>>>>>>
>>>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>>>
>>>>>>> ##### Signaling (Rule #1)
>>>>>>>
>>>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>>>
>>>>>>> *Rationale*: Signaling logic should be the same for package RBF and single transaction acceptance. This would be updated if single transaction validation moves to full RBF.
>>>>>>>
>>>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>>>
>>>>>>> A package may include new unconfirmed inputs, but the ancestor feerate of the child must be at least as high as the ancestor feerates of every transaction being replaced. This is contrary to BIP125#2, which states "The replacement transaction may only include an unconfirmed input if that input was included in one of the original transactions. (An unconfirmed input spends an output from a currently-unconfirmed transaction.)"
>>>>>>>
>>>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement transaction has a higher ancestor score than the original transaction(s) (see [comment][13]). Example H [16] shows how adding a new unconfirmed input can lower the ancestor score of the replacement transaction. P1 is trying to replace M1, and spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and M2 pays 100sat. Assume all transactions have a size of 100vB. While, in isolation, P1 looks like a better mining candidate than M1, it must be mined with M2, so its ancestor feerate is actually 4.5sat/vB.
>>>>>>> This is lower than M1's ancestor feerate, which is 6sat/vB.
>>>>>>>
>>>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the transactions in the package can spend new unconfirmed inputs." Example J [17] shows why, if any of the package transactions have ancestors, package feerate is no longer accurate. Even though M2 and M3 are not ancestors of P1 (which is the replacement transaction in an RBF), we're actually interested in the entire package. A miner should mine M1, which is 5sat/vB, instead of M2, M3, P1, P2, and P3, which are only 4sat/vB. The package RBF rule cannot be loosened to only allow the child to have new unconfirmed inputs, either, because it can still cause us to overestimate the package's ancestor score.
>>>>>>>
>>>>>>> However, enforcing a rule analogous to BIP125#2 would not only make package RBF less useful, but would also break package RBF for packages with parents already in the mempool: if a package parent has already been submitted, it would look like the child is spending a "new" unconfirmed input. In example K [18], we're looking to replace M1 with the entire package including P1, P2, and P3. We must consider the case where one of the parents is already in the mempool (in this case, P2), which means we must allow P3 to have new unconfirmed inputs. However, M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace M1 with this package.
>>>>>>>
>>>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less strict than BIP125#2. However, we still achieve the same goal of requiring the replacement transactions to have an ancestor score at least as high as the original ones.
>>>>>>> As a result, the entire package is required to be a higher-feerate mining candidate than each of the replaced transactions.
>>>>>>>
>>>>>>> Another note: the [comment][13] above the BIP125#2 code in the original RBF implementation suggests that the rule was intended to be temporary.
>>>>>>>
>>>>>>> ##### Absolute Fee (Rule #3)
>>>>>>>
>>>>>>> The package must increase the absolute fee of the mempool, i.e. the total fees of the package must be higher than the absolute fees of the mempool transactions it replaces. Combined with the CPFP rule above, this differs from BIP125 Rule #3 - an individual transaction in the package may have lower fees than the transaction(s) it is replacing. In fact, it may have 0 fees, and the child pays for RBF.
>>>>>>>
>>>>>>> ##### Feerate (Rule #4)
>>>>>>>
>>>>>>> The package must pay for its own bandwidth; the package feerate must be higher than the replaced transactions by at least the minimum relay feerate (`incrementalRelayFee`). Combined with the CPFP rule above, this differs from BIP125 Rule #4 - an individual transaction in the package can have a lower feerate than the transaction(s) it is replacing. In fact, it may have 0 fees, and the child pays for RBF.
>>>>>>>
>>>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>>>
>>>>>>> The package cannot replace more than 100 mempool transactions. This is identical to BIP125 Rule #5.
>>>>>>>
>>>>>>> ### Expected FAQs
>>>>>>>
>>>>>>> 1. Is it possible for only some of the package to make it into the mempool?
>>>>>>>
>>>>>>>    Yes, it is. However, since we evict transactions from the mempool by descendant score and the package child is supposed to be sponsoring the fees of its parents, the most common scenario would be all-or-nothing.
>>>>>>>    This is incentive-compatible. In fact, to be conservative, package validation should begin by trying to submit all of the transactions individually, and only use the package mempool acceptance logic if the parents fail due to low feerate.
>>>>>>>
>>>>>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>>>>>
>>>>>>>    No, for practical reasons. In mempool validation, we actually aren't able to tell with 100% confidence if we are looking at a transaction that has already confirmed, because we look up inputs using a UTXO set. If we have historical block data, it's possible to look for it, but this is inefficient, not always possible for pruning nodes, and unnecessary because we're not going to do anything with the transaction anyway. As such, we already have the expectation that transaction relay is somewhat "stateful", i.e. nobody should be relaying transactions that have already been confirmed. Similarly, we shouldn't be relaying packages that contain already-confirmed transactions.
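The Rule #2 arithmetic from example H above can be checked with a short sketch (the dict layout is illustrative, not Bitcoin Core's data structures):

```python
def ancestor_feerate(tx, unconfirmed_ancestors):
    """Ancestor feerate = (tx fee + ancestor fees) / (tx vsize + ancestor vsizes)."""
    fees = tx["fee"] + sum(a["fee"] for a in unconfirmed_ancestors)
    vsize = tx["vsize"] + sum(a["vsize"] for a in unconfirmed_ancestors)
    return fees / vsize

# Example H: all transactions are 100vB. P1 (800sat) tries to replace
# M1 (600sat) but spends an unconfirmed output of M2 (100sat).
p1 = {"fee": 800, "vsize": 100}
m1 = {"fee": 600, "vsize": 100}
m2 = {"fee": 100, "vsize": 100}
print(ancestor_feerate(p1, [m2]))  # 4.5 sat/vB
print(ancestor_feerate(m1, []))    # 6.0 sat/vB, so Rule #2 rejects P1
```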
>>>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>>>> [2]: https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>>>> [3]: https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>>>> [13]: https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>>>> [14]: https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>>>> [15]: https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>>>> [16]: https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>>>> [17]: https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>>>> [18]: https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>>>> [19]: https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>>>> [20]: https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>>>> _______________________________________________
>>>>>>> bitcoin-dev mailing list
>>>>>>> bitcoin-dev@lists.linuxfoundation.org
>>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Hi Bastien

> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...

I share the concern about splitting utxos into smaller ones.
IIRC, the carve-out tolerance is only 2 txn / 10_000 vB. If one of your counterparties attaches a junk branch to her own anchor output, are you still allowed to chain your self-owned unconfirmed CPFP?
I'm thinking about the "Chained CPFPs" topology exposed here: https://github.com/rust-bitcoin/rust-lightning/issues/989.
Or is there another L2 broadcast topology which could be safe w.r.t. our current mempool logic :) ?


On Mon, Sep 27, 2021 at 03:15, Bastien TEINTURIER <bastien@acinq.fr> wrote:

> I think we could restrain package acceptance to only confirmed inputs for now and revisit this point later? For LN-anchor, you can assume that the fee-bumping UTXO feeding the CPFP is already confirmed.
> Or are there currently-deployed use-cases which would benefit from your proposed Rule #2?

I think constraining package acceptance to only confirmed inputs
is very limiting and quite dangerous for L2 protocols.

In the case of LN, an attacker can game this and heavily restrict
your RBF attempts if you're only allowed to use confirmed inputs
and have many channels (and a limited number of confirmed inputs).
Otherwise you'll need node operators to pre-emptively split their
utxos into many small utxos just for fee bumping, which is inefficient...

Bastien

On Mon, Sep 27, 2021 at 00:27, Antoine Riard via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

Hi Gloria,

Thanks for your answers,

> In summary, it seems that the decisions that might still need
> attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with
> mempool transactions.

Yes, on 1) it would be good to have input from more potential users of package acceptance. And on 2), I think it's more a matter of clearer wording in the proposal.

However, see my final point on the relaxation around "unconfirmed inputs", which might in fact alter our current block construction strategy.

> Right, the fact that we essentially always choose the first-seen witness is
> an unfortunate limitation that exists already. Adding package mempool
> accept doesn't worsen this, but the procedure in the future is to replace
> the witness when it makes sense economically. We can also add logic to
> allow package feerate to pay for witness replacements as well. This is
> pretty far into the future, though.

Yes, I agree package mempool acceptance doesn't worsen this. And it's not an issue for current LN, as you can't significantly inflate a spending witness for the 2-of-2 funding output.
However, it might be an issue for multi-party protocols where the spending script has alternative branches with asymmetric valid witness weights. Taproot should ease that kind of script, so hopefully we would deploy wtxid-replacement not too far in the future.

> I could be misunderstanding, but an attacker wouldn't be able to
> batch-attack like this. Alice's package only conflicts with A' + D', not A'
> + B' + C' + D'. She only needs to pay for evicting 2 transactions.

Yeah, I can be clearer. I think there are 2 pinning attack scenarios to consider.

In LN, if you're trying to confirm a commitment transaction to time-out or claim an HTLC on-chain and the timelock is near expiration, you should be ready to pay as much in commitment + 2nd-stage HTLC transaction fees as the value offered by the HTLC.

Following this security assumption, an attacker can exploit it by targeting commitment transactions from different channels together, blocking them under a high-fee child whose fee value is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't overbid, as it's not worth offering fees beyond their competing HTLCs. Apart from observing mempool state, victims can't learn they're targeted by the same attacker.

To draw from the aforementioned topology, Mallory broadcasts A' + B' + C' + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, and C' conflicts with Caroll's P3. Let's assume P1 is confirming the top-value HTLC of the set. If D' pays a fee higher than P1's + 1, it won't be rational for Alice or Bob or Caroll to keep offering competing feerates. Mallory will be at a loss on stealing P1, as she has paid more in fees, but will realize a gain on P2+P3.

In this model, Alice is allowed to evict those 2 transactions (A' + D'), but as she is economically bounded, she won't succeed.

Mallory is maliciously exploiting RBF rule #3 on absolute fee. I think this 1st pinning scenario is correct and "lucrative" when you sum the global gain/loss.
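A toy model of the batched-pinning economics described above; all values are hypothetical sats, and the model deliberately ignores the fees of A'/B'/C' themselves:

```python
# Hypothetical HTLC values (sats) each victim is defending; not real figures.
htlc_value = {"P1": 50_000, "P2": 40_000, "P3": 40_000}
top_value = max(htlc_value.values())

# D' outbids the top-value HTLC by 1 sat, so no victim rationally keeps
# competing: each would spend more in fees than the HTLC they defend.
d_prime_fee = top_value + 1

# Mallory nets a small loss on the channel she actually contests (P1),
# but collects the HTLCs pinned past their timeout on the other channels.
net = (htlc_value["P1"] - d_prime_fee) + htlc_value["P2"] + htlc_value["P3"]
print(net > 0)  # True: the batched attack is profitable overall
```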

There is a 2nd attack scenario with A + B + C + D, where D is the child of A, B, C. All those transactions are honestly issued by Alice. Once A + B + C + D are propagated in network mempools, Mallory is able to replace A + D with A' + D', where D' is paying a higher fee. This package A' + D' will confirm soon if D's feerate was compelling, but Mallory succeeds in delaying the confirmation of B + C for one or more blocks. As B + C are pre-signed commitments with a low feerate, they won't confirm without Alice issuing a new child E. Mallory can repeat the same trick by broadcasting B' + E' and delay again the confirmation of C.

If the remaining package's pending HTLC has a higher value than all the malicious fee over-bids, Mallory should realize a gain. With this 2nd pinning attack, the malicious entity buys confirmation delay of your packaged-together commitments.

Assuming those attacks are correct, I'm leaning towards being conservative with the LDK broadcast backend. Though once again, other L2 devs likely have other use-cases and opinions :)

> B' only needs to pay for itself in this case.

Yes, I think it's a nice discount when the UTXO is single-owned. In the context of a shared-owned UTXO (e.g. LN), you might not get it if there is an in-mempool package already spending the UTXO, and you have to assume the worst-case scenario, i.e. have B' committing enough fee to pay for A's replacement bandwidth. I think we can't do much for this case...

> If a package meets feerate requirements as a
> package, the parents in the transaction are allowed to replace-by-fee
> mempool transactions. The child cannot replace mempool transactions."

I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts A+B' to replace A+B because the first broadcast isn't satisfying anymore due to mempool spikes? Assuming B's fees are enough, I see that case as child B' replacing in-mempool transaction B, which I understand goes against "The child cannot replace mempool transactions".

Maybe the wording could be a bit clearer?

> While it would be nice to have full RBF, malleability of the child won't
> block RBF here. If we're trying to replace A', we only require that A'
> signals replaceability, and don't mind if its child doesn't.

Yes, it sounds good.

> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A miner
> should prefer to utilize their block space more effectively.

If your mempool is empty and only composed of A+C+D or A+B, I think taking A+C+D is the most efficient block construction you can come up with as a miner?
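A quick way to see the trade-off with the numbers from this thread (absolute sizes are inferred from the "1000vB larger" remark, so treat them as assumptions; only the delta matters):

```python
# Fees from the thread: A+B pays 2000 sat; A+C+D pays 4500 sat but is 1000 vB larger.
fee_ab, vsize_ab = 2000, 200          # vsize_ab is an assumed placeholder
fee_acd, vsize_acd = 4500, vsize_ab + 1000

# Marginal feerate of taking A+C+D instead of A+B:
marginal = (fee_acd - fee_ab) / (vsize_acd - vsize_ab)
print(marginal)  # 2.5 sat/vB for the extra block space

# With a near-empty mempool and spare block space, absolute fees win (4500 > 2000).
# With a congested mempool, A+C+D is only the better template if the 1000 vB of
# displaced competing transactions pay less than 2.5 sat/vB.
```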

> No, because we don't use that model.

Can you describe what miner model we are using? Like the block construction strategy implemented by `addPackageTxs`, or also encompassing our current mempool acceptance policy, which I think relies on absolute fee over ancestor score in the case of replacement?

I think this point is worth discussing, as otherwise we might downgrade the efficiency of our current block construction strategy in periods of near-empty mempools. That knowledge could be discreetly leveraged by a miner to gain an advantage over the rest of the mining ecosystem.

Note, I think we *might* have to go in this direction if we want to replace replace-by-fee with replace-by-feerate or replace-by-ancestor and solve in-depth pinning attacks. Though if we do so, IMO we would need more thought.

I think we could restrain package acceptance to only confirmed inputs for now and revisit this point later? For LN-anchor, you can assume that the fee-bumping UTXO feeding the CPFP is already confirmed. Or are there currently-deployed use-cases which would benefit from your proposed Rule #2?

Antoine

On Thu, Sep 23, 2021 at 11:36, Gloria Zhao <gloriajzhao@gmail.com> wrote:
Hi Antoine,

Thanks as always for your input. I'm glad we agree on so much!

In summary, it seems that the decisions that might still need attention/input from devs on this mailing list are:
1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
2. Whether it's ok to require that the child not have conflicts with mempool transactions.

Responding to your comments...

> IIUC, you have package A+B, during the dedup phase early in `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A' and A' is higher feerate than A, you trim A and replace by A'?

> I think this approach is safe, the one who appears unsafe to me is when A' has a _lower_ feerate, even if A' is already accepted by our mempool? In that case iirc that would be a pinning.

Right, the fact that we essentially always choose the first-seen witness is an unfortunate limitation that exists already. Adding package mempool accept doesn't worsen this, but the procedure in the future is to replace the witness when it makes sense economically. We can also add logic to allow package feerate to pay for witness replacements as well. This is pretty far into the future, though.

> It sounds uneconomical for an attacker but I think it's not when you consider that you can "batch" attack against multiple honest counterparties. E.g., Mallory broadcasts A' + B' + C' + D' where A' conflicts with Alice's honest package P1, B' conflicts with Bob's honest package P2, C' conflicts with Caroll's honest package P3. And D' is a high-fee child of A' + B' + C'.

> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?

I could be misunderstanding, but an attacker wouldn't be able to batch-attack like this. Alice's package only conflicts with A' + D', not A' + B' + C' + D'. She only needs to pay for evicting 2 transactions.

> Do we assume that broadcasted packages are "honest" by default and that the parent(s) always need the child to pass the fee checks, that way saving the processing of individual transactions which are expected to fail in 99% of cases, or more ad hoc composition of packages at relay?
> I think this point is quite dependent on the p2p packages format/logic we'll end up on and that we should feel free to revisit it later?

I think it's the opposite; there's no way for us to assume that p2p packages will be "honest." I'd like to have two things before we expose on P2P: (1) ensure that the amount of resources potentially allocated for package validation isn't disproportionately higher than that of single transaction validation and (2) only use package validation when we're unsatisfied with the single validation result, e.g. we might get better fees.
Yes, let's revisit this later :)
> Yes, if you receive A+B, and A is already in-mempool, I agree you can discard its feerate as B should pay for all fees checked on its own. Where I'm unclear is when you have in-mempool A+B and receive A+B'. Should B' have a fee high enough to cover the bandwidth penalty replacement (`PaysForRBF`, 2nd check) of both A+B' or only B'?

B' only needs to pay for itself in this case.

> > Do we want the child to be able to replace mempool transactions as well?
> If we mean when you have replaceable A+B then A'+B' try to replace with a higher feerate? I think that's exactly the case we need for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/

Let me clarify this, because I can see that my wording was ambiguous. Please let me know if it fits Lightning's needs?

In my proposal, I wrote "If a package meets feerate requirements as a package, the parents in the transaction are allowed to replace-by-fee mempool transactions. The child cannot replace mempool transactions." What I meant was: the package can replace mempool transactions if any of the parents conflict with mempool transactions. The child cannot conflict with any mempool transactions.
The Lightning use case this attempts to address is: Alice and Mallory are LN counterparties, and have packages A+B and A'+B', respectively. A and A' are their commitment transactions and conflict with each other; they have shared inputs and different txids.
B spends Alice's anchor output from A. B' spends Mallory's anchor output from A'. Thus, B and B' do not conflict with each other.
Alice can broadcast her package, A+B, to replace Mallory's package, A'+B', since B doesn't conflict with the mempool.

Would this be ok?
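If it helps, the clarified rule can be sketched as a conflict check, modelling a transaction as the outpoints it spends (a simplification; real conflict detection works against the mempool's spent-outpoint index):

```python
def package_conflict_rule_ok(parents, child, mempool_spent_outpoints):
    """Sketch of the clarified rule: parents may conflict with (and thus
    replace) mempool transactions, but the child may not conflict at all.
    Outpoints are (txid, vout) tuples; names below are illustrative."""
    return not any(op in mempool_spent_outpoints for op in child["spends"])

# Alice's A+B vs Mallory's in-mempool A'+B': A conflicts with A' through the
# shared funding outpoint, but B spends only Alice's anchor output on A,
# which nothing in the mempool spends.
mempool_spends = {("funding", 0), ("A'", 0)}   # spent by A' and B'
a = {"spends": [("funding", 0)]}               # conflicts with A' (allowed)
b = {"spends": [("A", 0)]}                     # Alice's anchor (no conflict)
print(package_conflict_rule_ok([a], b, mempool_spends))  # True
```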

> The second option, a child of A'. In the LN case I think the CPFP is attached on one's anchor output.

While it would be nice to have full RBF, malleability of the child won't block RBF here. If we're trying to replace A', we only require that A' signals replaceability, and don't mind if its child doesn't.

> > B has an ancestor score of 10sat/vb and D has an
> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> > it fails the proposed package RBF Rule #2, so this package would be
> > rejected. Does this meet your expectations?

> Well what sounds odd to me, in my example, we fail D even if it has a higher fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute fees are 4500 sats?

Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A miner should prefer to utilize their block space more effectively.

> Is this compatible with a model where a miner prioritizes absolute fees over ancestor score, in the case that mempools aren't full enough to fill a block?

No, = because we don't use that model.

Thanks,
Gloria

On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail.com> wrote:
> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in `MempoolAccept::Finalize` with a higher feerate than B+C, while those evicted transactions are the most compelling for block construction.

I thought at first that missing this acceptance requirement would break a fee-bumping scheme like Parent-Pays-For-Child, where a high-fee parent is attached to a child signed with SIGHASH_ANYONECANPAY, but in this case the child fee is capturing the parent value. I can't think of other fee-bumping schemes potentially affected. If they do exist, I would say they're wrong in their design assumptions.

> If= or when we have witness replacement, the logic is: if the individual
&g= t; transaction is enough to replace the mempool one, the replacement will> happen during the preceding individual transaction acceptance, and> deduplication logic will work. Otherwise, we will try to deduplicate= by
> wtxid, see that we need a package witness replacement, and use = the package
> feerate to evaluate whether this is economically ration= al.

IIUC, you have package A+B, during the dedup phase early in `Acc= eptMultipleTransactions` if you observe same-txid-different-wtixd A' an= d A' is higher feerate than A, you trim A and replace by A' ?
I think this approach is safe, the one who appears unsafe to me is when A= ' has a _lower_ feerate, even if A' is already accepted by our memp= ool ? In that case iirc that would be a pinning.

Good to see progress on witness replacement before we see usage of Taproot trees in the context of multi-party protocols, where a malicious counterparty inflates its witness to jam an honest spending.

(Note, the commit linked currently points nowhere :))


> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the individual check on A, the honest package should replace the pinning attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning multi-commitment-broadcast-as-one-package, as a malicious A' might have an absolute fee higher than E. It sounds uneconomical for an attacker, but I think it's not when you consider that you can "batch" attack against multiple honest counterparties. E.g., Mallory broadcasts A' + B' + C' + D', where A' conflicts with Alice's honest package P1, B' conflicts with Bob's honest package P2, and C' conflicts with Caroll's honest package P3. And D' is a high-fee child of A' + B' + C'.

If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker?

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have time-sensitive outputs encumbering your commitment transactions. For the reasons mentioned above, I think that's unsafe.

What I'm worried about is L2 developers, potentially not aware of all the mempool subtleties, blurring the difference and always batching their broadcasts by default.

IMO, it's a good thing that by restraining to 1-parent + 1-child, we artificially constrain the L2 design space for now and minimize the risks of unsafe usage of the package API :)

I think that's a point where it would be relevant to have the opinion of more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?

Yes, there was a misunderstanding; I think this approach is correct, it's more a question of performance. Do we assume that broadcasted packages are "honest" by default and that the parent(s) always need the child to pass the fee checks, that way saving the processing of individual transactions which are expected to fail in 99% of cases, or more ad hoc composition of packages at relay?

I think this point is quite dependent on the p2p packages format/logic we'll end up on and that we should feel free to revisit it later?


> What problem are you trying to solve by the package feerate *after* dedup rule?
> My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.

> I'= m proposing that, when a transaction has already been submitted to
> = mempool, we would ignore both its fees and vsize when calculating package> feerate.

Yes, if you receive A+B, and A is already in-mempool, I agree you can discard its feerate as B should pay for all fees checked on its own. Where I'm unclear is when you have in-mempool A+B and receive A+B'. Should B' have a fee high enough to cover the bandwidth penalty replacement (`PaysForRBF`, 2nd check) of both A+B' or only B'?

If you have a second layer like current Lightning, you might have a counterparty commitment to replace and should always expect to have to pay for parent replacement bandwidth.

Where a potential discount sounds interesting is when you have an unambiguous state on the first stage of transactions, e.g. a DLC's funding transaction which might be CPFPed by any participant iirc.

> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.

Ah, once again I think it's a misunderstanding without the code under my eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e. mark D for potential eviction and don't consider it for future conflicts in the rest of the package, I think D' `PreChecks` should be good?

> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well?

If we mean when you have replaceable A+B then A'+B' try to replace with a higher feerate? I think that's exactly the case we need for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/

> I'm not sure what you mean? Let's say we have a package of parent A + child
> B, where A is supposed to replace a mempool transaction A'. Are you saying
> that counterparties are able to malleate the package child B, or a child of
> A'?

The second option, a child of A'. In the LN case I think the CPFP is attached on one's anchor output.

I think it's good if we assume the solve-conflicts-after-parent's-`PreChecks` mentioned above, or fixing inherited signaling, or full-RBF?

> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?
After thinking about it, the relaxation about the "new" unconfirmed input is not linked to trimming, but I would say more to the multi-parent support.

Let's say you have A+B trying to replace C+D, where B is also spending already-in-mempool E. To succeed, you need to waive the no-new-unconfirmed-input rule, as D isn't spending E.

So good, I think we agree on the problem description here.

> I am in agreement with your calculations, but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?

Well, what sounds odd to me is that, in my example, we fail D even if it has a higher fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute fees are 4500 sats?

Is this compatible with a model where a miner prioritizes absolute fees over ancestor score, in the case that mempools aren't full enough to fill a block?

Let me know if I can clarify a point.

Antoine

On Mon, Sep 20, 2021 at 11:10, Gloria Zhao <gloriajzhao@gmail.com> wrote:

Hi Antoine,

First of all, thank you for the thorough review. I appreciate your insight on LN requirements.

> IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.
> I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer ?

Correct, if B+C is too low feerate to be accepted, we will reject it. I prefer this because it is incentive compatible: A can be mined by itself, so there's no reason to prefer A+B+C instead of A.
As another way of looking at this, consider the case where we do accept A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches capacity, we evict the lowest descendant feerate transactions, which are B+C in this case. This gives us the same resulting mempool, with A and not B+C.


> Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses. Though with varying vsizes and as such offering different feerates.

I agree that variations of the same package with different witnesses is a case that must be handled. I consider witness replacement to be a project that can be done in parallel to package mempool acceptance because being able to accept packages does not worsen the problem of a same-txid-different-witness "pinning" attack.

If or when we have witness replacement, the logic is: if the individual transaction is enough to replace the mempool one, the replacement will happen during the preceding individual transaction acceptance, and deduplication logic will work. Otherwise, we will try to deduplicate by wtxid, see that we need a package witness replacement, and use the package feerate to evaluate whether this is economically rational.

See the #22290 "handle package transactions already in mempool" commit (https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff), which handles the case of same-txid-different-witness by simply using the transaction in the mempool for now, with TODOs for what I just described.


> I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean the set of transactions A, B, C, where B is spending A and C is spending A and B would be correct ?

Yes, that is what I meant. Yes, that would be a valid package under these rules.

> If yes, is there a width-limit introduced or we fallback on MAX_PACKAGE_COUNT=25 ?

No, there is no limit on connectivity other than "child with all unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's in-mempool + in-package ancestor limits.


> Considering the current Core's mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay jamming successful on one channel commitment transaction would contaminate the remaining commitments sharing the same package.

> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.

Please note that A may replace A' even if A' has higher fees than A individually, because the proposed package RBF utilizes the fees and size of the entire package. This just requires E to pay enough fees, although this can be pretty high if there are also potential B' and C' competing commitment transactions that we don't know about.

> IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.

So far, my understanding is that multi-parent-1-child is desired for batched fee-bumping (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and I've also seen your response which I have less context on (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That being said, I am happy to create a new proposal for 1 parent + 1 child (which would be slightly simpler) and plan for moving to multi-parent-1-child later if that is preferred. I am very interested in hearing feedback on that approach.


> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF apply ?

I think there is a misunderstanding here - let me describe what I'm proposing we'd do in this situation: we'll try individual submission for A, see that it fails due to "insufficient fees." Then, we'll try package validation for A+B and use package RBF. If A+B pays enough, it can still replace A'. If A fails for a bad signature, we won't look at B or A+B. Does this meet your expectations?
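The order of operations described here can be sketched as a toy model. Everything below (the `ToyMempool` class, result strings, and the 1 sat/vB floor) is an illustrative assumption, not Bitcoin Core's actual API:

```python
# Hypothetical sketch of the validation order: try the parent individually
# first, then fall back to package validation with the combined fees/vsize.

class ToyMempool:
    MIN_FEERATE = 1.0  # sat/vB; stand-in for the mempool minimum feerate

    def try_accept(self, tx):
        if not tx["valid_sig"]:
            return "invalid"
        if tx["fee"] / tx["vsize"] < self.MIN_FEERATE:
            return "insufficient-fees"
        return "accepted"

    def try_accept_package(self, txs):
        # Package feerate = combined fees / combined vsize.
        if any(not tx["valid_sig"] for tx in txs):
            return "invalid"
        fee = sum(tx["fee"] for tx in txs)
        vsize = sum(tx["vsize"] for tx in txs)
        return "accepted" if fee / vsize >= self.MIN_FEERATE else "insufficient-fees"

def submit(parent, child, mempool):
    res = mempool.try_accept(parent)
    if res == "accepted":
        return mempool.try_accept(child)
    if res == "invalid":  # e.g. a bad signature: don't look at B or A+B
        return "invalid"
    # Parent alone pays too little: retry as a package.
    return mempool.try_accept_package([parent, child])

# A pays 0 sats, B pays 200 sats (100 vB each): A alone fails, A+B succeeds.
A = {"fee": 0, "vsize": 100, "valid_sig": True}
B = {"fee": 200, "vsize": 100, "valid_sig": True}
assert submit(A, B, ToyMempool()) == "accepted"
```

The package-RBF comparison against a conflicting A' is omitted here; the sketch only shows why individual failure for "insufficient fees" is not terminal.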


> What problem are you trying to solve by the package feerate *after* dedup rule ?
> My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.

I'm proposing that, when a transaction has already been submitted to mempool, we would ignore both its fees and vsize when calculating package feerate. In example G2, we shouldn't count M1 fees after its submission to mempool, since M1's fees have already been used to pay for its individual bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth. We also shouldn't count its vsize, since it has already been paid for.


> I think this is a footgunish API, as if a package issuer send the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C. Then try to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ?

Note that, if C' conflicts with C, it also conflicts with D, since D is a descendant of C and would thus need to be evicted along with it. Implicitly, D' would not be in conflict with D.
More generally, this example is surprising to me because I didn't think packages would be used to fee-bump replaceable transactions. Do we want the child to be able to replace mempool transactions as well? This can be implemented with a bit of additional logic.

> I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I'm not sure what you mean? Let's say we have a package of parent A + child B, where A is supposed to replace a mempool transaction A'. Are you saying that counterparties are able to malleate the package child B, or a child of A'? If they can malleate a child of A', that shouldn't matter as long as A' is signaling replacement. This would be handled identically with full RBF and what Core currently implements.

> I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package; the bip125 rule2 is only evaluated against parents ?

Sorry, I don't understand what you mean by "preserve the package integrity?" Could you elaborate?

> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.

> Package A + B ancestor score is 10 sat/vb.

> D has a higher feerate/absolute fee than B.

> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))

I am in agreement with your calculations but unsure if we disagree on the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's, it fails the proposed package RBF Rule #2, so this package would be rejected. Does this meet your expectations?
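For concreteness, the ancestor-score arithmetic from this exchange can be checked with a short script (a sketch; `ancestor_score` and the fee/vsize tuples are assumptions for illustration, not Core code):

```python
# Check the ancestor-score comparison from the example above.

def ancestor_score(txs):
    """Ancestor feerate = total fees of a tx plus its in-mempool ancestors,
    divided by their total virtual size (sat/vB)."""
    total_fee = sum(fee for fee, vsize in txs)
    total_vsize = sum(vsize for fee, vsize in txs)
    return total_fee / total_vsize

# (fee in sats, vsize in vB)
A = (1000, 100)   # 10 sat/vB * 100 vB
B = (1000, 100)   # 10 sat/vB * 100 vB, child of A
C = (1000, 1000)  # 1 sat/vB * 1000 vB
D = (1500, 100)   # 15 sat/vB * 100 vB, spends both A and C

b_score = ancestor_score([A, B])     # 2000 / 200 = 10.0 sat/vB
d_score = ancestor_score([A, C, D])  # 3500 / 1200 ~= 2.92 sat/vB

# Proposed package RBF Rule #2: the replacement's ancestor score must be at
# least as high as that of every transaction it replaces, so D is rejected.
assert d_score < b_score
```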

Thank you for linking to projects that might be interested in package relay :)

Thanks,
Gloria

On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail.com> wrote:
Hi Gloria,

> A package may contain transactions that are already in the mempool. We
> remove
> ("deduplicate") those transactions from the package for the purposes of
> package
> mempool acceptance. If a package is empty after deduplication, we do
> nothing.

IIUC, you have a package A+B+C submitted for acceptance and A is already in your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the higher-fee element of the ABC package. B+C package fees might be under the mempool min fee and will be rejected, potentially breaking the acceptance expectations of the package issuer ?

Further, I think the dedup should be done on wtxid, as you might have multiple valid witnesses. Though with varying vsizes and as such offering different feerates.

E.g you're going to evaluate the package A+B and A' is already in your mempool with a bigger valid witness. You trim A based on txid, then you evaluate A'+B, which fails the fee checks. However, evaluating A+B would have been a success.

AFAICT, the dedup rationale would be to save on CPU time/disk IO, to avoid repeated signature verification and parent UTXO fetches? Can we achieve the same goal by bypassing tx-level checks for already-in-mempool transactions while conserving the package integrity for package-level checks?

> Note that it's possible for the parents to be
> indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent,
> so we cannot make any other topology assumptions.

I'm not clearly understanding the accepted topologies. By "parent and child to share a parent", do you mean the set of transactions A, B, C, where B is spending A and C is spending A and B would be correct ?

If yes, is there a width-limit introduced or we fallback on MAX_PACKAGE_COUNT=25 ?

IIRC, one rationale to come up with this topology limitation was to lower the DoS risks when potentially deploying p2p packages.

Considering the current Core's mempool acceptance rules, I think CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay jamming successful on one channel commitment transaction would contaminate the remaining commitments sharing the same package.

E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment transactions and E a shared CPFP. If a malicious A' transaction has a better feerate than A, the whole package acceptance will fail. Even if A' confirms in the following block, the propagation and confirmation of B+C+D have been delayed. This could result in a loss of funds.

That said, if you're broadcasting commitment transactions without time-sensitive HTLC outputs, I think the batching is effectively a fee saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child. I think it's the most conservative step still improving second-layer safety.

> *Rationale*: It would be incorrect to use the fees of transactions that are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.

I'm unsure about the logical order of the checks proposed.

If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance fails. For this reason I think the individual RBF should be bypassed and only the package RBF apply ?

Note this situation is plausible: with the current LN design, your counterparty can have a commitment transaction with a better fee just by selecting a higher `dust_limit_satoshis` than yours.

> Examples F and G [14] show the same package, but P1 is submitted
> individually before
> the package in example G. In example F, we can see that the 300vB package
> pays
> an additional 200sat in fees, which is not enough to pay for its own
> bandwidth
> (BIP125#4). In example G, we can see that P1 pays enough to replace M1, but
> using P1's fees again during package submission would make it look like a
> 300sat
> increase for a 200vB package. Even including its fees and size would not be
> sufficient in this example, since the 300sat looks like enough for the 300vB
> package. The calculation after deduplication is 100sat increase for a
> package
> of size 200vB, which correctly fails BIP125#4. Assume all transactions have
> a
> size of 100vB.

What problem are you trying to solve by the package feerate *after* dedup rule ?

My understanding is that an in-package transaction might be already in the mempool. Therefore, to compute a correct RBF penalty replacement, the vsize of this transaction could be discarded, lowering the cost of package RBF.

If we keep a "safe" dedup mechanism (see my point above), I think this discount is justified, as the validation cost of node operators is paid for ?

> The child cannot replace mempool transactions.

Let's say you issue package A+B, then package C+B', where B' is a child of both A and C. This rule fails the acceptance of C+B' ?

I think this is a footgunish API: if a package issuer sends the multiple-parent-one-child package A,B,C,D where D is the child of A,B,C, then tries to broadcast the higher-feerate C'+D' package, it should be rejected. So it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee package always replaces ? And it might be unsafe in protocols where states are symmetric. E.g a malicious counterparty broadcasts first S+A, then you honestly broadcast S+B, where B pays better fees.

> All mempool transactions to be replaced must signal replaceability.

I think this is unsafe for L2s if counterparties have malleability of the child transaction. They can block your package replacement by opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I think it's better to either fix inherited signaling or move towards full-rbf.

> if a package parent has already been submitted, it would
> look
> like the child is spending a "new" unconfirmed input.

I think this is an issue brought by the trimming during the dedup phase. If we preserve the package integrity, only re-using the tx-level check results of already in-mempool transactions to gain in CPU time, we won't have this issue. Package children can add unconfirmed inputs as long as they're in-package; the bip125 rule2 is only evaluated against parents ?

> However, we still achieve the same goal of requiring the
> replacement
> transactions to have an ancestor score at least as high as the original
> ones.

I'm not sure if this holds...

Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1 sat/vb for 1000 vbytes.

Package A + B ancestor score is 10 sat/vb.

D has a higher feerate/absolute fee than B.

Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))

Overall, this is a review through the lens of LN requirements. I think other L2 protocols/applications could be candidates for using package accept/relay, such as:
* https://github.com/lightninglabs/pool
* https://github.com/discreetlogcontracts/dlcspecs
* https://github.com/bitcoin-teleport/teleport-transactions/
* https://github.com/sapio-lang/sapio
* https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
* https://github.com/revault/practical-revault

Thanks for rolling the ball forward on this subject.

Antoine

On Thu, Sep 16, 2021 at 03:55, Gloria Zhao via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Hi there,

I'm writing to propose a set of mempool policy changes to enable package
validation (in preparation for package relay) in Bitcoin Core. These would not
be consensus or P2P protocol changes. However, since mempool policy
significantly affects transaction propagation, I believe this is relevant for
the mailing list.

My proposal enables packages consisting of multiple parents and 1 child. If you
develop software that relies on specific transaction relay assumptions and/or
are interested in using package relay in the future, I'm very interested to hear
your feedback on the utility or restrictiveness of these package policies for
your use cases.

A draft implementation of this proposal can be found in [Bitcoin Core
PR#22290][1].

An illustrated version of this post can be found at
I have also linked the images below.

## Background

Feel free to skip this section if you are already familiar with mempool policy
and package relay terminology.
=
### Terminology Clarifications

* Package = an ordered list of related transactions, representable by a Directed
  Acyclic Graph.
* Package Feerate = the total modified fees divided by the total virtual size of
  all transactions in the package.
    - Modified fees = a transaction's base fees + fee delta applied by the user
      with `prioritisetransaction`. As such, we expect this to vary across
      mempools.
    - Virtual Size = the maximum of virtual sizes calculated using [BIP141
      virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
    - Note that feerate is not necessarily based on the base fees and serialized
      size.

* Fee-Bumping = user/wallet actions that take advantage of miner incentives to
  boost a transaction's candidacy for inclusion in a block, including Child Pays
  for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
  mempool policy is to recognize when the new transaction is more economical to
  mine than the original one(s) but not open DoS vectors, so there are some
  limitations.

### Policy

The purpose of the mempool is to store the best (to be most incentive-compatible
with miners, highest feerate) candidates for inclusion in a block. Miners use
the mempool to build block templates. The mempool is also useful as a cache for
boosting block relay and validation performance, aiding transaction relay, and
generating feerate estimations.

Ideally, all consensus-valid transactions paying reasonable fees should make it
to miners through normal transaction relay, without any special connectivity or
relationships with miners. On the other hand, nodes do not have unlimited
resources, and a P2P network designed to let any honest node broadcast their
transactions also exposes the transaction validation engine to DoS attacks from
malicious peers.

As such, for unconfirmed transactions we are considering for our mempool, we
apply a set of validation rules in addition to consensus, primarily to protect
us from resource exhaustion and aid our efforts to keep the highest fee
transactions. We call this mempool _policy_: a set of (configurable,
node-specific) rules that transactions must abide by in order to be accepted
into our mempool. Transaction "standardness" rules and mempool restrictions such
as "too-long-mempool-chain" are both examples of policy.

### Package Rela= y and Package Mempool Accept

In transaction relay, we currently consider transactions one at a time for
submission to the mempool. This creates a limitation in the node's ability to
determine which transactions have the highest feerates, since we cannot take
into account descendants (i.e. cannot use CPFP) until all the transactions are
in the mempool. Similarly, we cannot use a transaction's descendants when
considering it for RBF. When an individual transaction does not meet the mempool
minimum feerate and the user isn't able to create a replacement transaction
directly, it will not be accepted by mempools.

This limitation presents a security issue for applications and users relying on
time-sensitive transactions. For example, Lightning and other protocols create
UTXOs with multiple spending paths, where one counterparty's spending path opens
up after a timelock, and users are protected from cheating scenarios as long as
they redeem on-chain in time. A key security assumption is that all parties'
transactions will propagate and confirm in a timely manner. This assumption can
be broken if fee-bumping does not work as intended.

The end goal for Package Relay is to consider multiple transactions at the same
time, e.g. a transaction with its high-fee child. This may help us better
determine whether transactions should be accepted to our mempool, especially if
they don't meet fee requirements individually or are better RBF candidates as a
package. A combination of changes to mempool validation logic, policy, and
transaction relay allows us to better propagate the transactions with the
highest package feerates to miners, and makes fee-bumping tools more powerful
for users.

The "relay" part of Package Relay suggests P2P messaging changes, but a large
part of the changes are in the mempool's package validation logic. We call this
*Package Mempool Accept*.

### Previous Work

* Given that mempool validation is DoS-sensitive and complex, it would be
  dangerous to haphazardly tack on package validation logic. Many efforts have
  been made to make mempool validation less opaque (see [#16400][4], [#21062][5],
  [#22675][6], [#22796][7]).
* [#20833][8] Added basic capabilities for package validation, test accepts only
  (no submission to mempool).
* [#21800][9] Implemented package ancestor/descendant limit checks for arbitrary
  packages. Still test accepts only.
* Previous package relay proposals (see [#16401][10], [#19621][11]).

### Existing Package Rules

These are in master as introduced in [#20833][8] and [#21800][9]. I'll consider
them as "given" in the rest of this document, though they can be changed, since
package validation is test-accept only right now.

1. A = package cannot exceed `MAX_PACKAGE_COUNT=3D25` count and
`MAX_PACKAGE_SI= ZE=3D101KvB` total size [8]

=C2=A0 =C2=A0*Rationale*: This is alread= y enforced as mempool ancestor/descendant limits.
Presumably, transactio= ns in a package are all related, so exceeding this limit
would mean that= the package can either be split up or it wouldn't pass this
mempool= policy.

2. Packages must be topologically sorted: if any dependencies exist between
   transactions, parents must appear somewhere before children. [8]

3. A package cannot have conflicting transactions, i.e. none of them can spend
   the same inputs. This also means there cannot be duplicate transactions. [8]

4. When packages are evaluated against ancestor/descendant limits in a test
   accept, the union of all of their descendants and ancestors is considered. This
   is essentially a "worst case" heuristic where every transaction in the package
   is treated as each other's ancestor and descendant. [8]
   Packages for which ancestor/descendant limits are accurately captured by this
   heuristic: [19]

There are also limitations such as the fact that CPFP carve out is not applied
to package transactions. #20833 also disables RBF in package validation; this
proposal overrides that to allow packages to use RBF.

## Proposed Changes

The next step in the Package Mempool Accept project is to implement submission
to mempool, initially through RPC only. This allows us to test the submission
logic before exposing it on P2P.

### Summary

- Packages may contain already-in-mempool transactions.
- Packages are 2 generations, Multi-Parent-1-Child.
- Fee-related checks use the package feerate. This means that wallets can
  create a package that utilizes CPFP.
- Parents are allowed to RBF mempool transactions with a set of rules similar
  to BIP125. This enables a combination of CPFP and RBF, where a
  transaction's descendant fees pay for replacing mempool conflicts.

There is a draft implementation in [#22290][1]. It is WIP, but feedback is
always welcome.

### Details

#### Packages May Contain Already-in-Mempool Transactions

A package may contain transactions that are already in the mempool. We remove
("deduplicate") those transactions from the package for the purposes of package
mempool acceptance. If a package is empty after deduplication, we do nothing.

*Rationale*: Mempools vary across the network. It's possible for a parent to be
accepted to the mempool of a peer on its own due to differences in policy and
fee market fluctuations. We should not reject or penalize the entire package for
an individual transaction as that could be a censorship vector.

#### Packages Are Multi-Parent-1-Child

Only packages of a specific topology are permitted. Namely, a package is exactly
1 child with all of its unconfirmed parents. After deduplication, the package
may be exactly the same, empty, 1 child, 1 child with just some of its
unconfirmed parents, etc. Note that it's possible for the parents to be indirect
descendants/ancestors of one another, or for parent and child to share a parent,
so we cannot make any other topology assumptions.

*Ra= tionale*: This allows for fee-bumping by CPFP. Allowing multiple parentsmakes it possible to fee-bump a batch of transactions. Restricting package= s to a
defined topology is also easier to reason about and simplifies th= e validation
logic greatly. Multi-parent-1-child allows us to think of t= he package as one big
transaction, where:

- Inputs =3D all the in= puts of parents + inputs of the child that come from
=C2=A0 confirmed UT= XOs
- Outputs =3D all the outputs of the child + all outputs of the pare= nts that
=C2=A0 aren't spent by other transactions in the package
Examples of packages that follow this rule (variations of example A sh= ow some
possibilities after deduplication): ![image][15]

#### Fee-Related Checks Use Package Feerate

Package Feerate = the total modified fees divided by the total virtual size of
all transactions in the package.

To meet the two feerate requirements of a mempool, i.e., the pre-configured
minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum feerate, the
total package feerate is used instead of the individual feerate. The individual
transactions are allowed to be below feerate requirements if the package meets
the feerate requirements. For example, the parent(s) in the package can have 0
fees but be paid for by the child.

*Rationale*: This can be thought of as "CPFP within a package," solving the
issue of a parent not meeting minimum fees on its own. This allows L2
applications to adjust their fees at broadcast time instead of overshooting or
risking getting stuck/pinned.

We use the package feerate of the package *after deduplication*.

*Rationale*: It would be incorrect to use the fees of transactions that are
already in the mempool, as we do not want a transaction's fees to be
double-counted for both its individual RBF and package RBF.

Examples F and G [14] show the same package, but P1 is submitted individually before
the package in example G. In example F, we can see that the 300vB package pays
an additional 200sat in fees, which is not enough to pay for its own bandwidth
(BIP125#4). In example G, we can see that P1 pays enough to replace M1, but
using P1's fees again during package submission would make it look like a 300sat
increase for a 200vB package. Even including its fees and size would not be
sufficient in this example, since the 300sat looks like enough for the 300vB
package. The calculation after deduplication is 100sat increase for a package
of size 200vB, which correctly fails BIP125#4. Assume all transactions have a
size of 100vB.

#### Package RBF

If a package meets feerate requirements as a package, the parents in the
transaction are allowed to replace-by-fee mempool transactions. The child cannot
replace mempool transactions. Multiple transactions can replace the same
transaction, but in order to be valid, none of the transactions can try to
replace an ancestor of another transaction in the same package (which would thus
make its inputs unavailable).

*Rationale*: Even if we are using package feerate, a package will not propagate
as intended if RBF still requires each individual transaction to meet the
feerate requirements.

We use a set of rules slightly modified from BIP125 as follows:

##### Signaling (Rule #1)

All mempool transactions to be replaced must signal replaceability.

*Rationale*: Package RBF signaling logic should be the same for package RBF and
single transaction acceptance. This would be updated if single transaction
validation moves to full RBF.

##### New Unconfirmed Inputs (Rule #2)

A package may include new unconfirmed inputs, but the ancestor feerate of the
child must be at least as high as the ancestor feerates of every transaction
being replaced. This is contrary to BIP125#2, which states "The replacement
transaction may only include an unconfirmed input if that input was included in
one of the original transactions. (An unconfirmed input spends an output from a
currently-unconfirmed transaction.)"

*Rationale*: The purpose of BIP125#2 is to ensure that the replacement
transaction has a higher ancestor score than the original transaction(s) (see
[comment][13]). Example H [16] shows how adding a new unconfirmed input can lower the
ancestor score of the replacement transaction. P1 is trying to replace M1, and
spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and M2 pays
100sat. Assume all transactions have a size of 100vB. While, in isolation, P1
looks like a better mining candidate than M1, it must be mined with M2, so its
ancestor feerate is actually 4.5sat/vB. This is lower than M1's ancestor
feerate, which is 6sat/vB.

In package RBF, the rule analogous to BIP125#2 would be "none of the
transactions in the package can spend new unconfirmed inputs." Example J [17] shows
why, if any of the package transactions have ancestors, package feerate is no
longer accurate. Even though M2 and M3 are not ancestors of P1 (which is the
replacement transaction in an RBF), we're actually interested in the entire
package. A miner should mine M1, which is 5sat/vB, instead of M2, M3, P1, P2, and
P3, which are only 4sat/vB. The Package RBF rule cannot be loosened to only allow
the child to have new unconfirmed inputs, either, because it can still cause us
to overestimate the package's ancestor score.

However, enforcing a rule analogous to BIP125#2 would not only make Package RBF
less useful, but would also break Package RBF for packages with parents already
in the mempool: if a package parent has already been submitted, it would look
like the child is spending a "new" unconfirmed input. In example K [18], we're
looking to replace M1 with the entire package including P1, P2, and P3. We must
consider the case where one of the parents is already in the mempool (in this
case, P2), which means we must allow P3 to have new unconfirmed inputs. However,
M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace M1
with this package.

Thus, the package RBF rule regarding new unconfirmed inputs is less strict than
BIP125#2. However, we still achieve the same goal of requiring the replacement
transactions to have an ancestor score at least as high as the original ones.
As a result, the entire package is required to be a higher feerate mining
candidate than each of the replaced transactions.

Another note: the = [comment][13] above the BIP125#2 code in the original RBF
implementation= suggests that the rule was intended to be temporary.

##### Absolute Fee (Rule #3)

The package must increase the absolute fee of the mempool, i.e. the total fees
of the package must be higher than the absolute fees of the mempool
transactions it replaces. Combined with the CPFP rule above, this differs from
BIP125 Rule #3 - an individual transaction in the package may have lower fees
than the transaction(s) it is replacing. In fact, it may have 0 fees, and the
child pays for RBF.
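A minimal sketch of this check (hypothetical names, for illustration only): the
package's combined fees must strictly exceed the combined fees of everything it
evicts, even though any single member may pay less than what it replaces.

```python
def passes_absolute_fee_rule(package_fees_sat, replaced_fees_sat):
    """Rule #3 sketch: total package fees must exceed total replaced fees."""
    return sum(package_fees_sat) > sum(replaced_fees_sat)

# A 0-fee parent is acceptable as long as the child's fee covers the shortfall.
print(passes_absolute_fee_rule([0, 5000], [2000, 1500]))  # True
print(passes_absolute_fee_rule([0, 3000], [2000, 1500]))  # False
```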

##### Feerate (Rule #4)

The package must pay for its own bandwidth; the package feerate must be higher
than the replaced transactions by at least the minimum relay feerate
(`incrementalRelayFee`). Combined with the CPFP rule above, this differs from
BIP125 Rule #4 - an individual transaction in the package can have a lower
feerate than the transaction(s) it is replacing. In fact, it may have 0 fees,
and the child pays for RBF.
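One way to sketch this rule (hypothetical names; Bitcoin Core's default
`-incrementalrelayfee` is 1000sat/kvB, i.e. 1sat/vB): the package's total fee
must exceed the replaced fees by at least the incremental relay feerate times
the package's virtual size, mirroring BIP125 Rule #4 for single transactions.

```python
INCREMENTAL_RELAY_FEERATE = 1.0  # sat/vB, Bitcoin Core's default

def passes_feerate_rule(package_fee_sat, package_vsize_vb, replaced_fee_sat):
    """Rule #4 sketch: the fee bump must pay for the package's own bandwidth."""
    required = replaced_fee_sat + INCREMENTAL_RELAY_FEERATE * package_vsize_vb
    return package_fee_sat >= required

print(passes_feerate_rule(5000, 300, 4000))  # True: +1000sat covers 300vB
print(passes_feerate_rule(4100, 300, 4000))  # False: +100sat does not
```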

##### Total Number of Replaced Transactions (Rule #5)

The package cannot replace more than 100 mempool transactions. This is
identical to BIP125 Rule #5.

### Expected FAQs

1. Is it possible for only some of the package to make it into the mempool?

   Yes, it is. However, since we evict transactions from the mempool by
descendant score and the package child is supposed to be sponsoring the fees of
its parents, the most common scenario would be all-or-nothing. This is
incentive-compatible. In fact, to be conservative, package validation should
begin by trying to submit all of the transactions individually, and only use
the package mempool acceptance logic if the parents fail due to low feerate.
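That validation order can be sketched as follows (entirely hypothetical
names and a feerate-only stand-in for acceptance, not Bitcoin Core's API):

```python
MEMPOOL_MIN_FEERATE = 3.0  # sat/vB, assumed for illustration

def accept_single(tx):
    """Stand-in for individual acceptance: a bare minimum-feerate check."""
    return tx["fee"] / tx["vsize"] >= MEMPOOL_MIN_FEERATE

def accept_package(txs):
    """Stand-in for package acceptance: judge parents and child together."""
    total_fee = sum(tx["fee"] for tx in txs)
    total_vsize = sum(tx["vsize"] for tx in txs)
    return total_fee / total_vsize >= MEMPOOL_MIN_FEERATE

def submit(txs):
    # Conservative order: individual submission first, package logic only
    # as a fallback when low-feerate parents fail on their own.
    if all(accept_single(tx) for tx in txs):
        return "accepted individually"
    return "package accepted" if accept_package(txs) else "rejected"

parent = {"fee": 0, "vsize": 100}   # 0-fee parent fails alone
child = {"fee": 900, "vsize": 100}  # high-fee child sponsors it
print(submit([parent, child]))      # package accepted (4.5 sat/vB overall)
```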

2. Should we allow packages to contain already-confirmed transactions?

    No, for practical reasons. In mempool validation, we actually aren't able
to tell with 100% confidence if we are looking at a transaction that has
already confirmed, because we look up inputs using a UTXO set. If we have
historical block data, it's possible to look for it, but this is inefficient,
not always possible for pruning nodes, and unnecessary because we're not going
to do anything with the transaction anyway. As such, we already have the
expectation that transaction relay is somewhat "stateful," i.e. nobody should
be relaying transactions that have already been confirmed. Similarly, we
shouldn't be relaying packages that contain already-confirmed transactions.

[1]: https://github.com/bitcoin/bitcoin/pull/22290
[2]: https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
[3]: https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
[4]: https://github.com/bitcoin/bitcoin/pull/16400
[5]: https://github.com/bitcoin/bitcoin/pull/21062
[6]: https://github.com/bitcoin/bitcoin/pull/22675
[7]: https://github.com/bitcoin/bitcoin/pull/22796
[8]: https://github.com/bitcoin/bitcoin/pull/20833
[9]: https://github.com/bitcoin/bitcoin/pull/21800
[10]: https://github.com/bitcoin/bitcoin/pull/16401
[11]: https://github.com/bitcoin/bitcoin/pull/19621
[12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
[13]: https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
[14]: https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
[15]: https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
[16]: https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
[17]: https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
[18]: https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
[19]: https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
[20]: https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev