From: Johan Torås Halseth
Date: Mon, 28 Oct 2019 10:45:39 +0100
To: Jeremy
Cc: Bitcoin Protocol Discussion, lightning-dev
Subject: Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

> I don't see how? Let's imagine Party A has two spendable outputs; now
> they stuff the package size on one of their spendable outputs until it
> is right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can't do anything.
Matt: With the proposed change, party B would always be able to add a
child to its output, regardless of what games party A is playing.

Thanks for the explanation, Jeremy!

> In terms of relay cost, if an ancestor can be replaced, it will
> invalidate all its children, meaning that no one paid for that
> broadcasting. This can be fixed by appropriately assessing Replace By
> Fee update fees to encapsulate all descendants, but there are some
> tricky edge cases that make this non-obvious to do.

Relay cost is the obvious problem with just naively removing all limits.
Relaxing the current rules by allowing a child to be added to each output,
as long as it has a single unconfirmed parent, would still only allow free
relay of O(size of parent) extra data (which might not be that bad?
Similar to the carve-out rule we could put limits on the child size). This
would be enough for the current LN use case (increasing the fee of the
commitment tx), but not for OP_SECURETHEBAG I guess, as you need the tree
of children, as you mention.

I imagine walking the mempool wouldn't change much, as you would only have
one extra child per output. But here I'm just speculating, as I don't know
the code well enough to know what the diff would look like.

> OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a
> tree where they are individualized leaf nodes with a preceding CSV.
> Then, the above fix would ensure each HTLC always has time to close
> properly as they would have individualized lockpoints. This is
> desirable for some additional reasons and not for others, but it
> should "work".

This is interesting for an LN commitment! You could really hide every
output of the commitment within OP_STB, which could either allow bypassing
the fee-pinning attack entirely (if the output cannot be spent unconfirmed)
or adding fees to the commitment using SIGHASH_SINGLE|ANYONECANPAY.

- Johan

On Sun, Oct 27, 2019 at 8:13 PM Jeremy wrote:

> Johan,
>
> The issues with mempool limits for OP_SECURETHEBAG are related, but
> have distinct solutions.
>
> There are two main categories of mempool issues at stake. One is relay
> cost, the other is mempool walking.
>
> In terms of relay cost, if an ancestor can be replaced, it will
> invalidate all its children, meaning that no one paid for that
> broadcasting. This can be fixed by appropriately assessing Replace By
> Fee update fees to encapsulate all descendants, but there are some
> tricky edge cases that make this non-obvious to do.
>
> The other issue is walking the mempool -- many of the algorithms we
> use in the mempool can be N log N or N^2 in the number of descendants.
> (Simple example: an input chain of length N to a fan out of N outputs
> that are all spent is O(N^2) to look up ancestors per-child, unless
> we're caching.)
>
> The other sort of walking issue is where the indegree or outdegree for
> a transaction is high. Then when we are computing descendants or
> ancestors we will need to visit it multiple times. To avoid
> re-expanding a node, we currently cache it with a set. This uses O(N)
> extra memory and makes O(N log N) (we use std::set, not unordered_set)
> comparisons.
>
> I just opened a PR which should help with some of the walking issues
> by allowing us to cheaply cache which nodes we've visited on a run. It
> makes a lot of previously O(N log N) stuff O(N) and doesn't allocate
> as much new memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
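For concreteness, the difference between deduplicating such a walk with an
explicit "seen" set and cheaply marking entries with an epoch counter could
be sketched like this (a toy Python illustration with made-up names, not
the actual code in that PR):

    # Toy mempool graph walk; illustrative only, not Bitcoin Core code.
    class Entry:
        def __init__(self, txid, parents=()):
            self.txid = txid
            self.parents = list(parents)  # unconfirmed parents of this entry
            self.epoch = 0                # "visited" marker, bumped per walk

    def ancestors_with_set(entry):
        """Dedupe with a separate 'seen' container allocated for every walk."""
        seen, stack, out = set(), [entry], []
        while stack:
            for p in stack.pop().parents:
                if p.txid not in seen:
                    seen.add(p.txid)
                    out.append(p)
                    stack.append(p)
        return out

    class EpochWalker:
        """Mark entries in place; one counter bump invalidates all old marks."""
        def __init__(self):
            self.epoch = 0

        def ancestors(self, entry):
            self.epoch += 1            # start a fresh walk, no new allocation
            stack, out = [entry], []
            while stack:
                for p in stack.pop().parents:
                    if p.epoch != self.epoch:
                        p.epoch = self.epoch
                        out.append(p)
                        stack.append(p)
            return out

The epoch variant avoids allocating and populating a new set on every walk,
which is where the O(N log N) to O(N) improvement Jeremy mentions comes from.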
> Now, for OP_SECURETHEBAG we want a particular property that is very
> different from with lightning htlcs (as is). We want that an unlimited
> number of child OP_SECURETHEBAG txns may extend from a confirmed
> OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
> lightning (one dangling unconfirmed to permit channels).
>
> OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a
> tree where they are individualized leaf nodes with a preceding CSV.
> Then, the above fix would ensure each HTLC always has time to close
> properly as they would have individualized lockpoints. This is
> desirable for some additional reasons and not for others, but it
> should "work".
>
> --
> @JeremyRubin
>
> On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo wrote:
>
>> I don't see how? Let's imagine Party A has two spendable outputs; now
>> they stuff the package size on one of their spendable outputs until it
>> is right at the limit, add one more on their other output (to meet the
>> Carve-Out), and now Party B can't do anything.
>>
>> On Oct 24, 2019, at 21:05, Johan Torås Halseth wrote:
>>
>> It essentially changes the rule to always allow CPFP-ing the commitment
>> as long as there is an output available without any descendants. It
>> changes the commitment from "you always need at least, and exactly, one
>> non-CSV output per party." to "you always need at least one non-CSV
>> output per party."
>>
>> I realize these limits are there for a reason though, but I'm wondering
>> if we could relax them. Also now that jeremyrubin has expressed problems
>> with the current mempool limits.
>>
>> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo wrote:
>>
>>> I may be missing something, but I'm not sure how this changes anything?
>>>
>>> If you have a commitment transaction, you always need at least, and
>>> exactly, one non-CSV output per party. The fact that there is a size
>>> limitation on the transaction that spends for carve-out purposes only
>>> affects how many other inputs/outputs you can add, but somehow I doubt
>>> it's ever going to be a large enough number to matter.
>>>
>>> Matt
>>>
>>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>>> > Reviving this old thread now that the recently released RC for
>>> > bitcoind 0.19 includes the above mentioned carve-out rule.
>>> >
>>> > In an attempt to pave the way for more robust CPFP of on-chain
>>> > contracts (Lightning commitment transactions), the carve-out rule was
>>> > added in https://github.com/bitcoin/bitcoin/pull/15681. However,
>>> > having worked on an implementation of a new commitment format for
>>> > utilizing the Bring Your Own Fees strategy using CPFP, I'm wondering
>>> > if the special case rule should have been relaxed a bit, to avoid the
>>> > need for adding a 1 CSV to all outputs (in case of Lightning this
>>> > means HTLC scripts would need to be changed to add the CSV delay).
>>> >
>>> > Instead, what about letting the rule be
>>> >
>>> > The last transaction which is added to a package of dependent
>>> > transactions in the mempool must:
>>> >   * Have no more than one unconfirmed parent.
>>> >
>>> > This would of course allow adding a large transaction to each output
>>> > of the unconfirmed parent, which in effect would allow an attacker to
>>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>>> > this a problem with the current mempool acceptance code in bitcoind?
>>> > I would imagine evicting transactions based on feerate when the max
>>> > mempool size is met handles this, but I'm asking since it seems like
>>> > there have been several changes to the acceptance code and eviction
>>> > policy since the limit was first introduced.
>>> >
>>> > - Johan
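As an aside, the relaxed rule quoted above ("have no more than one
unconfirmed parent"), together with an optional carve-out-style cap on the
child's size, could be checked roughly like this (a hypothetical Python
sketch, not bitcoind's actual acceptance code; the size limit is made up):

    # Sketch of the proposed "at most one unconfirmed parent" policy check.
    def accept_under_relaxed_rule(spent_txids, tx_vsize, mempool_txids,
                                  max_child_vsize=10_000):
        """spent_txids: txids referenced by the candidate tx's inputs.
        mempool_txids: set of txids that are still unconfirmed (in the mempool)."""
        unconfirmed_parents = set(spent_txids) & set(mempool_txids)
        if len(unconfirmed_parents) > 1:
            return False   # more than one unconfirmed parent: reject
        if unconfirmed_parents and tx_vsize > max_child_vsize:
            return False   # optional cap on the child's size, as mentioned above
        return True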
>>> >
>>> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
>>> >
>>> >     Matt Corallo writes:
>>> >     >>> Thus, even if you imagine a steady-state mempool growth,
>>> >     >>> unless the "near the top of the mempool" criteria is "near
>>> >     >>> the top of the next block" (which is obviously *not*
>>> >     >>> incentive-compatible)
>>> >     >>
>>> >     >> I was defining "top of mempool" as "in the first 4 MSipa",
>>> >     >> ie. next block, and assumed you'd only allow RBF if the old
>>> >     >> package wasn't in the top and the replacement would be. That
>>> >     >> seems incentive compatible; more than the current scheme?
>>> >     >
>>> >     > My point was, because of block time variance, even that
>>> >     > criteria doesn't hold up. If you assume a steady flow of new
>>> >     > transactions and one or two blocks come in "late", suddenly
>>> >     > "top 4MWeight" isn't likely to get confirmed until a few
>>> >     > blocks come in "early". Given block variance within a 12 block
>>> >     > window, this is a relatively likely scenario.
>>> >
>>> >     [ Digging through old mail. ]
>>> >
>>> >     Doesn't really matter. Lightning close algorithm would be:
>>> >
>>> >     1. Give bitcoind unilateral close.
>>> >     2. Ask bitcoind what current expedited fee is (or survey your
>>> >        mempool).
>>> >     3. Give bitcoind child "push" tx at that total feerate.
>>> >     4. If next block doesn't contain unilateral close tx, goto 2.
>>> >
>>> >     In this case, if you allow a simplified RBF where 'you can
>>> >     replace if 1. feerate is higher, 2. new tx is in first 4 MSipa of
>>> >     mempool, 3. old tx isn't', it works.
>>> >
>>> >     It allows someone 100k of free tx spam, sure. But it's simple.
>>> >
>>> >     We could further restrict it by marking the unilateral close
>>> >     somehow to say "gonna be pushed" and further limiting the child
>>> >     tx weight (say, 5kSipa?) in that case.
>>> >
>>> >     Cheers,
>>> >     Rusty.
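For reference, the close loop Rusty outlines in steps 1-4 above could look
roughly like this (Python pseudocode with hypothetical bitcoind-facing
helpers, not real RPC calls):

    # Fee-bumping loop for a unilateral close; helper names are stand-ins.
    def push_unilateral_close(node, close_tx, build_child):
        node.broadcast(close_tx)                    # 1. hand bitcoind the unilateral close
        while not node.is_confirmed(close_tx):      # 4. loop until the close confirms
            feerate = node.next_block_feerate()     # 2. expedited feerate / mempool survey
            child = build_child(close_tx, feerate)  # 3. CPFP "push" child at that total feerate
            node.broadcast(child)                   #    (replaces any earlier child via RBF)
            node.wait_for_next_block()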