From: Jeremy
Date: Sun, 27 Oct 2019 12:13:09 -0700
To: Matt Corallo
Cc: Bitcoin Protocol Discussion, lightning-dev
Subject: Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

Johan,

The issues with mempool limits for OP_SECURETHEBAG are related, but have distinct solutions.

There are two main categories of mempool issues at stake. One is relay cost, the other is mempool walking.

In terms of relay cost, if an ancestor can be replaced, it will invalidate all its children, meaning that no one paid for broadcasting them. This can be fixed by appropriately assessing Replace By Fee update fees to encapsulate all descendants, but there are some tricky edge cases that make this non-obvious to do.
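
To make the accounting concrete, here is a rough sketch in C++ (hypothetical types and names, not Bitcoin Core's actual RBF code) of the kind of rule I mean: a replacement should pay for everything it evicts -- the conflicting transaction plus all of its descendants -- and additionally pay for relaying its own bytes:

    #include <cstdint>
    #include <vector>

    struct TxInfo {
        int64_t fee;    // satoshis
        int64_t vsize;  // virtual bytes
    };

    // "evicted" is the replaced transaction plus every descendant that would be
    // dropped with it; the replacement must cover all of their fees and also pay
    // for its own relay at the incremental feerate, so no relayed bytes go unpaid.
    int64_t MinReplacementFee(const std::vector<TxInfo>& evicted,
                              int64_t replacement_vsize,
                              int64_t incremental_feerate_sat_per_vb) {
        int64_t evicted_fees = 0;
        for (const auto& tx : evicted) evicted_fees += tx.fee;
        return evicted_fees + replacement_vsize * incremental_feerate_sat_per_vb;
    }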

The other issue is walking the mempool -- many of the algorithms we use in the mempool can be O(N log N) or O(N^2) in the number of descendants. (Simple example: an input chain of length N feeding a fan-out of N outputs that are all spent is O(N^2) to look up ancestors per child, unless we're caching.)

The other sort of walking issue is where the indegree or outdegree for a transaction is high. Then when we are computing descendants or ancestors we will need to visit it multiple times. To avoid re-expanding a node, we currently cache it with a set. This uses O(N) extra memory and makes O(N log N) comparisons (we use std::set, not unordered_set).
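
For illustration, here is a minimal sketch of that set-based de-duplication and where its costs come from (toy structures, not CTxMemPool itself):

    #include <set>
    #include <vector>

    struct Entry {
        int id;
        std::vector<Entry*> children;  // txs spending this entry's outputs
    };

    // Collect all descendants of `root`, de-duplicating with a std::set so a
    // node reachable through many paths is only expanded once.  The set costs
    // O(N) extra memory and each membership check is an O(log N) tree lookup.
    void CollectDescendants(Entry* root, std::set<int>& visited,
                            std::vector<Entry*>& out) {
        for (Entry* child : root->children) {
            if (visited.insert(child->id).second) {  // false if already seen
                out.push_back(child);
                CollectDescendants(child, visited, out);
            }
        }
    }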

I just opened a PR which should help with some of the walking issues by allowing us to cheaply cache which nodes we've visited on a run. It makes a lot of previously O(N log N) work O(N) and doesn't allocate as much new memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
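
Roughly, the idea is epoch-based marking. An illustrative-only sketch (not the PR's actual code): bump a global epoch before each traversal, stamp entries as they are visited, and the "have we seen this yet?" check becomes a single integer compare with no per-walk allocation.

    #include <cstdint>
    #include <vector>

    struct Entry {
        std::vector<Entry*> children;
        uint64_t epoch = 0;  // last traversal that touched this entry
    };

    struct Pool {
        uint64_t epoch = 0;

        void CollectDescendants(Entry* root, std::vector<Entry*>& out) {
            ++epoch;  // invalidates every previous visit mark in O(1)
            Walk(root, out);
        }

        void Walk(Entry* e, std::vector<Entry*>& out) {
            for (Entry* child : e->children) {
                if (child->epoch == epoch) continue;  // already expanded this walk
                child->epoch = epoch;
                out.push_back(child);
                Walk(child, out);
            }
        }
    };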


Now, for OP_SECURETHEBAG we want a particular property that is very different from Lightning HTLCs (as is). We want an unlimited number of child OP_SECURETHEBAG txns to be able to extend from a confirmed OP_SECURETHEBAG, and then at the leaf nodes we want the same rule as Lightning (one dangling unconfirmed to permit channels).

OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a tree where they are individualized leaf nodes with a preceding CSV. Then, the above fix would ensure each HTLC always has time to close properly, as they would have individualized lockpoints. This is desirable for some additional reasons and not for others, but it should "work".
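
As a toy sketch of the shape only (hypothetical helpers; real OP_SECURETHEBAG outputs would commit to concrete transaction templates): the N HTLCs sit at the leaves of a binary tree, each behind its own CSV, so any single HTLC is reachable by publishing at most ~log2(N) expansion transactions.

    #include <cstddef>
    #include <memory>

    struct Node {
        bool is_htlc_leaf = false;           // leaf: "1 CSV <htlc script>"
        std::size_t htlc_index = 0;
        std::unique_ptr<Node> left, right;   // internal node: commits to its two children
    };

    // Build a balanced tree over HTLCs [lo, hi); each leaf gets its own output
    // (and therefore its own lockpoint), each internal node is one expansion tx.
    std::unique_ptr<Node> BuildTree(std::size_t lo, std::size_t hi) {
        auto node = std::make_unique<Node>();
        if (hi - lo == 1) {
            node->is_htlc_leaf = true;
            node->htlc_index = lo;
            return node;
        }
        std::size_t mid = lo + (hi - lo) / 2;
        node->left = BuildTree(lo, mid);
        node->right = BuildTree(mid, hi);
        return node;
    }
    // e.g. BuildTree(0, num_htlcs) gives each HTLC an individualized leaf.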

--
@JeremyRubin

On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo <lf-lists@mattcorallo.com> wrote:
I don't see how? Let's imagine Party A has two spendable outputs, now they stuff the package size on one of their spendable outputs until it is right at the limit, add one more on their other output (to meet the Carve-Out), and now Party B can't do anything.

On Oct 24, 2019, at 21:05, Johan Torås Halseth <johanth@gmail.com> wrote:

It essentially changes the rule to always allow CPFP-ing the commitment as long as there is an output available without any descendants. It changes the commitment from "you always need at least, and exactly, one non-CSV output per party" to "you always need at least one non-CSV output per party".

I realize these limits are there for a reason though, but I'm wondering if we could relax them. Also now that jeremyrubin has expressed problems with the current mempool limits.

On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo <lf-lists@mattcorallo.com> wrote:
I may be missing something, but I'm not sure how this changes anything?

If you have a commitment transaction, you always need at least, and
exactly, one non-CSV output per party. The fact that there is a size
limitation on the transaction that spends for carve-out purposes only
affects how many other inputs/outputs you can add, but somehow I doubt
it's ever going to be a large enough number to matter.

Matt

On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> Reviving this old thread now that the recently released RC for bitcoind
> 0.19 includes the above mentioned carve-out rule.
>
> In an attempt to pave the way for more robust CPFP of on-chain contracts
> (Lightning commitment transactions), the carve-out rule was added in
> https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> an implementation of a new commitment format for utilizing the Bring
> Your Own Fees strategy using CPFP, I'm wondering if the special case
> rule should have been relaxed a bit, to avoid the need for adding a 1
> CSV to all outputs (in case of Lightning this means HTLC scripts would
> need to be changed to add the CSV delay).
>
> Instead, what about letting the rule be
>
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
>
> This would of course allow adding a large transaction to each output of
> the unconfirmed parent, which in effect would allow an attacker to
> exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> this a problem with the current mempool acceptance code in bitcoind? I
> would imagine evicting transactions based on feerate when the max
> mempool size is met handles this, but I'm asking since it seems like
> there have been several changes to the acceptance code and eviction
> policy since the limit was first introduced.
>
> - Johan
>
>
> On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell <rusty@rustcorp.com.au> wrote:
>
>     Matt Corallo <lf-lists@mattcorallo.com> writes:
>     >>> Thus, even if you imagine a steady-state mempool growth, unless the
>     >>> "near the top of the mempool" criteria is "near the top of the next
>     >>> block" (which is obviously *not* incentive-compatible)
>     >>
>     >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>     >> block, and assumed you'd only allow RBF if the old package wasn't
>     in the
>     >> top and the replacement would be.  That seems incentive
>     compatible; more
>     >> than the current scheme?
>     >
>     > My point was, because of block time variance, even that criteria
>     doesn't hold up. If you assume a steady flow of new transactions and
>     one or two blocks come in "late", suddenly "top 4MWeight" isn't
>     likely to get confirmed until a few blocks come in "early". Given
>     block variance within a 12 block window, this is a relatively likely
>     scenario.
>
>     [ Digging through old mail. ]
>
>     Doesn't really matter.  Lightning close algorithm would be:
>
>     1.  Give bitcoind unilateral close.
>     2.  Ask bitcoind what current expedited fee is (or survey your mempool).
>     3.  Give bitcoind child "push" tx at that total feerate.
>     4.  If next block doesn't contain unilateral close tx, goto 2.
>
>     In this case, if you allow a simplified RBF where 'you can replace if
>     1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
>     old tx isn't',
>     it works.
>
>     It allows someone 100k of free tx spam, sure.  But it's simple.
>
>     We could further restrict it by marking the unilateral close somehow to
>     say "gonna be pushed" and further limiting the child tx weight (say,
>     5kSipa?) in that case.
>
>     Cheers,
>     Rusty.