From: Johan Torås Halseth <johanth@gmail.com>
Date: Mon, 11 Dec 2023 10:17:23 +0100
Message-ID: <CAD3i26B0UAdAbPdNazrQ0RwtorhMM6NnXHkUXqDd3-+mBDLJEA@mail.gmail.com>
To: Antoine Riard <antoine.riard@gmail.com>
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>,
 lightning-dev <lightning-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] HTLC output aggregation as a mitigation for tx
 recycling, jamming, and on-chain efficiency (covenants)

Hi, Antoine.

> The attack works on legacy channels if the holder (or local) commitment transaction confirms first, the second-stage HTLC claim transaction is fully malleable by the counterparty.

Yes, correct. Thanks for pointing that out!

> I think one of the weaknesses of this approach is the level of malleability still left to the counterparty, where one might burn in miners fees all the HTLC accumulated value promised to the counterparty, and for which the preimages have been revealed off-chain.

Is this a concern though, if we assume there's no revoked state that
can be broadcast (Eltoo)? Could you share an example of how this would
be played out by an attacker?

> I wonder if a more safe approach, eliminating a lot of competing interests style of mempool games, wouldn't be to segregate HTLC claims in two separate outputs, with full replication of the HTLC lockscripts in both outputs, and let a covenant accepts or rejects aggregated claims with satisfying witness and chain state condition for time lock.

I'm not sure what you mean here, could you elaborate?

> I wonder if in a PTLC world, you can generate an aggregate curve point for all the sub combinations of scalar plausible. Unrevealed curve points in a taproot branch are cheap. It might claim an offered HTLC near-constant size too.

That sounds possible, but how would you deal with the exponential
blowup in the number of combinations?
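
(For scale, a minimal sketch of that blowup, assuming a hypothetical scheme that pre-generates one aggregate point, and thus one taproot leaf, per non-empty subset of n PTLCs; the function name is illustrative only:)

    # Hypothetical: one taproot leaf / aggregate point per claimable subset.
    def aggregate_points_needed(n: int) -> int:
        """Number of non-empty subsets of n PTLCs."""
        return 2**n - 1

    for n in (2, 10, 30, 483):
        print(n, aggregate_points_needed(n))
    # n=10 already needs 1023 leaves; n=483 would need ~2^483, which is
    # infeasible to enumerate up front.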

Cheers,
Johan


On Tue, Nov 21, 2023 at 3:39 AM Antoine Riard <antoine.riard@gmail.com> wrote:
>
> Hi Johan,
>
> Few comments.
>
> ## Transaction recycling
> The transaction recycling attack is made possible by the change made
> to HTLC second level transactions for the anchor channel type[8];
> making it possible to add fees to the transaction by adding inputs
> without violating the signature. For the legacy channel type this
> attack was not possible, as all fees were taken from the HTLC outputs
> themselves, and had to be agreed upon by channel counterparties during
> signing (of course this has its own problems, which is why we wanted
> to change it).
>
> The attack works on legacy channels if the holder (or local) commitment transaction confirms first, the second-stage HTLC claim transaction is fully malleable by the counterparty.
>
> See https://github.com/lightning/bolts/blob/master/03-transactions.md#offered-htlc-outputs (only remote_htlcpubkey required)
>
> Note a replacement cycling attack works in a future package-relay world too.
>
> See test: https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>
> > The idea of HTLC output aggregation is to collapse all HTLC outputs on
> > the commitment to a single one. This has many benefits (that I’ll get
> > to), one of them being the possibility to let the spender claim the
> > portion of the output that they’re right to, deciding how much should
> > go to fees. Note that this requires a covenant to be possible.
>
> Another advantage of HTLC output aggregation is the reduction of fee-bumping reserves requirements on channel counterparties, as second-stage HTLC transactions have common fields (nVersion, nLocktime, ...) that *could* be shared.
>
> > ## A single HTLC output
> > Today, every forwarded HTLC results in an output that needs to be
> > manifested on the commitment transaction in order to claw back money
> > in case of an uncooperative channel counterparty. This puts a limit on
> > the number of active HTLCs (in order for the commitment transaction to
> > not become too large) which makes it possible to jam the channel with
> > small amounts of capital [1]. It also turns out that having this limit
> > be large makes it expensive and complicated to sweep the outputs
> > efficiently [2].
>
> > Instead of having new HTLC outputs manifest for each active
> > forwarding, with covenants on the base layer one could create a single
> > aggregated output on the commitment. The output amount being the sum
> > of the active HTLCs (offered and received), alternatively one output
> > for received and one for offered. When spending this output, you would
> > only be entitled to the fraction of the amount corresponding to the
> > HTLCs you know the preimage for (received), or that has timed out
> > (offered).
>
> > ## Impacts to transaction recycling
> > Depending on the capabilities of the covenant available (e.g.
> > restricting the number of inputs to the transaction) the transaction
> > spending the aggregated HTLC output can be made self sustained: the
> > spender will be able to claim what is theirs (preimage or timeout) and
> > send it to whatever output they want, or to fees. The remainder will
> > go back into a covenant restricted output with the leftover HTLCs.
> > Note that this most likely requires Eltoo in order to not enable fee
> > siphoning[7].
>
> I think one of the weaknesses of this approach is the level of malleability still left to the counterparty, where one might burn in miners fees all the HTLC accumulated value promised to the counterparty, and for which the preimages have been revealed off-chain.
>
> I wonder if a more safe approach, eliminating a lot of competing interests style of mempool games, wouldn't be to segregate HTLC claims in two separate outputs, with full replication of the HTLC lockscripts in both outputs, and let a covenant accepts or rejects aggregated claims with satisfying witness and chain state condition for time lock.
>
> > ## Impacts to slot jamming
> > With the aggregated output being a reality, it changes the nature of
> > “slot jamming” [1] significantly. While channel capacity must still be
> > reserved for in-flight HTLCs, one no longer needs to allocate a
> > commitment output for each up to some hardcoded limit.
>
> > In today’s protocol this limit is 483, and I believe most
> > implementations default to an even lower limit. This leads to channel
> > jamming being quite inexpensive, as one can quickly fill a channel
> > with small HTLCs, without needing a significant amount of capital to
> > do so.
>
> > The origin of the 483 slot limit is the worst case commitment size
> > before getting into unstandard territory [3]. With an aggregated
> > output this would no longer be the case, as adding HTLCs would no
> > longer affect commitment size. Instead, the full on-chain footprint of
> > an HTLC would be deferred until claim time.
>
> > Does this mean one could lift, or even remove the limit for number of
> > active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
> > rid of the problem entirely, but mitigates it quite a bit.
>
> Yes, protocol limit of 483 is a long-term limit on the payment throughput of the LN, though as an upper bound we have the dust limits and mempool fluctuations rendering irrelevant the claim of such aggregated dust outputs. Aggregated claims might give a more dynamic margin of what is a tangible and trust-minimized HTLC payment.
>
> > ### Slot jamming attack scenario
> > Consider the scenario where an attacker sends a large number of
> > non-dust* HTLCs across a channel, and the channel parties enforce no
> > limit on the number of active HTLCs.
>
> > The number of payments would not affect the size of the commitment
> > transaction at all, only the size of the witness that must be
> > presented when claiming or timing out the HTLCs. This means that there
> > is still a point at which chain fees get high enough for the HTLC to
> > be uneconomical to claim. This is no different than in today’s spec,
> > and such HTLCs will just be stranded on-chain until chain fees
> > decrease, at which point there is a race between the success and
> > timeout spends.
>
> > There seems to be no way around this; if you want to claim an HTLC
> > on-chain, you need to put the preimage on-chain. And when the HTLC
> > first reaches you, you have no way of predicting the future chain fee.
> > With a large number of uneconomical HTLCs in play, the total BTC
> > exposure could still be very large, so you might want to limit this
> > somewhat.
>
> > * Note that as long as the sum of HTLCs exceeds the dust limit, one
> > could manifest the output on the transaction.
>
> Unless we introduce sliding windows during which the claim periods of an HTLC can be claimed and freeze accordingly the HTLC-timeout path.
>
> See: https://fc22.ifca.ai/preproceedings/119.pdf
>
> Bad news: you will need off-chain consensus on the feerate threshold at which the sliding windows kick-out among all the routing nodes participating in the HTLC payment path.
>
> > ## The good news
> > With an aggregated HTLC output, the number of HTLCs would no longer
> > impact the commitment transaction size while the channel is open and
> > operational.
>
> > The marginal cost of claiming an HTLC with a preimage on-chain would
> > be much lower; no new inputs or outputs, only a linear increase in the
> > witness size. With a covenant primitive available, the extra footprint
> > of the timeout and success transactions would no longer exist.
>
> > Claiming timed out HTLCs could still be made close to constant size
> > (no preimage to present), so no additional on-chain cost with more
> > HTLCs.
>
> I wonder if in a PTLC world, you can generate an aggregate curve point for all the sub combinations of scalar plausible. Unrevealed curve points in a taproot branch are cheap. It might claim an offered HTLC near-constant size too.
>
> > ## The bad news
> > The most obvious problem is that we would need a new covenant
> > primitive on L1 (see below). However, I think it could be beneficial
> > to start exploring these ideas now in order to guide the L1 effort
> > towards something we could utilize to its fullest on L2.
>
> > As mentioned, even with a functioning covenant, we don’t escape the
> > fact that a preimage needs to go on-chain, pricing out HTLCs at
> > certain fee rates. This is analogous to the dust exposure problem
> > discussed in [6], and makes some sort of limit still required.
>
> Ideally such covenant mechanisms would generalize to the withdrawal phase of payment pools, where dozens or hundreds of participants wish to confirm their non-competing withdrawal transactions concurrently. While unlocking preimage or scalar can be aggregated in a single witness, there will still be a need to verify that each withdrawal output associated with an unlocking secret is present in the transaction.
>
> Maybe few other L2s are answering this N-inputs-to-M-outputs pattern with advanced locking scripts conditions to satisfy.
>
> > ### Open question
> > With PTLCs, could one create a compact proof showing that you know the
> > preimage for m-of-n of the satoshis in the output? (some sort of
> > threshold signature).
>
> > If we could do this we would be able to remove the slot jamming issue
> > entirely; any number of active PTLCs would not change the on-chain
> > cost of claiming them.
>
> See comments above, I think there is a plausible scheme here you just generate all the point combinations possible, and only reveal the one you need at broadcast.
>
> > ## Covenant primitives
> > A recursive covenant is needed to achieve this. Something like OP_CTV
> > and OP_APO seems insufficient, since the number of ways the set of
> > HTLCs could be claimed would cause combinatorial blowup in the number
> > of possible spending transactions.
>
> > Personally, I’ve found the simple yet powerful properties of
> > OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> > particularly interesting for the use case, but I’m certain many of the
> > other proposals could achieve the same thing. More direct inspection
> > like you get from a proposal like OP_TX[9] would also most likely have
> > the building blocks needed.
>
> As pointed out during the CTV drama and payment pool public discussion years ago, what would be very useful to tie-break among all covenant constructions would be an efficiency simulation framework. Even if the same semantic can be achieved independently by multiple covenants, they certainly do not have the same performance trade-offs (e.g average and worst-case witness size).
>
> I don't think the blind approach of activating many complex covenants at the same time is conservative enough in Bitcoin, where one might design "malicious" L2 contracts, of which the game-theory is not fully understood.
>
> See e.g https://blog.bitmex.com/txwithhold-smart-contracts/
>
> > ### Proof-of-concept
> > I’ve implemented a rough demo** of spending an HTLC output that pays
> > to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> > is to commit to all active HTLCs in a merkle tree, and have the
> > spender provide merkle proofs for the HTLCs to claim, claiming the sum
> > into a new output. The remainder goes back into a new output with the
> > claimed HTLCs removed from the merkle tree.
>
> > An interesting trick one can do when creating the merkle tree, is
> > sorting the HTLCs by expiry. This means that one can, in the timeout
> > case, claim a subtree of HTLCs using a single merkle proof (and RBF
> > this batched timeout claim as more and more HTLCs expire) reducing the
> > timeout case to constant size witness (or rather logarithmic in the
> > total number of HTLCs).
>
> > **Consider it an experiment, as it is missing a lot before it could be
> > usable in any real commitment setting.
>
> I think this is an interesting question if more advanced cryptosystems based on assumptions other than the DL problem could constitute a factor of scalability of LN payment throughput by orders of magnitude, by decoupling number of off-chain payments from the growth of the on-chain witness size needed to claim them, without lowering in security as with trimmed HTLC due to dust limits.
>
> Best,
> Antoine
>
> On Thu, Oct 26, 2023 at 20:28, Johan Torås Halseth via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> Hi all,
>>
>> After the transaction recycling has spurred some discussion the last
>> week or so, I figured it could be worth sharing some research I’ve
>> done into HTLC output aggregation, as it could be relevant for how to
>> avoid this problem in a future channel type.
>>
>> TLDR; With the right covenant we can create HTLC outputs that are much
>> more chain efficient, not prone to tx recycling and harder to jam.
>>
>> ## Transaction recycling
>> The transaction recycling attack is made possible by the change made
>> to HTLC second level transactions for the anchor channel type[8];
>> making it possible to add fees to the transaction by adding inputs
>> without violating the signature. For the legacy channel type this
>> attack was not possible, as all fees were taken from the HTLC outputs
>> themselves, and had to be agreed upon by channel counterparties during
>> signing (of course this has its own problems, which is why we wanted
>> to change it).
>>
>> The idea of HTLC output aggregation is to collapse all HTLC outputs on
>> the commitment to a single one. This has many benefits (that I’ll get
>> to), one of them being the possibility to let the spender claim the
>> portion of the output that they’re right to, deciding how much should
>> go to fees. Note that this requires a covenant to be possible.
>>
>> ## A single HTLC output
>> Today, every forwarded HTLC results in an output that needs to be
>> manifested on the commitment transaction in order to claw back money
>> in case of an uncooperative channel counterparty. This puts a limit on
>> the number of active HTLCs (in order for the commitment transaction to
>> not become too large) which makes it possible to jam the channel with
>> small amounts of capital [1]. It also turns out that having this limit
>> be large makes it expensive and complicated to sweep the outputs
>> efficiently [2].
>>
>> Instead of having new HTLC outputs manifest for each active
>> forwarding, with covenants on the base layer one could create a single
>> aggregated output on the commitment. The output amount being the sum
>> of the active HTLCs (offered and received), alternatively one output
>> for received and one for offered. When spending this output, you would
>> only be entitled to the fraction of the amount corresponding to the
>> HTLCs you know the preimage for (received), or that has timed out
>> (offered).
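
(A minimal sketch of the entitlement rule described above, using an illustrative Htlc structure that is not part of any spec; an actual covenant would enforce this split on-chain rather than in Python:)

    from dataclasses import dataclass
    from hashlib import sha256

    @dataclass
    class Htlc:              # illustrative fields only
        amount_sat: int
        payment_hash: bytes
        expiry_height: int
        offered: bool        # True = offered to the peer, False = received

    def claimable_split(htlcs, preimages, height):
        """Split the aggregated output: what the spender may take vs. what
        must go back into the covenant-restricted output."""
        claimed = 0
        for h in htlcs:
            if not h.offered and any(sha256(p).digest() == h.payment_hash
                                     for p in preimages):
                claimed += h.amount_sat    # received HTLC, preimage known
            elif h.offered and height >= h.expiry_height:
                claimed += h.amount_sat    # offered HTLC, timed out
        total = sum(h.amount_sat for h in htlcs)
        return claimed, total - claimed    # (to spender/fees, remainder)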
>>
>> ## Impacts to transaction recycling
>> Depending on the capabilities of the covenant available (e.g.
>> restricting the number of inputs to the transaction) the transaction
>> spending the aggregated HTLC output can be made self sustained: the
>> spender will be able to claim what is theirs (preimage or timeout) and
>> send it to whatever output they want, or to fees. The remainder will
>> go back into a covenant restricted output with the leftover HTLCs.
>> Note that this most likely requires Eltoo in order to not enable fee
>> siphoning[7].
>>
>> ## Impacts to slot jamming
>> With the aggregated output being a reality, it changes the nature of
>> “slot jamming” [1] significantly. While channel capacity must still be
>> reserved for in-flight HTLCs, one no longer needs to allocate a
>> commitment output for each up to some hardcoded limit.
>>
>> In today’s protocol this limit is 483, and I believe most
>> implementations default to an even lower limit. This leads to channel
>> jamming being quite inexpensive, as one can quickly fill a channel
>> with small HTLCs, without needing a significant amount of capital to
>> do so.
>>
>> The origin of the 483 slot limit is the worst case commitment size
>> before getting into unstandard territory [3]. With an aggregated
>> output this would no longer be the case, as adding HTLCs would no
>> longer affect commitment size. Instead, the full on-chain footprint of
>> an HTLC would be deferred until claim time.
>>
>> Does this mean one could lift, or even remove the limit for number of
>> active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
>> rid of the problem entirely, but mitigates it quite a bit.
>>
>> ### Slot jamming attack scenario
>> Consider the scenario where an attacker sends a large number of
>> non-dust* HTLCs across a channel, and the channel parties enforce no
>> limit on the number of active HTLCs.
>>
>> The number of payments would not affect the size of the commitment
>> transaction at all, only the size of the witness that must be
>> presented when claiming or timing out the HTLCs. This means that there
>> is still a point at which chain fees get high enough for the HTLC to
>> be uneconomical to claim. This is no different than in today’s spec,
>> and such HTLCs will just be stranded on-chain until chain fees
>> decrease, at which point there is a race between the success and
>> timeout spends.
>>
>> There seems to be no way around this; if you want to claim an HTLC
>> on-chain, you need to put the preimage on-chain. And when the HTLC
>> first reaches you, you have no way of predicting the future chain fee.
>> With a large number of uneconomical HTLCs in play, the total BTC
>> exposure could still be very large, so you might want to limit this
>> somewhat.
>>
>> * Note that as long as the sum of HTLCs exceeds the dust limit, one
>> could manifest the output on the transaction.
>>
>> ## The good news
>> With an aggregated HTLC output, the number of HTLCs would no longer
>> impact the commitment transaction size while the channel is open and
>> operational.
>>
>> The marginal cost of claiming an HTLC with a preimage on-chain would
>> be much lower; no new inputs or outputs, only a linear increase in the
>> witness size. With a covenant primitive available, the extra footprint
>> of the timeout and success transactions would no longer exist.
>>
>> Claiming timed out HTLCs could still be made close to constant size
>> (no preimage to present), so no additional on-chain cost with more
>> HTLCs.
>>
>> ## The bad news
>> The most obvious problem is that we would need a new covenant
>> primitive on L1 (see below). However, I think it could be beneficial
>> to start exploring these ideas now in order to guide the L1 effort
>> towards something we could utilize to its fullest on L2.
>>
>> As mentioned, even with a functioning covenant, we don’t escape the
>> fact that a preimage needs to go on-chain, pricing out HTLCs at
>> certain fee rates. This is analogous to the dust exposure problem
>> discussed in [6], and makes some sort of limit still required.
>>
>> ### Open question
>> With PTLCs, could one create a compact proof showing that you know the
>> preimage for m-of-n of the satoshis in the output? (some sort of
>> threshold signature).
>>
>> If we could do this we would be able to remove the slot jamming issue
>> entirely; any number of active PTLCs would not change the on-chain
>> cost of claiming them.
>>
>> ## Covenant primitives
>> A recursive covenant is needed to achieve this. Something like OP_CTV
>> and OP_APO seems insufficient, since the number of ways the set of
>> HTLCs could be claimed would cause combinatorial blowup in the number
>> of possible spending transactions.
>>
>> Personally, I’ve found the simple yet powerful properties of
>> OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
>> particularly interesting for the use case, but I’m certain many of the
>> other proposals could achieve the same thing. More direct inspection
>> like you get from a proposal like OP_TX[9] would also most likely have
>> the building blocks needed.
>>
>> ### Proof-of-concept
>> I’ve implemented a rough demo** of spending an HTLC output that pays
>> to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
>> is to commit to all active HTLCs in a merkle tree, and have the
>> spender provide merkle proofs for the HTLCs to claim, claiming the sum
>> into a new output. The remainder goes back into a new output with the
>> claimed HTLCs removed from the merkle tree.
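
(A simplified model of the state transition the proof-of-concept enforces, reusing the illustrative Htlc structure from the earlier sketch and a plain binary merkle tree as a stand-in for the real commitment; the actual merkle proofs and script encoding of [5] are omitted:)

    from hashlib import sha256

    def leaf_hash(htlc):
        data = (htlc.amount_sat.to_bytes(8, "big") + htlc.payment_hash
                + htlc.expiry_height.to_bytes(4, "big"))
        return sha256(data).digest()

    def merkle_root(leaves):
        """Plain binary merkle root (odd node promoted)."""
        level = list(leaves)
        if not level:
            return sha256(b"empty").digest()
        while len(level) > 1:
            nxt = [sha256(level[i] + level[i + 1]).digest()
                   for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:
                nxt.append(level[-1])
            level = nxt
        return level[0]

    def claim(active, to_claim):
        """One aggregated-claim step: take the claimed sum, and re-commit the
        remaining HTLCs (plus their value) in a new covenant output."""
        remaining = [h for h in active if h not in to_claim]
        claimed_sat = sum(h.amount_sat for h in to_claim)
        new_output = (merkle_root(leaf_hash(h) for h in remaining),
                      sum(h.amount_sat for h in remaining))
        return claimed_sat, new_output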
>>
>> An interesting trick one can do when creating the merkle tree, is
>> sorting the HTLCs by expiry. This means that one can, in the timeout
>> case, claim a subtree of HTLCs using a single merkle proof (and RBF
>> this batched timeout claim as more and more HTLCs expire) reducing the
>> timeout case to constant size witness (or rather logarithmic in the
>> total number of HTLCs).
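
(A minimal sketch of why sorting by expiry makes the batched timeout claim logarithmic, assuming a perfect binary tree whose left-most 2**k leaves have all expired; helper names are illustrative, not taken from the PoC:)

    from hashlib import sha256
    from math import ceil, log2

    def _parent(l, r):
        return sha256(l + r).digest()

    def build_levels(leaves):
        """Perfect binary tree over the leaves (padded with a dummy leaf).
        Leaves are assumed pre-sorted by expiry, oldest first."""
        size = 1 << max(1, ceil(log2(max(len(leaves), 1))))
        level = list(leaves) + [sha256(b"pad").digest()] * (size - len(leaves))
        levels = [level]
        while len(level) > 1:
            level = [_parent(level[i], level[i + 1])
                     for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def expired_prefix_proof(levels, k):
        """The 2**k oldest leaves form the left-most subtree at height k.
        Its root plus one sibling per higher level (log-size) reconnects it
        to the committed root."""
        subtree_root = levels[k][0]
        siblings = [levels[h][1] for h in range(k, len(levels) - 1)]
        return subtree_root, siblings

    def verify_prefix(root, subtree_root, siblings):
        node = subtree_root
        for sib in siblings:           # the prefix subtree is always the left child
            node = _parent(node, sib)
        return node == root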
>>
>> **Consider it an experiment, as it is missing a lot before it could be
>> usable in any real commitment setting.
>>
>>
>> [1] https://bitcoinops.org/en/topics/channel-jamming-attacks/#htlc-jamming-attack
>> [2] https://github.com/lightning/bolts/issues/845
>> [3] https://github.com/lightning/bolts/blob/aad959a297ff66946effb165518143be15777dd6/02-peer-protocol.md#rationale-7
>> [4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021182.html
>> [5] https://github.com/halseth/tapsim/blob/b07f29804cf32dce0168ab5bb40558cbb18f2e76/examples/matt/claimpool/script.txt
>> [6] https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html
>> [7] https://github.com/lightning/bolts/issues/845#issuecomment-937736734
>> [8] https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/03-transactions.md?plain=1#L333
>> [9] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020450.html