From: Antoine Riard <antoine.riard@gmail.com>
Date: Tue, 21 Nov 2023 02:39:45 +0000
To: Johan Torås Halseth <johanth@gmail.com>,
 Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>,
 lightning-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] HTLC output aggregation as a mitigation for tx
 recycling, jamming, and on-chain efficiency (covenants)

Hi Johan,

A few comments.

## Transaction recycling
The transaction recycling attack is made possible by the change made
to HTLC second level transactions for the anchor channel type[8];
making it possible to add fees to the transaction by adding inputs
without violating the signature. For the legacy channel type this
attack was not possible, as all fees were taken from the HTLC outputs
themselves, and had to be agreed upon by channel counterparties during
signing (of course this has its own problems, which is why we wanted
to change it).

The attack works on legacy channels too: if the holder (or local)
commitment transaction confirms first, the second-stage HTLC claim
transaction is fully malleable by the counterparty.

See
https://github.com/lightning/bolts/blob/master/03-transactions.md#offered-htlc-outputs
(only remote_htlcpubkey required)

Note a replacement cycling attack works in a future package-relay world too.

See test:
https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf

> The idea of HTLC output aggregation is to collapse all HTLC outputs on
> the commitment to a single one. This has many benefits (that I’ll get
> to), one of them being the possibility to let the spender claim the
> portion of the output that they’re right to, deciding how much should
> go to fees. Note that this requires a covenant to be possible.

Another advantage of HTLC output aggregation is the reduction of
fee-bumping reserve requirements on channel counterparties, as the
common fields of second-stage HTLC transactions (nVersion, nLocktime,
...) *could* be shared.
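The saving is easy to put rough numbers on. A minimal sketch, with assumed (not BOLT-3-exact) size constants, of why sharing the fixed transaction fields across aggregated claims shrinks the fee-bumping reserve:

```python
# Rough, illustrative vbyte accounting: claiming n HTLCs in one shared
# transaction vs. n separate second-stage transactions. All constants
# are ballpark assumptions, not exact BOLT-3 weights.
TX_OVERHEAD_VB = 11  # nVersion, nLocktime, counts, segwit marker (approx.)
INPUT_VB = 68        # one input incl. witness, very rough
OUTPUT_VB = 31       # one output, rough

def separate_txs(n: int) -> int:
    """Each HTLC claimed by its own tx: fixed overhead paid n times."""
    return n * (TX_OVERHEAD_VB + INPUT_VB + OUTPUT_VB)

def shared_tx(n: int) -> int:
    """All HTLC claims share one tx: fixed overhead paid once."""
    return TX_OVERHEAD_VB + n * (INPUT_VB + OUTPUT_VB)

# Saving grows linearly with the number of aggregated claims.
savings_10 = separate_txs(10) - shared_tx(10)
```

The saving per aggregated claim is just the fixed-field overhead, but it compounds with the feerate the reserve must be provisioned for.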

> ## A single HTLC output
> Today, every forwarded HTLC results in an output that needs to be
> manifested on the commitment transaction in order to claw back money
> in case of an uncooperative channel counterparty. This puts a limit on
> the number of active HTLCs (in order for the commitment transaction to
> not become too large) which makes it possible to jam the channel with
> small amounts of capital [1]. It also turns out that having this limit
> be large makes it expensive and complicated to sweep the outputs
> efficiently [2].

> Instead of having new HTLC outputs manifest for each active
> forwarding, with covenants on the base layer one could create a single
> aggregated output on the commitment. The output amount being the sum
> of the active HTLCs (offered and received), alternatively one output
> for received and one for offered. When spending this output, you would
> only be entitled to the fraction of the amount corresponding to the
> HTLCs you know the preimage for (received), or that has timed out
> (offered).
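The claim rule quoted above can be sketched with a hypothetical data model (the `Htlc` type and its field names are illustrative assumptions, not a spec):

```python
# Sketch: given an aggregated HTLC output, a claimant is entitled only
# to the sum of the HTLCs it can satisfy -- received HTLCs with a known
# preimage, or offered HTLCs whose timeout has passed.
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Htlc:
    amount_sat: int
    payment_hash: bytes
    cltv_expiry: int
    offered: bool  # True = offered (timeout path), False = received (preimage path)

def claimable_sat(htlcs, preimages, block_height):
    known = {sha256(p).digest() for p in preimages}
    total = 0
    for h in htlcs:
        if h.offered and block_height >= h.cltv_expiry:
            total += h.amount_sat   # timeout claim
        elif not h.offered and h.payment_hash in known:
            total += h.amount_sat   # preimage claim
    return total
```

The covenant would enforce exactly this split: the claimed fraction may go anywhere (including fees), the remainder stays encumbered.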

> ## Impacts to transaction recycling
> Depending on the capabilities of the covenant available (e.g.
> restricting the number of inputs to the transaction) the transaction
> spending the aggregated HTLC output can be made self sustained: the
> spender will be able to claim what is theirs (preimage or timeout) and
> send it to whatever output they want, or to fees. The remainder will
> go back into a covenant restricted output with the leftover HTLCs.
> Note that this most likely requires Eltoo in order to not enable fee
> siphoning[7].

I think one of the weaknesses of this approach is the level of malleability
still left to the counterparty, who might burn to miner fees all the
accumulated HTLC value promised to the other party, even where the
preimages have been revealed off-chain.

I wonder if a safer approach, eliminating a lot of competing-interests
mempool games, wouldn't be to segregate HTLC claims into two separate
outputs, with full replication of the HTLC lockscripts in both outputs,
and let a covenant accept or reject aggregated claims based on a
satisfying witness and the chain-state condition for the timelock.

> ## Impacts to slot jamming
> With the aggregated output being a reality, it changes the nature of
> “slot jamming” [1] significantly. While channel capacity must still be
> reserved for in-flight HTLCs, one no longer needs to allocate a
> commitment output for each up to some hardcoded limit.

> In today’s protocol this limit is 483, and I believe most
> implementations default to an even lower limit. This leads to channel
> jamming being quite inexpensive, as one can quickly fill a channel
> with small HTLCs, without needing a significant amount of capital to
> do so.

> The origins of the 483 slot limits is the worst case commitment size
> before getting into unstandard territory [3]. With an aggregated
> output this would no longer be the case, as adding HTLCs would no
> longer affect commitment size. Instead, the full on-chain footprint of
> an HTLC would be deferred until claim time.

> Does this mean one could lift, or even remove the limit for number of
> active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
> rid of the problem entirely, but mitigates it quite a bit.

Yes, the protocol limit of 483 is a long-term limit on the payment
throughput of the LN, though as an upper bound we also have the dust
limits and mempool feerate fluctuations rendering the claim of such
aggregated dust outputs irrelevant. Aggregated claims might give a more
dynamic margin for what is a tangible and trust-minimized HTLC payment.
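A toy economic-viability check makes the dust argument concrete; the marginal witness size below is an assumed constant, not a measured figure:

```python
# Sketch: the marginal cost of adding one HTLC claim to an aggregated
# spend is roughly its extra witness bytes (preimage plus any proof
# overhead) times the prevailing feerate. The constant is an assumption.
MARGINAL_WITNESS_VB = 40  # assumed: ~32-byte preimage + overhead, in vbytes

def economical(htlc_amount_sat: int, feerate_sat_per_vb: float) -> bool:
    """An HTLC is worth claiming only if it exceeds its own marginal fee."""
    return htlc_amount_sat > MARGINAL_WITNESS_VB * feerate_sat_per_vb
```

At 10 sat/vb an assumed 40-vb marginal claim prices out any HTLC under 400 sats; the threshold slides with mempool feerates, which is the "dynamic margin" above.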

> ### Slot jamming attack scenario
> Consider the scenario where an attacker sends a large number of
> non-dust* HTLCs across a channel, and the channel parties enforce no
> limit on the number of active HTLCs.

> The number of payments would not affect the size of the commitment
> transaction at all, only the size of the witness that must be
> presented when claiming or timing out the HTLCs. This means that there
> is still a point at which chain fees get high enough for the HTLC to
> be uneconomical to claim. This is no different than in today’s spec,
> and such HTLCs will just be stranded on-chain until chain fees
> decrease, at which point there is a race between the success and
> timeout spends.

> There seems to be no way around this; if you want to claim an HTLC
> on-chain, you need to put the preimage on-chain. And when the HTLC
> first reaches you, you have no way of predicting the future chain fee.
> With a large number of uneconomical HTLCs in play, the total BTC
> exposure could still be very large, so you might want to limit this
> somewhat.

> * Note that as long as the sum of HTLCs exceeds the dust limit, one
> could manifest the output on the transaction.

Unless we introduce sliding windows during which an HTLC can be claimed,
and freeze the HTLC-timeout path accordingly.

See: https://fc22.ifca.ai/preproceedings/119.pdf

Bad news: you will need off-chain consensus on the feerate threshold at
which the sliding windows kick in among all the routing nodes
participating in the HTLC payment path.

> ## The good news
> With an aggregated HTLC output, the number of HTLCs would no longer
> impact the commitment transaction size while the channel is open and
> operational.

> The marginal cost of claiming an HTLC with a preimage on-chain would
> be much lower; no new inputs or outputs, only a linear increase in the
> witness size. With a covenant primitive available, the extra footprint
> of the timeout and success transactions would no longer exist.

> Claiming timed out HTLCs could still be made close to constant size
> (no preimage to present), so no additional on-chain cost with more
> HTLCs.

I wonder if, in a PTLC world, you can generate an aggregate curve point
for every plausible sub-combination of scalars. Unrevealed curve points
in a taproot branch are cheap. It might make claiming an offered HTLC
near-constant size too.

> ## The bad news
> The most obvious problem is that we would need a new covenant
> primitive on L1 (see below). However, I think it could be beneficial
> to start exploring these ideas now in order to guide the L1 effort
> towards something we could utilize to its fullest on L2.

> As mentioned, even with a functioning covenant, we don’t escape the
> fact that a preimage needs to go on-chain, pricing out HTLCs at
> certain fee rates. This is analogous to the dust exposure problem
> discussed in [6], and makes some sort of limit still required.

Ideally such covenant mechanisms would generalize to the withdrawal phase
of payment pools, where dozens or hundreds of participants wish to confirm
their non-competing withdrawal transactions concurrently. While unlocking
preimage or scalar can be aggregated in a single witness, there will still
be a need to verify that each withdrawal output associated with an
unlocking secret is present in the transaction.

Maybe a few other L2s answer this N-inputs-to-M-outputs pattern with
advanced locking-script conditions to satisfy.

> ### Open question
> With PTLCs, could one create a compact proof showing that you know the
> preimage for m-of-n of the satoshis in the output? (some sort of
> threshold signature).

> If we could do this we would be able to remove the slot jamming issue
> entirely; any number of active PTLCs would not change the on-chain
> cost of claiming them.

See comments above; I think there is a plausible scheme here: you just
generate all the possible point combinations, and only reveal the one
you need at broadcast.
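The subset-combination idea can be illustrated with plain modular arithmetic standing in for curve-point addition (a real scheme would use secp256k1 points; the modulus below is only a placeholder):

```python
# Sketch of pre-generating one aggregate "point" per non-empty subset
# of PTLC scalars. With integers standing in for curve points, point
# addition becomes modular addition. For n PTLCs there are 2**n - 1
# subsets, each a cheap unrevealed taproot leaf; only one is revealed
# at broadcast time.
from itertools import combinations

P = 2**255 - 19  # placeholder modulus, NOT secp256k1's group order

def aggregate_leaves(scalars):
    leaves = {}
    for r in range(1, len(scalars) + 1):
        for subset in combinations(range(len(scalars)), r):
            leaves[subset] = sum(scalars[i] for i in subset) % P
    return leaves

leaves = aggregate_leaves([11, 22, 33])  # 2**3 - 1 = 7 precomputed aggregates
```

The exponential leaf count is the obvious cost; the bet is that unrevealed taproot branches make it tolerable for realistic HTLC counts.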

> ## Covenant primitives
> A recursive covenant is needed to achieve this. Something like OP_CTV
> and OP_APO seems insufficient, since the number of ways the set of
> HTLCs could be claimed would cause combinatorial blowup in the number
> of possible spending transactions.

> Personally, I’ve found the simple yet powerful properties of
> OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> particularly interesting for the use case, but I’m certain many of the
> other proposals could achieve the same thing. More direct inspection
> like you get from a proposal like OP_TX[9] would also most likely have
> the building blocks needed.

As pointed out during the CTV drama and payment pool public discussion
years ago, what would be very useful to tie-break among all covenant
constructions would be an efficiency simulation framework. Even if the same
semantic can be achieved independently by multiple covenants, they
certainly do not have the same performance trade-offs (e.g. average and
worst-case witness size).
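A toy version of such an efficiency simulation framework could look like this; both witness-size models are invented placeholders, meant only to show the harness shape:

```python
# Toy harness comparing average and worst-case witness size of two
# hypothetical covenant encodings across all claim-set sizes. The size
# models are made up for illustration; a real framework would plug in
# measured encodings for each covenant proposal.
import math

def witness_merkle(n_total: int, n_claimed: int) -> int:
    # assumed model: one merkle proof (32-byte hashes) per claimed HTLC
    depth = max(1, math.ceil(math.log2(n_total)))
    return n_claimed * (32 * depth + 33)

def witness_flat(n_total: int, n_claimed: int) -> int:
    # assumed model: reveal the whole HTLC set regardless of claim count
    return 45 * n_total

def profile(model, n_total: int):
    """(average, worst-case) witness size over claim sizes 1..n_total."""
    sizes = [model(n_total, k) for k in range(1, n_total + 1)]
    return sum(sizes) / len(sizes), max(sizes)

avg_m, worst_m = profile(witness_merkle, 64)
avg_f, worst_f = profile(witness_flat, 64)
```

Even this crude shape shows the tie-break: one encoding wins on average, the other on worst case, which is exactly the trade-off a shared framework would make visible across proposals.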

I don't think the blind approach of activating many complex covenants at
the same time is conservative enough in Bitcoin, where one might design
"malicious" L2 contracts, of which the game-theory is not fully understood.

See e.g. https://blog.bitmex.com/txwithhold-smart-contracts/

> ### Proof-of-concept
> I’ve implemented a rough demo** of spending an HTLC output that pays
> to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> is to commit to all active HTLCs in a merkle tree, and have the
> spender provide merkle proofs for the HTLCs to claim, claiming the sum
> into a new output. The remainder goes back into a new output with the
> claimed HTLCs removed from the merkle tree.

> An interesting trick one can do when creating the merkle tree, is
> sorting the HTLCs by expiry. This means that one in the timeout case
> claim a subtree of HTLCs using a single merkle proof (and RBF this
> batched timeout claim as more and more HTLCs expire) reducing the
> timeout case to constant size witness (or rather logarithmic in the
> total number of HTLCs).

> **Consider it an experiment, as it is missing a lot before it could be
> usable in any real commitment setting.
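The expiry-sorted merkle tree trick quoted above can be sketched as follows, assuming a power-of-two leaf count and a claim of the leftmost (earliest-expiring) subtree; the hashing scheme is a simplification, not the demo's actual encoding:

```python
# Sketch: with HTLC leaves sorted by expiry, the earliest-expiring
# leaves form one aligned subtree, so a batched timeout claim opens
# them with a single proof of one sibling hash per level above the
# subtree -- logarithmic, not linear, in the total HTLC count.
from hashlib import sha256

def h(b: bytes) -> bytes:
    return sha256(b).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def node_levels(leaves):
    """All tree levels, bottom-up (leaf hashes first, root last)."""
    level = [h(l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def left_subtree_proof(leaves, width):
    """Siblings linking the leftmost width-leaf subtree root to the
    merkle root. Assumes width and len(leaves) are powers of two."""
    levels = node_levels(leaves)
    depth = width.bit_length() - 1  # level index of the subtree root
    return [levels[d][1] for d in range(depth, len(levels) - 1)]

def claim_left_subtree(subtree_leaves, proof, root):
    acc = merkle_root(subtree_leaves)  # recompute the subtree root
    for sib in proof:
        acc = h(acc + sib)             # leftmost subtree: always left child
    return acc == root
```

For 8 HTLCs, timing out the 2 earliest ones needs a 2-hash proof instead of 2 independent 3-hash proofs, and the batch can be RBF-replaced with a wider subtree as more HTLCs expire.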

I think it is an interesting question whether more advanced cryptosystems,
based on assumptions other than the DL problem, could scale LN payment
throughput by orders of magnitude, by decoupling the number of off-chain
payments from the growth of the on-chain witness size needed to claim
them, without the loss of security we see with HTLCs trimmed due to dust
limits.

Best,
Antoine

On Thu, Oct 26, 2023 at 8:28 PM Johan Torås Halseth via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> After the transaction recycling has spurred some discussion the last
> week or so, I figured it could be worth sharing some research I’ve
> done into HTLC output aggregation, as it could be relevant for how to
> avoid this problem in a future channel type.
>
> TLDR; With the right covenant we can create HTLC outputs that are much
> more chain efficient, not prone to tx recycling and harder to jam.
>
> ## Transaction recycling
> The transaction recycling attack is made possible by the change made
> to HTLC second level transactions for the anchor channel type[8];
> making it possible to add fees to the transaction by adding inputs
> without violating the signature. For the legacy channel type this
> attack was not possible, as all fees were taken from the HTLC outputs
> themselves, and had to be agreed upon by channel counterparties during
> signing (of course this has its own problems, which is why we wanted
> to change it).
>
> The idea of HTLC output aggregation is to collapse all HTLC outputs on
> the commitment to a single one. This has many benefits (that I’ll get
> to), one of them being the possibility to let the spender claim the
> portion of the output that they’re right to, deciding how much should
> go to fees. Note that this requires a covenant to be possible.
>
> ## A single HTLC output
> Today, every forwarded HTLC results in an output that needs to be
> manifested on the commitment transaction in order to claw back money
> in case of an uncooperative channel counterparty. This puts a limit on
> the number of active HTLCs (in order for the commitment transaction to
> not become too large) which makes it possible to jam the channel with
> small amounts of capital [1]. It also turns out that having this limit
> be large makes it expensive and complicated to sweep the outputs
> efficiently [2].
>
> Instead of having new HTLC outputs manifest for each active
> forwarding, with covenants on the base layer one could create a single
> aggregated output on the commitment. The output amount being the sum
> of the active HTLCs (offered and received), alternatively one output
> for received and one for offered. When spending this output, you would
> only be entitled to the fraction of the amount corresponding to the
> HTLCs you know the preimage for (received), or that has timed out
> (offered).
>
> ## Impacts to transaction recycling
> Depending on the capabilities of the covenant available (e.g.
> restricting the number of inputs to the transaction) the transaction
> spending the aggregated HTLC output can be made self sustained: the
> spender will be able to claim what is theirs (preimage or timeout) and
> send it to whatever output they want, or to fees. The remainder will
> go back into a covenant restricted output with the leftover HTLCs.
> Note that this most likely requires Eltoo in order to not enable fee
> siphoning[7].
>
> ## Impacts to slot jamming
> With the aggregated output being a reality, it changes the nature of
> “slot jamming” [1] significantly. While channel capacity must still be
> reserved for in-flight HTLCs, one no longer needs to allocate a
> commitment output for each up to some hardcoded limit.
>
> In today’s protocol this limit is 483, and I believe most
> implementations default to an even lower limit. This leads to channel
> jamming being quite inexpensive, as one can quickly fill a channel
> with small HTLCs, without needing a significant amount of capital to
> do so.
>
> The origins of the 483 slot limits is the worst case commitment size
> before getting into unstandard territory [3]. With an aggregated
> output this would no longer be the case, as adding HTLCs would no
> longer affect commitment size. Instead, the full on-chain footprint of
> an HTLC would be deferred until claim time.
>
> Does this mean one could lift, or even remove the limit for number of
> active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
> rid of the problem entirely, but mitigates it quite a bit.
>
> ### Slot jamming attack scenario
> Consider the scenario where an attacker sends a large number of
> non-dust* HTLCs across a channel, and the channel parties enforce no
> limit on the number of active HTLCs.
>
> The number of payments would not affect the size of the commitment
> transaction at all, only the size of the witness that must be
> presented when claiming or timing out the HTLCs. This means that there
> is still a point at which chain fees get high enough for the HTLC to
> be uneconomical to claim. This is no different than in today’s spec,
> and such HTLCs will just be stranded on-chain until chain fees
> decrease, at which point there is a race between the success and
> timeout spends.
>
> There seems to be no way around this; if you want to claim an HTLC
> on-chain, you need to put the preimage on-chain. And when the HTLC
> first reaches you, you have no way of predicting the future chain fee.
> With a large number of uneconomical HTLCs in play, the total BTC
> exposure could still be very large, so you might want to limit this
> somewhat.
>
> * Note that as long as the sum of HTLCs exceeds the dust limit, one
> could manifest the output on the transaction.
>
> ## The good news
> With an aggregated HTLC output, the number of HTLCs would no longer
> impact the commitment transaction size while the channel is open and
> operational.
>
> The marginal cost of claiming an HTLC with a preimage on-chain would
> be much lower; no new inputs or outputs, only a linear increase in the
> witness size. With a covenant primitive available, the extra footprint
> of the timeout and success transactions would no longer exist.
>
> Claiming timed out HTLCs could still be made close to constant size
> (no preimage to present), so no additional on-chain cost with more
> HTLCs.
>
> ## The bad news
> The most obvious problem is that we would need a new covenant
> primitive on L1 (see below). However, I think it could be beneficial
> to start exploring these ideas now in order to guide the L1 effort
> towards something we could utilize to its fullest on L2.
>
> As mentioned, even with a functioning covenant, we don’t escape the
> fact that a preimage needs to go on-chain, pricing out HTLCs at
> certain fee rates. This is analogous to the dust exposure problem
> discussed in [6], and makes some sort of limit still required.
>
> ### Open question
> With PTLCs, could one create a compact proof showing that you know the
> preimage for m-of-n of the satoshis in the output? (some sort of
> threshold signature).
>
> If we could do this we would be able to remove the slot jamming issue
> entirely; any number of active PTLCs would not change the on-chain
> cost of claiming them.
>
> ## Covenant primitives
> A recursive covenant is needed to achieve this. Something like OP_CTV
> and OP_APO seems insufficient, since the number of ways the set of
> HTLCs could be claimed would cause combinatorial blowup in the number
> of possible spending transactions.
>
> Personally, I’ve found the simple yet powerful properties of
> OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> particularly interesting for the use case, but I’m certain many of the
> other proposals could achieve the same thing. More direct inspection
> like you get from a proposal like OP_TX[9] would also most likely have
> the building blocks needed.
>
> ### Proof-of-concept
> I’ve implemented a rough demo** of spending an HTLC output that pays
> to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> is to commit to all active HTLCs in a merkle tree, and have the
> spender provide merkle proofs for the HTLCs to claim, claiming the sum
> into a new output. The remainder goes back into a new output with the
> claimed HTLCs removed from the merkle tree.
>
> An interesting trick one can do when creating the merkle tree, is
> sorting the HTLCs by expiry. This means that one in the timeout case
> claim a subtree of HTLCs using a single merkle proof (and RBF this
> batched timeout claim as more and more HTLCs expire) reducing the
> timeout case to constant size witness (or rather logarithmic in the
> total number of HTLCs).
>
> **Consider it an experiment, as it is missing a lot before it could be
> usable in any real commitment setting.
>
>
> [1]
> https://bitcoinops.org/en/topics/channel-jamming-attacks/#htlc-jamming-attack
> [2] https://github.com/lightning/bolts/issues/845
> [3]
> https://github.com/lightning/bolts/blob/aad959a297ff66946effb165518143be15777dd6/02-peer-protocol.md#rationale-7
> [4]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021182.html
> [5]
> https://github.com/halseth/tapsim/blob/b07f29804cf32dce0168ab5bb40558cbb18f2e76/examples/matt/claimpool/script.txt
> [6]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html
> [7] https://github.com/lightning/bolts/issues/845#issuecomment-937736734
> [8]
> https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/03-transactions.md?plain=1#L333
> [9]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020450.html
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>

--000000000000ab1685060aa084dc
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Johan,

Few comments.

## Transaction recycling

> The transaction recycling attack is made possible by the change made
> to HTLC second level transactions for the anchor channel type[8];
> making it possible to add fees to the transaction by adding inputs
> without violating the signature. For the legacy channel type this
> attack was not possible, as all fees were taken from the HTLC outputs
> themselves, and had to be agreed upon by channel counterparties during
> signing (of course this has its own problems, which is why we wanted
> to change it).

Actually, the attack works on legacy channels as well: if the holder's
(local) commitment transaction confirms first, the second-stage HTLC
claim transaction is fully malleable by the counterparty.

See https://github.com/lightning/bolts/blob/master/03-transactions.md#offered-htlc-outputs
(only remote_htlcpubkey is required).

Note that a replacement cycling attack works in a future package-relay
world too.

See this test: https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
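To make the malleability point concrete, here is a toy Python model (my own simplification, not consensus code) of which transaction parts a signature commits to under different sighash flags. Anchor-channel second-stage HTLC transactions carry a counterparty signature made with SIGHASH_SINGLE|SIGHASH_ANYONECANPAY, which is what lets the broadcaster (or an attacker) attach extra inputs without invalidating that signature.

```python
# Toy model of what a signature commits to under different sighash flags,
# illustrating why anchor-channel second-level HTLC transactions can have
# inputs added without invalidating the counterparty's signature.

def committed_parts(n_inputs, n_outputs, my_input_idx, flags):
    """Return the (input, output) index sets a signature commits to."""
    inputs = {my_input_idx} if "ANYONECANPAY" in flags else set(range(n_inputs))
    if "SINGLE" in flags:
        outputs = {my_input_idx}  # only the output paired with this input
    elif "NONE" in flags:
        outputs = set()
    else:  # ALL
        outputs = set(range(n_outputs))
    return inputs, outputs

# SIGHASH_ALL commits to every input and output, so adding a fee input
# would invalidate the signature.
assert committed_parts(1, 1, 0, {"ALL"}) == ({0}, {0})

# SINGLE|ANYONECANPAY commits only to the signer's own input and its
# paired output; extra inputs and outputs can be appended freely.
assert committed_parts(3, 2, 0, {"SINGLE", "ANYONECANPAY"}) == ({0}, {0})
```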
> The idea of HTLC output aggregation is to collapse all HTLC outputs on
> the commitment to a single one. This has many benefits (that I'll get
> to), one of them being the possibility to let the spender claim the
> portion of the output that they're right to, deciding how much should
> go to fees. Note that this requires a covenant to be possible.

Another advantage of HTLC output aggregation is a reduction of the
fee-bumping reserve requirements on channel counterparties, as the
common fields of second-stage HTLC transactions (nVersion, nLocktime,
...) *could* be shared.
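The claim rule described above can be sketched as follows (a hypothetical data model of my own, not the covenant script itself): when spending the aggregated output, the spender tallies received HTLCs whose preimage they hold plus offered HTLCs past expiry.

```python
# Sketch of computing the portion of an aggregated HTLC output a spender
# is entitled to: received HTLCs whose preimage is known, plus offered
# HTLCs that have timed out.
import hashlib
from dataclasses import dataclass

@dataclass
class Htlc:
    amount_sat: int
    payment_hash: bytes
    expiry_height: int
    offered: bool  # True: offered by us (timeout path); False: received

def sha256(b):
    return hashlib.sha256(b).digest()

def claimable(htlcs, preimages, height):
    """Sum of HTLC amounts this party can claim right now."""
    known = {sha256(p) for p in preimages}
    total = 0
    for h in htlcs:
        if not h.offered and h.payment_hash in known:
            total += h.amount_sat   # success path: preimage known
        elif h.offered and height >= h.expiry_height:
            total += h.amount_sat   # timeout path: HTLC expired
    return total

pre = b"\x01" * 32
htlcs = [
    Htlc(50_000, sha256(pre), 800_100, offered=False),          # preimage known
    Htlc(20_000, sha256(b"\x02" * 32), 800_050, offered=True),  # timed out
    Htlc(30_000, sha256(b"\x03" * 32), 800_200, offered=True),  # still locked
]
assert claimable(htlcs, [pre], height=800_060) == 70_000
```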
> ## A single HTLC output
> Today, every forwarded HTLC results in an output that needs to be
> manifested on the commitment transaction in order to claw back money
> in case of an uncooperative channel counterparty. This puts a limit on
> the number of active HTLCs (in order for the commitment transaction to
> not become too large) which makes it possible to jam the channel with
> small amounts of capital [1]. It also turns out that having this limit
> be large makes it expensive and complicated to sweep the outputs
> efficiently [2].
>
> Instead of having new HTLC outputs manifest for each active
> forwarding, with covenants on the base layer one could create a single
> aggregated output on the commitment. The output amount being the sum
> of the active HTLCs (offered and received), alternatively one output
> for received and one for offered. When spending this output, you would
> only be entitled to the fraction of the amount corresponding to the
> HTLCs you know the preimage for (received), or that has timed out
> (offered).
>
> ## Impacts to transaction recycling
> Depending on the capabilities of the covenant available (e.g.
> restricting the number of inputs to the transaction) the transaction
> spending the aggregated HTLC output can be made self sustained: the
> spender will be able to claim what is theirs (preimage or timeout) and
> send it to whatever output they want, or to fees. The remainder will
> go back into a covenant restricted output with the leftover HTLCs.
> Note that this most likely requires Eltoo in order to not enable fee
> siphoning[7].

I think one of the weaknesses of this approach is the level of
malleability still left to the counterparty, who could burn in miner
fees all the accumulated HTLC value promised to the counterparty, and
for which the preimages have been revealed off-chain.

I wonder if a safer approach, eliminating a lot of these
competing-interests style mempool games, wouldn't be to segregate the
HTLC claims into two separate outputs, with full replication of the
HTLC lockscripts in both outputs, and let a covenant accept or reject
aggregated claims given a satisfying witness and the chain-state
condition for the timelock.

> ## Impacts to slot jamming
> With the aggregated output being a reality, it changes the nature of
> "slot jamming" [1] significantly. While channel capacity must still be
> reserved for in-flight HTLCs, one no longer needs to allocate a
> commitment output for each up to some hardcoded limit.
>
> In today's protocol this limit is 483, and I believe most
> implementations default to an even lower limit. This leads to channel
> jamming being quite inexpensive, as one can quickly fill a channel
> with small HTLCs, without needing a significant amount of capital to
> do so.
>
> The origins of the 483 slot limits is the worst case commitment size
> before getting into unstandard territory [3]. With an aggregated
> output this would no longer be the case, as adding HTLCs would no
> longer affect commitment size. Instead, the full on-chain footprint of
> an HTLC would be deferred until claim time.
>
> Does this mean one could lift, or even remove the limit for number of
> active HTLCs? Unfortunately, the obvious approach doesn't seem to get
> rid of the problem entirely, but mitigates it quite a bit.

Yes, the protocol limit of 483 is a long-term limit on the payment
throughput of the LN, though as an upper bound we also have the dust
limits and mempool fee fluctuations rendering the claim of such
aggregated dust outputs irrelevant. Aggregated claims might give a more
dynamic margin of what constitutes a tangible and trust-minimized HTLC
payment.

> ### Slot jamming attack scenario
> Consider the scenario where an attacker sends a large number of
> non-dust* HTLCs across a channel, and the channel parties enforce no
> limit on the number of active HTLCs.
>
> The number of payments would not affect the size of the commitment
> transaction at all, only the size of the witness that must be
> presented when claiming or timing out the HTLCs. This means that there
> is still a point at which chain fees get high enough for the HTLC to
> be uneconomical to claim. This is no different than in today's spec,
> and such HTLCs will just be stranded on-chain until chain fees
> decrease, at which point there is a race between the success and
> timeout spends.
>
> There seems to be no way around this; if you want to claim an HTLC
> on-chain, you need to put the preimage on-chain. And when the HTLC
> first reaches you, you have no way of predicting the future chain fee.
> With a large number of uneconomical HTLCs in play, the total BTC
> exposure could still be very large, so you might want to limit this
> somewhat.
>
> * Note that as long as the sum of HTLCs exceeds the dust limit, one
> could manifest the output on the transaction.

Unless we introduce sliding windows during which an HTLC can be
claimed, freezing the HTLC-timeout path accordingly.

See: https://fc22.ifca.ai/preproceedings/119.pdf

Bad news: you will need off-chain consensus on the feerate threshold at
which the sliding windows kick in among all the routing nodes
participating in the HTLC payment path.
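A rough sketch of the sliding-window idea (my own toy model, not the paper's exact construction): blocks whose feerate exceeds the agreed threshold do not consume the HTLC's timelock delta, which freezes the timeout path during fee spikes.

```python
# Toy "sliding window" timelock: only blocks with feerate at or below an
# agreed threshold count toward an HTLC's expiry; high-fee blocks freeze
# the HTLC-timeout path.

def effective_expiry(start_height, nominal_delta, feerates, threshold):
    """Walk forward from start_height; only low-fee blocks consume the
    delta. feerates[i] is the feerate observed at start_height + i."""
    remaining = nominal_delta
    h = start_height
    for rate in feerates:
        if rate <= threshold:
            remaining -= 1        # this block counts toward expiry
        if remaining == 0:
            return h + 1          # first height at which timeout is valid
        h += 1
    return None  # delta not yet exhausted within the observed window

# 3-block delta; blocks 3 and 4 are frozen by a fee spike above 50 sat/vB,
# so the timeout path opens two blocks later than it nominally would.
rates = [10, 20, 90, 95, 15, 12]
assert effective_expiry(800_000, 3, rates, threshold=50) == 800_005
```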
> ## The good news
> With an aggregated HTLC output, the number of HTLCs would no longer
> impact the commitment transaction size while the channel is open and
> operational.
>
> The marginal cost of claiming an HTLC with a preimage on-chain would
> be much lower; no new inputs or outputs, only a linear increase in the
> witness size. With a covenant primitive available, the extra footprint
> of the timeout and success transactions would no longer exist.
>
> Claiming timed out HTLCs could still be made close to constant size
> (no preimage to present), so no additional on-chain cost with more
> HTLCs.

I wonder if, in a PTLC world, you could generate an aggregate curve
point for all the plausible sub-combinations of scalars. Unrevealed
curve points in a taproot branch are cheap. It might make claiming an
offered HTLC near-constant size too.
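A toy sketch of that idea (integers modulo a prime stand in for curve points here; a real construction would use secp256k1 and proper signature verification): precompute an aggregate point for every non-empty subset of PTLC scalars, park them in unrevealed taproot branches, and at claim time reveal only the scalar sum for the subset actually claimed.

```python
# Toy illustration of precomputing an aggregate "point" for every
# non-empty subset of PTLC scalars, so a claim for any subset can be
# checked against one precomputed aggregate.
from itertools import combinations

P = 2**61 - 1   # toy group modulus (stand-in for the curve order)
G = 7           # toy generator: the "point" for scalar x is x*G mod P

def point(x):
    return (x * G) % P

scalars = {"a": 111, "b": 222, "c": 333}   # per-PTLC secret scalars

# Aggregate points for all 2^n - 1 non-empty subsets; these would sit
# unrevealed (and therefore cheap) in taproot branches.
aggregates = {
    frozenset(keys): sum(point(scalars[k]) for k in keys) % P
    for r in range(1, len(scalars) + 1)
    for keys in combinations(scalars, r)
}
assert len(aggregates) == 7

# At claim time, reveal only the scalar sum for the claimed subset; the
# verifier checks it against the matching precomputed aggregate.
claimed = frozenset({"a", "c"})
scalar_sum = sum(scalars[k] for k in claimed)
assert point(scalar_sum) == aggregates[claimed]
```

Note the obvious cost: the number of branches grows as 2^n, which is why this only stays cheap while the points remain unrevealed.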
> ## The bad news
> The most obvious problem is that we would need a new covenant
> primitive on L1 (see below). However, I think it could be beneficial
> to start exploring these ideas now in order to guide the L1 effort
> towards something we could utilize to its fullest on L2.
>
> As mentioned, even with a functioning covenant, we don't escape the
> fact that a preimage needs to go on-chain, pricing out HTLCs at
> certain fee rates. This is analogous to the dust exposure problem
> discussed in [6], and makes some sort of limit still required.

Ideally such covenant mechanisms would generalize to the withdrawal
phase of payment pools, where dozens or hundreds of participants wish
to confirm their non-competing withdrawal transactions concurrently.
While the unlocking preimages or scalars can be aggregated in a single
witness, there will still be a need to verify that each withdrawal
output associated with an unlocking secret is present in the
transaction.

Maybe a few other L2s are answering this N-inputs-to-M-outputs pattern
with advanced locking script conditions to satisfy.

> ### Open question
> With PTLCs, could one create a compact proof showing that you know the
> preimage for m-of-n of the satoshis in the output? (some sort of
> threshold signature).
>
> If we could do this we would be able to remove the slot jamming issue
> entirely; any number of active PTLCs would not change the on-chain
> cost of claiming them.

See my comments above: I think there is a plausible scheme here where
you just generate all the possible point combinations, and only reveal
the one you need at broadcast.

> ## Covenant primitives
> A recursive covenant is needed to achieve this. Something like OP_CTV
> and OP_APO seems insufficient, since the number of ways the set of
> HTLCs could be claimed would cause combinatorial blowup in the number
> of possible spending transactions.
>
> Personally, I've found the simple yet powerful properties of
> OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> particularly interesting for the use case, but I'm certain many of the
> other proposals could achieve the same thing. More direct inspection
> like you get from a proposal like OP_TX[9] would also most likely have
> the building blocks needed.

As pointed out during the CTV drama and the payment pool public
discussions years ago, what would be very useful to tie-break among all
the covenant constructions would be an efficiency simulation framework.
Even if the same semantics can be achieved independently by multiple
covenants, they certainly do not have the same performance trade-offs
(e.g. average and worst-case witness size).

I don't think the blind approach of activating many complex covenants
at the same time is conservative enough for Bitcoin, where one might
design "malicious" L2 contracts whose game theory is not fully
understood.

See e.g. https://blog.bitmex.com/txwithhold-smart-contracts/
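As an illustration of what such a framework might measure, here is a back-of-the-envelope sketch (the byte figures are my own rough assumptions, not measured numbers) comparing the witness size of claiming k of n HTLCs from one aggregated output via merkle proofs against k independent second-stage spends:

```python
# Rough witness-size comparison: merkle-proof claims against an
# aggregated HTLC output vs. independent second-stage HTLC spends.
# All byte figures are coarse assumptions for illustration only.
import math

HASH = 32              # one merkle node
PREIMAGE = 32
SIG = 64               # one schnorr signature
PER_HTLC_SPEND = 300   # assumed vbytes for an independent claim

def merkle_claim_witness(n, k):
    """k preimages, k merkle proofs of depth ceil(log2(n)), one signature."""
    depth = math.ceil(math.log2(max(n, 2)))
    return k * (PREIMAGE + depth * HASH) + SIG

def independent_claims(k):
    return k * PER_HTLC_SPEND

# Claiming 10 of 64 aggregated HTLCs beats 10 independent spends under
# these assumptions.
n, k = 64, 10
assert merkle_claim_witness(n, k) < independent_claims(k)
```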
> ### Proof-of-concept
> I've implemented a rough demo** of spending an HTLC output that pays
> to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> is to commit to all active HTLCs in a merkle tree, and have the
> spender provide merkle proofs for the HTLCs to claim, claiming the sum
> into a new output. The remainder goes back into a new output with the
> claimed HTLCs removed from the merkle tree.
>
> An interesting trick one can do when creating the merkle tree, is
> sorting the HTLCs by expiry. This means that one in the timeout case
> claim a subtree of HTLCs using a single merkle proof (and RBF this
> batched timeout claim as more and more HTLCs expire) reducing the
> timeout case to constant size witness (or rather logarithmic in the
> total number of HTLCs).
>
> **Consider it an experiment, as it is missing a lot before it could be
> usable in any real commitment setting.

I think it is an interesting question whether more advanced
cryptosystems, based on assumptions other than the discrete logarithm
problem, could scale LN payment throughput by orders of magnitude, by
decoupling the number of off-chain payments from the growth of the
on-chain witness size needed to claim them, without lowering security
as with HTLCs trimmed due to dust limits.
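The expiry-sorting trick quoted above can be sketched as follows (a plain SHA-256 merkle tree and hypothetical leaf encoding, not the demo's actual commitment structure): because leaves are ordered by expiry, every HTLC expired at a given height sits in a contiguous prefix of the tree, which is what makes a single batched timeout claim possible.

```python
# Sketch of committing HTLCs to a merkle tree sorted by expiry, so that
# all HTLCs expired at some height form a contiguous prefix claimable
# with a single logarithmic-size proof.
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Plain binary merkle tree; duplicates the last node on odd levels."""
    level = [H(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# HTLCs as (amount_sat, expiry_height), sorted by expiry when building.
htlcs = sorted([(50_000, 800_010), (10_000, 800_050), (70_000, 800_020),
                (25_000, 800_090)], key=lambda h: h[1])
leaves = [amt.to_bytes(8, "big") + exp.to_bytes(4, "big")
          for amt, exp in htlcs]
root = merkle_root(leaves)

# At height 800_060, the first three leaves are expired; because the
# tree is expiry-sorted they occupy a prefix, so a single subtree proof
# covers the whole batched timeout claim (and can be RBF'd as more
# HTLCs expire).
expired = [h for h in htlcs if h[1] <= 800_060]
assert expired == htlcs[:3]
assert len(root) == 32
```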
Best,
Antoine

On Thu, Oct 26, 2023 at 20:28, Johan Torås Halseth via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> After the transaction recycling has spurred some discussion the last
> week or so, I figured it could be worth sharing some research I've
> done into HTLC output aggregation, as it could be relevant for how to
> avoid this problem in a future channel type.
>
> TLDR; With the right covenant we can create HTLC outputs that are much
> more chain efficient, not prone to tx recycling and harder to jam.
>
> ## Transaction recycling
> The transaction recycling attack is made possible by the change made
> to HTLC second level transactions for the anchor channel type[8];
> making it possible to add fees to the transaction by adding inputs
> without violating the signature. For the legacy channel type this
> attack was not possible, as all fees were taken from the HTLC outputs
> themselves, and had to be agreed upon by channel counterparties during
> signing (of course this has its own problems, which is why we wanted
> to change it).
>
> The idea of HTLC output aggregation is to collapse all HTLC outputs on
> the commitment to a single one. This has many benefits (that I'll get
> to), one of them being the possibility to let the spender claim the
> portion of the output that they're right to, deciding how much should
> go to fees. Note that this requires a covenant to be possible.
>
> ## A single HTLC output
> Today, every forwarded HTLC results in an output that needs to be
> manifested on the commitment transaction in order to claw back money
> in case of an uncooperative channel counterparty. This puts a limit on
> the number of active HTLCs (in order for the commitment transaction to
> not become too large) which makes it possible to jam the channel with
> small amounts of capital [1]. It also turns out that having this limit
> be large makes it expensive and complicated to sweep the outputs
> efficiently [2].
>
> Instead of having new HTLC outputs manifest for each active
> forwarding, with covenants on the base layer one could create a single
> aggregated output on the commitment. The output amount being the sum
> of the active HTLCs (offered and received), alternatively one output
> for received and one for offered. When spending this output, you would
> only be entitled to the fraction of the amount corresponding to the
> HTLCs you know the preimage for (received), or that has timed out
> (offered).
>
> ## Impacts to transaction recycling
> Depending on the capabilities of the covenant available (e.g.
> restricting the number of inputs to the transaction) the transaction
> spending the aggregated HTLC output can be made self sustained: the
> spender will be able to claim what is theirs (preimage or timeout) and
> send it to whatever output they want, or to fees. The remainder will
> go back into a covenant restricted output with the leftover HTLCs.
> Note that this most likely requires Eltoo in order to not enable fee
> siphoning[7].
>
> ## Impacts to slot jamming
> With the aggregated output being a reality, it changes the nature of
> "slot jamming" [1] significantly. While channel capacity must still be
> reserved for in-flight HTLCs, one no longer needs to allocate a
> commitment output for each up to some hardcoded limit.
>
> In today's protocol this limit is 483, and I believe most
> implementations default to an even lower limit. This leads to channel
> jamming being quite inexpensive, as one can quickly fill a channel
> with small HTLCs, without needing a significant amount of capital to
> do so.
>
> The origins of the 483 slot limits is the worst case commitment size
> before getting into unstandard territory [3]. With an aggregated
> output this would no longer be the case, as adding HTLCs would no
> longer affect commitment size. Instead, the full on-chain footprint of
> an HTLC would be deferred until claim time.
>
> Does this mean one could lift, or even remove the limit for number of
> active HTLCs? Unfortunately, the obvious approach doesn't seem to get
> rid of the problem entirely, but mitigates it quite a bit.
>
> ### Slot jamming attack scenario
> Consider the scenario where an attacker sends a large number of
> non-dust* HTLCs across a channel, and the channel parties enforce no
> limit on the number of active HTLCs.
>
> The number of payments would not affect the size of the commitment
> transaction at all, only the size of the witness that must be
> presented when claiming or timing out the HTLCs. This means that there
> is still a point at which chain fees get high enough for the HTLC to
> be uneconomical to claim. This is no different than in today's spec,
> and such HTLCs will just be stranded on-chain until chain fees
> decrease, at which point there is a race between the success and
> timeout spends.
>
> There seems to be no way around this; if you want to claim an HTLC
> on-chain, you need to put the preimage on-chain. And when the HTLC
> first reaches you, you have no way of predicting the future chain fee.
> With a large number of uneconomical HTLCs in play, the total BTC
> exposure could still be very large, so you might want to limit this
> somewhat.
>
> * Note that as long as the sum of HTLCs exceeds the dust limit, one
> could manifest the output on the transaction.
>
> ## The good news
> With an aggregated HTLC output, the number of HTLCs would no longer
> impact the commitment transaction size while the channel is open and
> operational.
>
> The marginal cost of claiming an HTLC with a preimage on-chain would
> be much lower; no new inputs or outputs, only a linear increase in the
> witness size. With a covenant primitive available, the extra footprint
> of the timeout and success transactions would no longer exist.
>
> Claiming timed out HTLCs could still be made close to constant size
> (no preimage to present), so no additional on-chain cost with more
> HTLCs.
>
> ## The bad news
> The most obvious problem is that we would need a new covenant
> primitive on L1 (see below). However, I think it could be beneficial
> to start exploring these ideas now in order to guide the L1 effort
> towards something we could utilize to its fullest on L2.
>
> As mentioned, even with a functioning covenant, we don't escape the
> fact that a preimage needs to go on-chain, pricing out HTLCs at
> certain fee rates. This is analogous to the dust exposure problem
> discussed in [6], and makes some sort of limit still required.
>
> ### Open question
> With PTLCs, could one create a compact proof showing that you know the
> preimage for m-of-n of the satoshis in the output? (some sort of
> threshold signature).
>
> If we could do this we would be able to remove the slot jamming issue
> entirely; any number of active PTLCs would not change the on-chain
> cost of claiming them.
>
> ## Covenant primitives
> A recursive covenant is needed to achieve this. Something like OP_CTV
> and OP_APO seems insufficient, since the number of ways the set of
> HTLCs could be claimed would cause combinatorial blowup in the number
> of possible spending transactions.
>
> Personally, I've found the simple yet powerful properties of
> OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> particularly interesting for the use case, but I'm certain many of the
> other proposals could achieve the same thing. More direct inspection
> like you get from a proposal like OP_TX[9] would also most likely have
> the building blocks needed.
>
> ### Proof-of-concept
> I've implemented a rough demo** of spending an HTLC output that pays
> to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> is to commit to all active HTLCs in a merkle tree, and have the
> spender provide merkle proofs for the HTLCs to claim, claiming the sum
> into a new output. The remainder goes back into a new output with the
> claimed HTLCs removed from the merkle tree.
>
> An interesting trick one can do when creating the merkle tree, is
> sorting the HTLCs by expiry. This means that one in the timeout case
> claim a subtree of HTLCs using a single merkle proof (and RBF this
> batched timeout claim as more and more HTLCs expire) reducing the
> timeout case to constant size witness (or rather logarithmic in the
> total number of HTLCs).
>
> **Consider it an experiment, as it is missing a lot before it could be
> usable in any real commitment setting.
>
>
> [1] https://bitcoinops.org/en/topics/channel-jamming-attacks/#htlc-jamming-attack
> [2] https://github.com/lightning/bolts/issues/845
> [3] https://github.com/lightning/bolts/blob/aad959a297ff66946effb165518143be15777dd6/02-peer-protocol.md#rationale-7
> [4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021182.html
> [5] https://github.com/halseth/tapsim/blob/b07f29804cf32dce0168ab5bb40558cbb18f2e76/examples/matt/claimpool/script.txt
> [6] https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html
> [7] https://github.com/lightning/bolts/issues/845#issuecomment-937736734
> [8] https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/03-transactions.md?plain=1#L333
> [9] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020450.html
