Return-Path: <kanzure@gmail.com>
Received: from smtp1.linuxfoundation.org (smtp1.linux-foundation.org
	[172.17.192.35])
	by mail.linuxfoundation.org (Postfix) with ESMTPS id 65E211096
	for <bitcoin-dev@lists.linuxfoundation.org>;
	Thu,  8 Feb 2018 17:49:28 +0000 (UTC)
X-Greylist: whitelisted by SQLgrey-1.7.6
Received: from mail-ua0-f181.google.com (mail-ua0-f181.google.com
	[209.85.217.181])
	by smtp1.linuxfoundation.org (Postfix) with ESMTPS id AFC7B3C4
	for <bitcoin-dev@lists.linuxfoundation.org>;
	Thu,  8 Feb 2018 17:49:25 +0000 (UTC)
Received: by mail-ua0-f181.google.com with SMTP id x4so3421198uaj.11
	for <bitcoin-dev@lists.linuxfoundation.org>;
	Thu, 08 Feb 2018 09:49:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to; 
	bh=cgr5t7IwnEezU6G4/9kdq/siWID6IsQAU5DsqqIDxTw=;
	b=oqL+Yg4IWRcJXli0C2EyFaOVbcigGGsbHzEiY2tJLapFGem2AhvZhCdE2EhUALDD6B
	NOvt9eNXww3T3uPa09Ti0GQBvmgf89qGKxV24VqWlleItOn8u8C8R3yUNe8sxM2cWCdS
	/ua+FvhosfYrhKvHYH1TJ2CqF2oal5Z1HQmrog22iaRvPI+PcbrHaNU283mWIS4Z2a4P
	OWhuEpXy2yvHqKDwGgsLJ6zFfIK1KXEP9Sjy2P80rEIuogD38HnX7WDwT++1UbbCkwzD
	M1xlx785sYr/FyzNG3W4YCa0ONIY1fM1wx3QazjNiNBF0MpeBlxh2IEEbsLnn6wN15wX
	391A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20161025;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to;
	bh=cgr5t7IwnEezU6G4/9kdq/siWID6IsQAU5DsqqIDxTw=;
	b=IJ5SCsZSg+1M1Xa3ULemXsPQyga96uJjfwKWaG9kYnx2/UjMlQppCsMy6YdHDbVeei
	cwOyeQGMqdFu3WYqptFL9VldHrmtqOUvbbnb2JMcvm+VNhDmgdgsB4kfZcpDxmW+xtDp
	d0WPIP35Wj2S4ABwzLP2Oxulz9oUCCm01EgHTWbgnzrg64pBk5d7gDDkCgt8m77W4lEC
	LhY6xlquGq85GA3AQUpm0lpfrY3kJ8Q4DrPwKjDBDj59AKooXAaXIvyPd/kcwZ4t45Qi
	gZlDGKUUsMB4Z1vcuC3BV+z/0vFu/E3ZUgMJeiznISNqHGRLgwKCxSZKWcojlRS8hGYe
	SPbA==
X-Gm-Message-State: APf1xPDaoQSRHZ0rdVfp96YXClcgbZqssBp3OM65glEsfGKH/UTBaKH/
	au4mQvfvbQDGIpfl5NAw445yHHZSBH11LHSQ8mr1Kw==
X-Google-Smtp-Source: AH8x2262XacxQCV0ONF2QiPLh2f3nHtERZHHKkWnkG+oxhF33IqSWmwKOF3E9JUO/Pi6Vqy8IZ0UdRnoAhg7qhxePp0=
X-Received: by 10.176.22.41 with SMTP id k38mr3257uae.132.1518112164413; Thu,
	08 Feb 2018 09:49:24 -0800 (PST)
MIME-Version: 1.0
Received: by 10.176.37.4 with HTTP; Thu, 8 Feb 2018 09:49:23 -0800 (PST)
In-Reply-To: <CAO3Pvs-rH25U_Gd5HZ=Zh8-yLvEW09c0RSnNzEB4kW15C8GSVg@mail.gmail.com>
References: <CAO3Pvs-rH25U_Gd5HZ=Zh8-yLvEW09c0RSnNzEB4kW15C8GSVg@mail.gmail.com>
From: Bryan Bishop <kanzure@gmail.com>
Date: Thu, 8 Feb 2018 11:49:23 -0600
Message-ID: <CABaSBawRVmzSCMAjtV785pvZT2A7M-Picw2AuugiTefrC-UqdQ@mail.gmail.com>
To: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>,
	Bryan Bishop <kanzure@gmail.com>
Content-Type: multipart/alternative; boundary="001a11463b3e4a07db0564b708a8"
X-Spam-Status: No, score=-2.0 required=5.0 tests=BAYES_00,DKIM_SIGNED,
	DKIM_VALID, DKIM_VALID_AU, FREEMAIL_FROM, HTML_MESSAGE,
	RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	smtp1.linux-foundation.org
X-Mailman-Approved-At: Thu, 08 Feb 2018 17:51:51 +0000
Subject: [bitcoin-dev] Fwd: [Lightning-dev] AMP: Atomic Multi-Path Payments
	over Lightning
X-BeenThere: bitcoin-dev@lists.linuxfoundation.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: Bitcoin Protocol Discussion <bitcoin-dev.lists.linuxfoundation.org>
List-Unsubscribe: <https://lists.linuxfoundation.org/mailman/options/bitcoin-dev>,
	<mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=unsubscribe>
List-Archive: <http://lists.linuxfoundation.org/pipermail/bitcoin-dev/>
List-Post: <mailto:bitcoin-dev@lists.linuxfoundation.org>
List-Help: <mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=help>
List-Subscribe: <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>,
	<mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=subscribe>
X-List-Received-Date: Thu, 08 Feb 2018 17:49:28 -0000

--001a11463b3e4a07db0564b708a8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

---------- Forwarded message ----------
From: Olaoluwa Osuntokun <laolu32@gmail.com>
Date: Mon, Feb 5, 2018 at 11:26 PM
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
To: lightning-dev <lightning-dev@lists.linuxfoundation.org>


Hi Y'all,

A common question I've seen concerning Lightning is: "I have five $2
channels, is it possible for me to *atomically* send $6 to fulfill a
payment?". The answer to this question is "yes", provided that the receiver
waits to pull all HTLC's until the sum matches their invoice. Typically, one
assumes that the receiver will supply a payment hash, and the sender will
re-use the payment hash for all streams. This has the downside of payment
hash re-use across *multiple* payments (which can already easily be
correlated), and also has a failure mode where if the sender fails to
actually satisfy all the payment flows, then the receiver can still just
pull the monies (and possibly not disperse a service, or w/e).

Conner Fromknecht and I have come up with a way to achieve this over
Lightning while (1) not re-using any payment hashes across all payment
flows, and (2) adding a *strong* guarantee that the receiver won't be paid
until *all* partial payment flows are extended. We call this scheme AMP
(Atomic Multi-path Payments). It can be experimented with on Lightning
*today* with the addition of a new feature bit to gate this new
feature. The beauty of the scheme is that it requires no fundamental changes
to the protocol as is now, as the negotiation is strictly *end-to-end*
between sender and receiver.

TL;DR: we repurpose some unused space in the per-hop payload of the
onion blob to signal our protocol (and deliver some protocol-specific data),
then use additive secret sharing to ensure that the receiver can't pull the
payment until they have enough shares to reconstruct the original pre-image.


Protocol Goals
==============
1. Atomicity: The logical transaction should either succeed or fail in
entirety. Naturally, this implies that the receiver should not be able to
settle *any* of the partial payments until all of them have arrived.

2. Avoid Payment Hash Reuse: The payment preimages validated by the
consensus layer should be distinct for each partial payment.  Primarily,
this helps avoid correlation of the partial payments, and ensures that
malicious intermediaries straddling partial payments cannot steal funds.

3. Order Invariance: The protocol should be forgiving to the order in which
partial payments arrive at the destination, adding robustness in the face of
delays or routing failures.

4. Non-interactive Setup: It should be possible for the sender to perform an
AMP without directly coordinating with the receiving node. Predominantly,
this means that the *sender* is able to determine the number of partial
payments to use for a particular AMP, which makes sense since they will be
the one fronting the fees for the cost of this parameter. Plus, we can
always turn a non-interactive protocol into an interactive one for the
purposes of invoicing.


Protocol Benefits
=================

Sending payments predominantly over an AMP-like protocol has several
clear benefits:

  - Eliminates the constraint that a single path from sender to receiver
    must exist with sufficient directional capacity. This reduces the
    pressure to have larger channels in order to support larger payment
    flows. As a result, the payment graph can be very diffuse, without
    sacrificing payment utility.

  - Reduces strain from larger payments on individual paths, and allows
    liquidity imbalances to be more diffuse. We expect this to have a
    non-negligible impact on channel longevity. This is due to the fact that
    with usage of AMP, payment flows are typically *smaller*, meaning that
    each payment will unbalance a channel to a lesser degree than
    with one giant flow.

  - Potential fee savings for larger payments, contingent on there being a
    super-linear component to routed fees. It's possible that with
    modifications to the fee schedule, it's actually *cheaper* to send
    payments over multiple flows rather than one giant flow.

  - Allows for logical payments larger than the current maximum value of an
    individual payment. At the moment we have a (temporary) limit on the max
    payment size. With AMP, this can be sidestepped, as each flow can be up
    to the max size, with the sum of all flows exceeding the max.

  - Given sufficient path diversity, AMPs may improve the privacy of LN.
    Intermediaries are now unaware of how much of the total payment they are
    forwarding, or even if they are forwarding a partial payment at all.

  - Using smaller payments increases the set of possible paths a partial
    payment could have taken, which reduces the effectiveness of static
    analysis techniques involving channel capacities and the plaintext
    values being forwarded.


Protocol Overview
=================
This design can be seen as a generalization of the single, non-interactive
payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode
extra data for the receiver. In that design, the extra data includes a
payment preimage that the receiver can use to settle back the payment. EOBs
and some method of parsing them are really the only requirement for this
protocol to work. Thus, only the sender and receiver need to implement this
feature in order for it to function, which can be announced using a feature
bit.

First, let's review the current format of the per-hop payload for each node
described in BOLT-0004.

┌───────────────┬───────────────────┬────────────────┬───────────────────────┬─────────────────┬─────────────────┐
│Realm (1 byte) │Next Addr (8 bytes)│Amount (8 bytes)│Outgoing CLTV (4 bytes)│Unused (12 bytes)│ HMAC (32 bytes) │
└───────────────┴───────────────────┴────────────────┴───────────────────────┴─────────────────┴─────────────────┘
                                              ┌──────────────────┐
                                              │ 65 Bytes Per Hop │
                                              └──────────────────┘

Currently, *each* node gets a 65-byte payload. We use this payload to give
each node instructions on *how* to forward a payment. We tell each node: the
realm (or chain to forward on), the next node to forward to, the amount to
forward (this is where fees are extracted by forwarding out less than in),
the outgoing CLTV (allows verification that the prior node didn't modify any
values), and finally an HMAC over the entire thing.

Two important points:
  1. We have 12 bytes for each hop that are currently unpurposed and can be
  used by application protocols to signal new interpretation of bytes and
  also deliver additional encrypted+authenticated data to *each* hop.

  2. The protocol currently has a hard limit of 20 hops. With this feature
  we ensure that the packet stays fixed-size during processing in order to
  avoid leaking positional information. Typically most payments won't use
  all 20 hops; as a result, we can use the remaining hops to stuff in *even
  more* data.
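As a concrete illustration of the 65-byte layout above, here is a minimal
Python sketch that packs the six fields into a single per-hop payload. The
function name and argument names are ours for illustration; this is not the
wire-serialization code of any real implementation.

```python
# Illustrative packing of the 65-byte per-hop payload laid out above.
# Field order and sizes follow the table; everything else is a sketch.
import struct

def pack_per_hop(realm: int, next_addr: bytes, amount: int,
                 outgoing_cltv: int, unused: bytes, hmac: bytes) -> bytes:
    """Pack one per-hop payload: 1 + 8 + 8 + 4 + 12 + 32 = 65 bytes."""
    assert len(next_addr) == 8 and len(unused) == 12 and len(hmac) == 32
    payload = struct.pack("!B8sQI12s32s", realm, next_addr,
                          amount, outgoing_cltv, unused, hmac)
    assert len(payload) == 65
    return payload
```

The 12-byte "Unused" field is the space the AMP signalling described later
repurposes.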


Protocol Description
====================
The solution we propose is Atomic Multi-path Payments (AMPs). At a high
level, this leverages EOBs to deliver additive shares of a base preimage,
from which the payment preimages of partial payments can be derived. The
receiver can only construct this value after having received all of the
partial payments, satisfying the atomicity constraint.

The basic protocol:

Primitives
==========
Let H be a CRH (collision-resistant hash) function.
Let || denote concatenation.
Let ^ denote xor.


Sender Requirements
===================
The parameters to the sending procedure are a random identifier ID, the
number of partial payments n, and the total payment value V. Assume the
sender has some way of dividing V such that V = v_1 + … + v_n.

To begin, the sender builds the base preimage BP, from which n partial
preimages will be derived. Next, the sender samples n additive shares s_1,
…, s_n, and takes their xor to compute BP = s_1 ^ … ^ s_n.

With the base preimage created, the sender now moves on to constructing the
n partial payments. For each i in [1,n], the sender deterministically
computes the partial preimage r_i = H(BP || i), by concatenating the
sequence number i to the base preimage and hashing the result. Afterwards,
it applies H again to determine the payment hash to use in the i'th partial
payment as h_i = H(r_i). Note that with this preimage derivation
scheme, once the payments are pulled each pre-image is distinct and
indistinguishable from any other.

With all of the pieces in place, the sender initiates the i'th payment by
constructing a route to the destination with value v_i and payment hash h_i.
The tuple (ID, n, s_i) is included in the EOB to be opened by the receiver.
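The sender-side derivation above can be sketched in a few lines of Python.
This is a hedged illustration, not an implementation: we assume SHA-256
stands in for H, 32-byte shares, and a 4-byte big-endian encoding of the
index i; all helper names are ours.

```python
# Sketch of the sender-side AMP derivation: sample n additive shares,
# xor them into the base preimage BP, then derive per-flow preimages
# r_i = H(BP || i) and payment hashes h_i = H(r_i).
import os
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for the CRH function H (here: SHA-256)."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_partial_payments(payment_id: bytes, n: int, values):
    """Return (BP, flows); each flow carries (v_i, h_i, EOB tuple)."""
    assert len(values) == n
    shares = [os.urandom(32) for _ in range(n)]
    bp = shares[0]
    for s in shares[1:]:
        bp = xor(bp, s)                       # BP = s_1 ^ ... ^ s_n
    flows = []
    for i in range(1, n + 1):
        r_i = H(bp + i.to_bytes(4, "big"))    # partial preimage r_i
        h_i = H(r_i)                          # payment hash for flow i
        eob = (payment_id, n, shares[i - 1])  # (ID, n, s_i) for the EOB
        flows.append((values[i - 1], h_i, eob))
    return bp, flows
```

Each flow is then routed independently with value v_i and hash h_i, exactly
as a normal single payment would be.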

In order to include the three-tuple within the per-hop payload for the final
destination, we repurpose the _first_ byte of the unused padding bytes in
the payload to signal version 0x01 of the AMP protocol (note this is a PoC
outline; we would need to standardize signalling of these 12 bytes to
support other protocols). Typically this byte isn't set, so its presence
means that we're (1) using AMP, and (2) the receiver should consume the
_next_ hop as well. So if the payment length is actually 5, the sender tacks
on an additional dummy 6th hop, encrypted with the _same_ shared secret for
that hop to deliver the e2e encrypted data.

Note, the sender can retry partial payments just as they would normal
payments, since they are order invariant, and would be indistinguishable
from regular payments to intermediaries in the network.


Receiver Requirements
=====================

Upon the arrival of each partial payment, the receiver will iteratively
reconstruct BP, and do some bookkeeping to figure out when to settle the
partial payments. During this reconstruction process, the receiver does not
need to be aware of the order in which the payments were sent, and in fact
nothing about the incoming partial payments reveals this information to the
receiver, though this can be learned after reconstructing BP.

Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but
unknown index of the incoming partial payment. The receiver has access to
persistent key-value store DB that maps ID to (n, c*, BP*), where c*
represents the number of partial payments received, BP* is the sum of the
received additive shares, and the superscript * denotes that the value is
being updated iteratively. c* and BP* both have initial values of 0.

In the basic protocol, the receiver caches the first n it sees, and
verifies that all incoming partial payments have the same n. The receiver
should reject all partial payments if any EOB deviates. Next, we update
our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the
reconstruction by one step.

If c* + 1 < n, there are still more packets in flight, so we sit tight.
Otherwise, the receiver assumes all partial payments have arrived, and can
begin settling them back. Using the base preimage BP = BP* ^ s_i from our
final iteration, the receiver can re-derive all n partial preimages and
payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through
knowledge of n and BP.

Finally, the receiver settles back any outstanding payments that include
payment hash h_i using the partial preimage r_i. Each r_i will appear random
due to the nature of H, as will its corresponding h_i. Thus, each partial
payment should appear uncorrelated, and does not reveal that it is part of
an AMP nor the number of partial payments used.
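The receiver's iterative bookkeeping can be sketched as follows. This is an
illustrative Python sketch under the same assumptions as before (SHA-256 for
H, 32-byte shares, 4-byte index encoding); an in-memory dict stands in for
the persistent key-value store DB.

```python
# Sketch of the receiver's per-payment bookkeeping: fold each incoming
# share into the running (n, c*, BP*) record, and once c* == n, re-derive
# every (r_i, h_i) pair from the reconstructed base preimage.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

DB = {}  # ID -> (n, c*, BP*); c* and BP* start at 0

def on_partial_payment(payment_id: bytes, n: int, s_i: bytes):
    """Process one decoded EOB (ID, n, s_i).

    Returns the list of (r_i, h_i) pairs once all n shares have
    arrived, or None while payments are still in flight."""
    stored_n, c, bp = DB.get(payment_id, (n, 0, bytes(32)))
    if stored_n != n:
        raise ValueError("n deviates across partial payments; reject all")
    c, bp = c + 1, xor(bp, s_i)      # DB[ID] = (n, c* + 1, BP* ^ s_i)
    DB[payment_id] = (n, c, bp)
    if c < n:
        return None                  # more packets still in flight
    # BP recovered: derive all partial preimages and payment hashes.
    pairs = []
    for i in range(1, n + 1):
        r_i = H(bp + i.to_bytes(4, "big"))
        pairs.append((r_i, H(r_i)))
    return pairs
```

Note that the receiver learns nothing about arrival order from the shares
themselves, matching the order-invariance goal.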

Non-interactive to Interactive AMPs
===================================

The sender simply receives an ID and amount from the receiver in an invoice
before initiating the protocol. The receiver should only consider the
invoice settled if the total amount received in partial payments containing
ID matches or exceeds the amount specified in the invoice. With this
variant, the receiver is able to map all partial payments to a pre-generated
invoice statement.


Additive Shares vs Threshold-Shares
===================================

The biggest reason to use additive shares seems to be atomicity. Threshold
shares open the door to some partial payments being settled, even if others
are left in flight. We haven't yet come up with a good reason for using
threshold schemes, but there seem to be plenty against it.

Reconstruction of additive shares can be done iteratively, and is a win for
the storage and computation requirements on the receiving end. If the sender
decides to use fewer than n partial payments, the remaining shares could be
included in the EOB of the final partial payment to allow the receiver to
reconstruct sooner. The receiver could also optimistically do partial
reconstruction on this last aggregate value.


Adaptive AMPs
=============

The sender may not always be aware of how many partial payments they wish to
send at the time of the first partial payment, at which point the simplified
protocol would require n to be chosen. To accommodate, the above scheme can
be adapted to handle a dynamically chosen n by iteratively constructing the
shared secrets as follows.

Starting with a base preimage BP, the key trick is that the sender remembers
the difference between the base preimage and the sum of all partial
preimages used so far. The relation is described using the following
equations:

    X_0 = 0
    X_i = X_{i-1} ^ s_i
    X_n = BP ^ X_{n-1}

where if n=1, X_1 = BP, implying that this is in fact a generalization of
the single, non-interactive payment scheme mentioned above. For i=1, ...,
n-1, the sender sends s_i in the EOB, and X_n for the n-th share.
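The recurrence above can be sketched directly. This is an illustrative
Python sketch (32-byte shares, helper names ours): the sender tracks the
running xor X of the shares sent so far, and the final share X_n = BP ^
X_{n-1} closes the telescoping sum so that all shares xor back to BP.

```python
# Sketch of the adaptive-share recurrence:
#   X_0 = 0, X_i = X_{i-1} ^ s_i, final share X_n = BP ^ X_{n-1}.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def adaptive_shares(bp: bytes, n: int):
    """Produce n shares without having to fix n before starting."""
    x = bytes(32)                    # X_0 = 0
    shares = []
    for _ in range(n - 1):
        s = os.urandom(32)           # fresh share for this partial payment
        x = xor(x, s)                # X_i = X_{i-1} ^ s_i
        shares.append(s)
    shares.append(xor(bp, x))        # final share: X_n = BP ^ X_{n-1}
    return shares
```

With n=1 the loop never runs and the single share is BP itself, matching
the degenerate case noted above.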

Iteratively reconstructing s_1 ^ … ^ s_{n-1} ^ X_n = BP allows the
receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i). Lastly,
the final number of partial payments n could be signaled in the final EOB,
which would also serve as a sentinel value for signaling completion. In
response to DOS vectors stemming from unknown values of n, implementations
could consider advertising a maximum value for n, or adopting some sort of
framing pattern for conveying that more partial payments are on the way.

We can further modify our usage of the per-hop payloads to send (H(BP),
s_i) to consume most of the EOB sent from sender to receiver. In this
scenario, we'd repurpose the 11 bytes *after* our signalling byte in the
unused byte section to store the payment ID (which should be unique for
each payment). In the case of a non-interactive payment, this will be
unused, while for interactive payments, this will be the ID within the
invoice. To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash
of the BP, and 32 bytes for the partial pre-image share, leaving an unused
byte in the payload.


Cross-Chain AMPs
================

AMPs can be used to pay a receiver in multiple currencies atomically...which
is pretty cool :D


Open Research Questions
=======================

The above is a protocol sketch to achieve atomic multi-path payments over
Lightning. The details concerning onion blob usage serve as a template that
future protocols can draw upon in order to deliver additional data to *any*
hop in the route. However, there are still a few open questions before
something like this can be feasibly deployed.

1. How does the sender decide how many chunked payments to send, and the
size of each payment?

  - Upon closer examination, this seems to overlap with the task of
    congestion control within TCP. The sender may be able to utilize
    similarly inspired heuristics to gauge: (1) how large the initial
    payment should be and (2) how many subsequent payments may be required.
    Note that if the first payment succeeds, then the exchange is over in a
    single round.

2. How can AMP and HORNET be composed?

  - If we eventually integrate HORNET, then a distinct communications
    session can be established to allow the sender+receiver to exchange
    up-to-date partial payment information. This may allow the sender to
    more accurately size each partial payment.

3. Can the sender's initial strategy be governed by an instance of the
Push-relabel max flow algo?

4. How does this mesh with the current max HTLC limit on a commitment?

   - ATM, we have a max limit on the number of active HTLC's on a particular
     commitment transaction. We do this, as otherwise it's possible that the
     transaction is too large, and exceeds standardness w.r.t. transaction
     size. In a world where most payments use an AMP-like protocol, at any
     given instant there will be several pending HTLC's on commitments
     network-wide.

     This may incentivize nodes to open more channels in order to support
     the increased commitment space utilization.


Conclusion
==========

We've presented a design outline of how to integrate atomic multi-path
payments (AMP) into Lightning. The existence of such a construct allows a
sender to atomically split a payment flow amongst several individual payment
flows. As a result, larger channels aren't as important, as it's possible to
utilize one's total outbound payment bandwidth across several channels.
Additionally, in order to support the increased load, internal routing nodes
are incentivized to have more active channels. The existence of AMP-like
payments may also increase the longevity of channels, as there'll be
smaller, more numerous payment flows, making it unlikely that a single
payment unbalances a channel entirely. We've also shown how one can utilize
the current onion packet format to deliver additional data from a sender to
receiver that's still e2e authenticated.


-- Conner && Laolu


_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev




-- 
- Bryan
http://heybryan.org/
1 512 203 0507

--001a11463b3e4a07db0564b708a8
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><br><div class=3D"gmail_quote">---------- Forwarded messag=
e ----------<br>From: <b class=3D"gmail_sendername">Olaoluwa Osuntokun</b> =
&lt;laolu32@gmail.com&gt;
Date: Mon, Feb 5, 2018 at 11:26 PM
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
To: lightning-dev &lt;lightning-dev@lists.linuxfoundation.org&gt;


Hi Y'all,

A common question I've seen concerning Lightning is: "I have five $2
channels, is it possible for me to *atomically* send $6 to fulfill a
payment?". The answer is "yes", provided that the receiver waits to pull
all HTLC's until the sum matches their invoice. Typically, one assumes that
the receiver will supply a payment hash, and the sender will re-use that
payment hash for all streams. This has the downside of payment hash re-use
across *multiple* payments (which can already easily be correlated), and
also has a failure mode where if the sender fails to actually satisfy all
the payment flows, then the receiver can still just pull the monies (and
possibly not dispense a service, or w/e).

Conner Fromknecht and I have come up with a way to achieve this over
Lightning while (1) not re-using any payment hashes across all payment
flows, and (2) adding a *strong* guarantee that the receiver won't be paid
until *all* partial payment flows are extended. We call this scheme AMP
(Atomic Multi-path Payments). It can be experimented with on Lightning
*today* with the addition of a new feature bit to gate this new feature.
The beauty of the scheme is that it requires no fundamental changes to the
protocol as is now, as the negotiation is strictly *end-to-end* between
sender and receiver.

TL;DR: we repurpose some unused space in the per-hop payload of the onion
blob to signal our protocol (and deliver some protocol-specific data), then
use additive secret sharing to ensure that the receiver can't pull the
payment until they have enough shares to reconstruct the original pre-image.


Protocol Goals
==============
1. Atomicity: The logical transaction should either succeed or fail in its
entirety. Naturally, this implies that the receiver should not be able to
settle *any* of the partial payments until all of them have arrived.

2. Avoid Payment Hash Reuse: The payment preimages validated by the
consensus layer should be distinct for each partial payment. Primarily,
this helps avoid correlation of the partial payments, and ensures that
malicious intermediaries straddling partial payments cannot steal funds.

3. Order Invariance: The protocol should be forgiving to the order in which
partial payments arrive at the destination, adding robustness in the face
of delays or routing failures.

4. Non-interactive Setup: It should be possible for the sender to perform
an AMP without directly coordinating with the receiving node.
Predominantly, this means that the *sender* is able to determine the number
of partial payments to use for a particular AMP, which makes sense since
they will be the one fronting the fees for the cost of this parameter.
Plus, we can always turn a non-interactive protocol into an interactive one
for the purposes of invoicing.

Protocol Benefits
=================

Sending payments predominantly over an AMP-like protocol has several clear
benefits:

  - Eliminates the constraint that a single path from sender to receiver
    exist with sufficient directional capacity. This reduces the pressure
    to have larger channels in order to support larger payment flows. As a
    result, the payment graph can be very diffuse, without sacrificing
    payment utility.

  - Reduces strain from larger payments on individual paths, and allows
    liquidity imbalances to be more diffuse. We expect this to have a
    non-negligible impact on channel longevity. This is due to the fact
    that with usage of AMP, payment flows are typically *smaller*, meaning
    that each payment will unbalance a channel to a lesser degree than one
    giant flow.

  - Potential fee savings for larger payments, contingent on there being a
    super-linear component to routed fees. It's possible that with
    modifications to the fee schedule, it's actually *cheaper* to send
    payments over multiple flows rather than one giant flow.

  - Allows for logical payments larger than the current maximum value of an
    individual payment. Atm we have a (temporary) limit on the max payment
    size. With AMP, this can be side stepped as each flow can be up to the
    max size, with the sum of all flows exceeding the max.

  - Given sufficient path diversity, AMPs may improve the privacy of LN.
    Intermediaries are now unaware of how much of the total payment they
    are forwarding, or even if they are forwarding a partial payment at
    all.

  - Using smaller payments increases the set of possible paths a partial
    payment could have taken, which reduces the effectiveness of static
    analysis techniques involving channel capacities and the plaintext
    values being forwarded.

Protocol Overview
=================
This design can be seen as a generalization of the single, non-interactive
payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode
extra data for the receiver. In that design, the extra data includes a
payment preimage that the receiver can use to settle back the payment. EOBs
and some method of parsing them are really the only requirement for this
protocol to work. Thus, only the sender and receiver need to implement this
feature in order for it to function, which can be announced using a feature
bit.

First, let's review the current format of the per-hop payload for each node
described in BOLT-0004.

┌───────────────┬───────────────────┬────────────────┬───────────────────────┬─────────────────┬─────────────────┐
│Realm (1 byte) │Next Addr (8 bytes)│Amount (8 bytes)│Outgoing CLTV (4 bytes)│Unused (12 bytes)│ HMAC (32 bytes) │
└───────────────┴───────────────────┴────────────────┴───────────────────────┴─────────────────┴─────────────────┘
■────────────────────────────────────────────────────────────────────────────────────────────────────────────────■
                                                ┌─────────────────┐
                                                │65 Bytes Per Hop │
                                                └─────────────────┘

Currently, *each* node gets a 65-byte payload. We use this payload to give
each node instructions on *how* to forward a payment. We tell each node:
the realm (or chain to forward on), the next node to forward to, the amount
to forward (this is where fees are extracted by forwarding out less than
in), the outgoing CLTV (allows verification that the prior node didn't
modify any values), and finally an HMAC over the entire thing.

Two important points:
  1. We have 12 bytes for each hop that are currently unpurposed and can be
  used by application protocols to signal new interpretation of bytes and
  also deliver additional encrypted+authenticated data to *each* hop.

  2. The protocol currently has a hard limit of 20 hops. With this feature
  we ensure that the packet stays fixed-sized during processing in order to
  avoid leaking positional information. Typically most payments won't use
  all 20 hops; as a result, we can use the remaining hops to stuff in *even
  more* data.


Protocol Description
====================
The solution we propose is Atomic Multi-path Payments (AMPs). At a high
level, this leverages EOBs to deliver additive shares of a base preimage,
from which the payment preimages of partial payments can be derived. The
receiver can only construct this value after having received all of the
partial payments, satisfying the atomicity constraint.

The basic protocol:

Primitives
==========
Let H be a CRH function.
Let || denote concatenation.
Let ^ denote xor.


Sender Requirements
===================
The parameters to the sending procedure are a random identifier ID, the
number of partial payments n, and the total payment value V. Assume the
sender has some way of dividing V such that V = v_1 + … + v_n.

To begin, the sender builds the base preimage BP, from which n partial
preimages will be derived. Next, the sender samples n additive shares s_1,
…, s_n, and takes their xor-sum to compute BP = s_1 ^ … ^ s_n.

With the base preimage created, the sender now moves on to constructing the
n partial payments. For each i in [1,n], the sender deterministically
computes the partial preimage r_i = H(BP || i), by concatenating the
sequence number i to the base preimage and hashing the result. Afterwards,
it applies H again to determine the payment hash to use in the i'th partial
payment: h_i = H(r_i). Note that with this preimage derivation scheme, once
the payments are pulled each pre-image is distinct and indistinguishable
from any other.

With all of the pieces in place, the sender initiates the i'th payment by
constructing a route to the destination with value v_i and payment hash
h_i. The tuple (ID, n, s_i) is included in the EOB to be opened by the
receiver.
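
As a sketch, the sender-side derivation above can be exercised in a few
lines of Python, with SHA-256 standing in for the CRH H and the index i
appended as a single byte (all helper names here are illustrative, not part
of any spec):

```python
import os
import hashlib

def H(data: bytes) -> bytes:
    # The CRH function; instantiated as SHA-256 purely for illustration.
    return hashlib.sha256(data).digest()

def build_partial_payments(n: int, values: list):
    # Sample n additive shares; their xor-sum defines the base preimage BP.
    shares = [os.urandom(32) for _ in range(n)]
    bp = bytes(32)
    for s in shares:
        bp = bytes(a ^ b for a, b in zip(bp, s))

    payment_id = os.urandom(32)  # the random identifier ID
    partials = []
    for i in range(1, n + 1):
        r_i = H(bp + bytes([i]))  # partial preimage r_i = H(BP || i)
        h_i = H(r_i)              # payment hash h_i = H(r_i)
        eob = (payment_id, n, shares[i - 1])  # (ID, n, s_i) for the EOB
        partials.append((values[i - 1], h_i, eob))
    return bp, partials
```

Each returned tuple is one partial payment: its value v_i, the payment hash
h_i to route with, and the (ID, n, s_i) triple destined for the receiver's
EOB.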

In order to include the three-tuple within the per-hop payload for the
final destination, we repurpose the _first_ byte of the un-used padding
bytes in the payload to signal version 0x01 of the AMP protocol (note this
is a PoC outline, we would need to standardize signalling of these 12 bytes
to support other protocols). Typically this byte isn't set, so its presence
means that we're (1) using AMP, and (2) the receiver should consume the
_next_ hop as well. So if the payment length is actually 5, the sender
tacks on an additional dummy 6th hop, encrypted with the _same_ shared
secret for that hop, to deliver the e2e encrypted data.

Note, the sender can retry partial payments just as they would normal
payments, since they are order invariant, and would be indistinguishable
from regular payments to intermediaries in the network.


Receiver Requirements
=====================

Upon the arrival of each partial payment, the receiver will iteratively
reconstruct BP, and do some bookkeeping to figure out when to settle the
partial payments. During this reconstruction process, the receiver does not
need to be aware of the order in which the payments were sent, and in fact
nothing about the incoming partial payments reveals this information to the
receiver, though this can be learned after reconstructing BP.

Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but
unknown index of the incoming partial payment. The receiver has access to a
persistent key-value store DB that maps ID to (n, c*, BP*), where c*
represents the number of partial payments received, BP* is the sum of the
received additive shares, and the superscript * denotes that the value is
being updated iteratively. c* and BP* both have initial values of 0.

In the basic protocol, the receiver caches the first n it sees, and
verifies that all incoming partial payments have the same n. The receiver
should reject all partial payments if any EOB deviates. Next, we update our
persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the
reconstruction by one step.

If c* + 1 < n, there are still more packets in flight, so we sit tight.
Otherwise, the receiver assumes all partial payments have arrived, and can
begin settling them back. Using the base preimage BP = BP* ^ s_i from our
final iteration, the receiver can re-derive all n partial preimages and
payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through
knowledge of n and BP.

Finally, the receiver settles back any outstanding payments that include
payment hash h_i using the partial preimage r_i. Each r_i will appear
random due to the nature of H, as will its corresponding h_i. Thus, each
partial payment should appear uncorrelated, and does not reveal that it is
part of an AMP nor the number of partial payments used.
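
The receiver's bookkeeping can be sketched similarly, with an in-memory
dict standing in for the persistent DB and SHA-256 again standing in for H
(names are illustrative only):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# DB maps ID -> (n, c*, BP*): expected count, received count, running xor.
DB = {}

def on_partial_payment(payment_id: bytes, n: int, s_i: bytes):
    # Cache the first n seen; reject the whole AMP if any EOB deviates.
    n_seen, c, bp_acc = DB.get(payment_id, (n, 0, bytes(32)))
    if n_seen != n:
        raise ValueError("EOB deviates: reject all partial payments")
    c, bp_acc = c + 1, xor(bp_acc, s_i)
    DB[payment_id] = (n_seen, c, bp_acc)
    if c < n_seen:
        return None  # more partial payments still in flight; sit tight
    # All shares have arrived: bp_acc is now BP. Derive every (r_i, h_i)
    # pair so the outstanding HTLCs can be settled back.
    bp = bp_acc
    return [(H(bp + bytes([i])), H(H(bp + bytes([i]))))
            for i in range(1, n_seen + 1)]
```

Note the order invariance: xor is commutative, so the shares can arrive in
any order and the final accumulator is still BP.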

Non-interactive to Interactive AMPs
===================================

The sender simply receives an ID and amount from the receiver in an invoice
before initiating the protocol. The receiver should only consider the
invoice settled if the total amount received in partial payments containing
ID matches or exceeds the amount specified in the invoice. With this
variant, the receiver is able to map all partial payments to a
pre-generated invoice statement.
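
A minimal sketch of this settlement rule (the bookkeeping structures here
are hypothetical, nothing about them is prescribed by the protocol):

```python
# invoices maps ID -> amount due; received maps ID -> running sum of the
# partial payment amounts seen so far for that ID.
invoices = {}
received = {}

def on_partial(payment_id: bytes, amount: int) -> bool:
    received[payment_id] = received.get(payment_id, 0) + amount
    # Only consider the invoice settled once the partials containing this
    # ID match or exceed the invoiced amount.
    return received[payment_id] >= invoices.get(payment_id, float("inf"))
```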

Additive Shares vs Threshold-Shares
===================================

The biggest reason to use additive shares seems to be atomicity. Threshold
shares open the door to some partial payments being settled, even if others
are left in flight. We haven't yet come up with a good reason for using
threshold schemes, but there seem to be plenty of reasons against them.

Reconstruction of additive shares can be done iteratively, and is a win for
the storage and computation requirements on the receiving end. If the
sender decides to use fewer than n partial payments, the remaining shares
could be included in the EOB of the final partial payment to allow the
receiver to reconstruct sooner. The receiver could also optimistically do
partial reconstruction on this last aggregate value.


Adaptive AMPs
=============

The sender may not always be aware of how many partial payments they wish
to send at the time of the first partial payment, at which point the
simplified protocol would require n to be chosen. To accommodate, the above
scheme can be adapted to handle a dynamically chosen n by iteratively
constructing the shared secrets as follows.

Starting with a base preimage BP, the key trick is that the sender
remembers the difference between the base preimage and the xor-sum of all
partial preimages used so far. The relation is described using the
following equations:

    X_0 = 0
    X_i = X_{i-1} ^ s_i
    X_n = BP ^ X_{n-1}

where if n = 1, X_1 = BP, implying that this is in fact a generalization of
the single, non-interactive payment scheme mentioned above. For i = 1, ...,
n-1, the sender sends s_i in the EOB, and X_n for the n-th share.

Iteratively reconstructing s_1 ^ … ^ s_{n-1} ^ X_n = BP allows the receiver
to compute all relevant r_i = H(BP || i) and h_i = H(r_i). Lastly, the
final number of partial payments n could be signaled in the final EOB,
which would also serve as a sentinel value for signaling completion. In
response to DoS vectors stemming from unknown values of n, implementations
could consider advertising a maximum value for n, or adopting some sort of
framing pattern for conveying that more partial payments are on the way.
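
Under the same toy assumptions as before (32-byte shares, ^ as xor), the
adaptive construction and its order-invariant reconstruction can be checked
mechanically:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

bp = os.urandom(32)  # base preimage, fixed up front by the sender

# Sender: emit shares s_1..s_{n-1} on the fly, tracking X_i = X_{i-1} ^ s_i.
n = 4
shares = [os.urandom(32) for _ in range(n - 1)]
x = bytes(32)  # X_0 = 0
for s in shares:
    x = xor(x, s)  # X_i = X_{i-1} ^ s_i
final_share = xor(bp, x)  # X_n = BP ^ X_{n-1}, sent in the last EOB

# Receiver: xor everything received; the result is BP regardless of order.
acc = bytes(32)
for s in shares + [final_share]:
    acc = xor(acc, s)
assert acc == bp
```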

We can further modify our usage of the per-hop payloads to send
(H(BP), s_i) to consume most of the EOB sent from sender to receiver. In
this scenario, we'd repurpose the 11 bytes *after* our signalling byte in
the unused byte section to store the payment ID (which should be unique for
each payment). In the case of a non-interactive payment, this will be
unused, while for interactive payments, this will be the ID within the
invoice. To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash
of the BP, and 32 bytes for the partial pre-image share, leaving an un-used
byte in the payload.
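
One possible packing of this slimmer layout, purely as an illustration (the
offsets and helper names here are assumptions of this sketch, not anything
standardized):

```python
AMP_VERSION = 0x01  # signalling byte placed in the first unused padding byte

def pack_final_hop_padding(payment_id: bytes) -> bytes:
    # The final hop's 12 unused bytes: 1 version byte + 11-byte payment ID.
    assert len(payment_id) == 11
    return bytes([AMP_VERSION]) + payment_id

def pack_extra_hop(h_bp: bytes, s_i: bytes) -> bytes:
    # The dummy extra hop carries the 2-tuple (H(BP), s_i):
    # 32 bytes + 32 bytes, leaving 1 unused byte of the 65-byte payload.
    assert len(h_bp) == 32 and len(s_i) == 32
    return h_bp + s_i + bytes(1)
```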


Cross-Chain AMPs
================

AMPs can be used to pay a receiver in multiple currencies
atomically... which is pretty cool :D


Open Research Questions
=======================

The above is a protocol sketch to achieve atomic multi-path payments over
Lightning. The details concerning onion blob usage serve as a template that
future protocols can draw upon in order to deliver additional data to *any*
hop in the route. However, there are still a few open questions before
something like this can be feasibly deployed.

1. How does the sender decide how many chunked payments to send, and the
size of each payment?

  - Upon closer examination, this seems to overlap with the task of
    congestion control within TCP. The sender may be able to utilize
    similarly inspired heuristics to gauge: (1) how large the initial
    payment should be and (2) how many subsequent payments may be required.
    Note that if the first payment succeeds, then the exchange is over in a
    single round.

2. How can AMP and HORNET be composed?

  - If we eventually integrate HORNET, then a distinct communications
    session can be established to allow the sender+receiver to exchange
    up-to-date partial payment information. This may allow the sender to
    more accurately size each partial payment.

3. Can the sender's initial strategy be governed by an instance of the
push-relabel max flow algo?

4. How does this mesh with the current max HTLC limit on a commitment?

   - ATM, we have a max limit on the number of active HTLC's on a
     particular commitment transaction. We do this, as otherwise it's
     possible that the transaction is too large, and exceeds standardness
     w.r.t. transaction size. In a world where most payments use an
     AMP-like protocol, then overall at any given instant there will be
     several pending HTLC's on commitments network-wide.

     This may incentivize nodes to open more channels in order to support
     the increased commitment space utilization.


Conclusion
==========

We've presented a design outline of how to integrate atomic multi-path
payments (AMP) into Lightning. The existence of such a construct allows a
sender to atomically split a payment flow amongst several individual
payment flows. As a result, larger channels aren't as important, as it's
possible to utilize one's total outbound payment bandwidth across several
channels. Additionally, in order to support the increased load, internal
routing nodes are incentivized to have more active channels. The existence
of AMP-like payments may also increase the longevity of channels, as
there'll be smaller, more numerous payment flows, making it unlikely that a
single payment that comes across will unbalance a channel entirely. We've
also shown how one can utilize the current onion packet format to deliver
additional data from a sender to receiver that's still e2e authenticated.


-- Conner && Laolu

_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

-- 
- Bryan
http://heybryan.org/
1 512 203 0507
