Subject: [bitcoin-dev] Fwd: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
From: Bryan Bishop
Date: Thu, 8 Feb 2018 11:49:23 -0600
To: Bitcoin Dev

---------- Forwarded message ----------
From: Olaoluwa Osuntokun
Date: Mon, Feb 5, 2018 at 11:26 PM
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
To: lightning-dev

Hi Y'all,

A common question I've seen concerning Lightning is: "I have five $2 channels, is it possible for me to *atomically* send $6 to fulfill a payment?". The answer to this question is "yes", provided that the receiver waits to pull all HTLCs until the sum matches their invoice. Typically, one assumes that the receiver will supply a payment hash, and the sender will re-use the payment hash for all streams. 
This has the downside of payment hash re-use across *multiple* payments (which can already easily be correlated), and also has a failure mode where if the sender fails to actually satisfy all the payment flows, then the receiver can still just pull the monies (and possibly not disperse a service, or w/e).

Conner Fromknecht and I have come up with a way to achieve this over Lightning while (1) not re-using any payment hashes across all payment flows, and (2) adding a *strong* guarantee that the receiver won't be paid until *all* partial payment flows are extended. We call this scheme AMP (Atomic Multi-path Payments). It can be experimented with on Lightning *today* with the addition of a new feature bit to gate this new feature. The beauty of the scheme is that it requires no fundamental changes to the protocol as it is now, as the negotiation is strictly *end-to-end* between sender and receiver.

TL;DR: we repurpose some unused space in the per-hop payload of the onion blob to signal our protocol (and deliver some protocol-specific data), then use additive secret sharing to ensure that the receiver can't pull the payment until they have enough shares to reconstruct the original pre-image.

Protocol Goals
==============

1. Atomicity: The logical transaction should either succeed or fail in its entirety. Naturally, this implies that the receiver should not be able to settle *any* of the partial payments until all of them have arrived.

2. Avoid Payment Hash Reuse: The payment preimages validated by the consensus layer should be distinct for each partial payment. Primarily, this helps avoid correlation of the partial payments, and ensures that malicious intermediaries straddling partial payments cannot steal funds.

3. Order Invariance: The protocol should be forgiving to the order in which partial payments arrive at the destination, adding robustness in the face of delays or routing failures.

4. Non-interactive Setup: It should be possible for the sender to perform an AMP without directly coordinating with the receiving node. Predominantly, this means that the *sender* is able to determine the number of partial payments to use for a particular AMP, which makes sense since they will be the one fronting the fees for the cost of this parameter. Plus, we can always turn a non-interactive protocol into an interactive one for the purposes of invoicing.

Protocol Benefits
=================

Sending payments predominantly over an AMP-like protocol has several clear benefits:

  - Eliminates the constraint that a single path from sender to receiver must have sufficient directional capacity. This reduces the pressure to have larger channels in order to support larger payment flows. As a result, the payment graph can become very diffuse without sacrificing payment utility.

  - Reduces strain from larger payments on individual paths, and allows liquidity imbalances to be more diffuse. We expect this to have a non-negligible impact on channel longevity. This is due to the fact that with usage of AMP, payment flows are typically *smaller*, meaning that each payment will unbalance a channel to a lesser degree than with one giant flow.

  - Potential fee savings for larger payments, contingent on there being a super-linear component to routed fees. It's possible that with modifications to the fee schedule, it's actually *cheaper* to send payments over multiple flows rather than one giant flow.

  - Allows for logical payments larger than the current maximum value of an individual payment. At the moment we have a (temporary) limit on the max payment size. With AMP, this can be side-stepped as each flow can be up to the max size, with the sum of all flows exceeding the max. 
  - Given sufficient path diversity, AMPs may improve the privacy of LN. Intermediaries are now unaware of how much of the total payment they are forwarding, or even if they are forwarding a partial payment at all.

  - Using smaller payments increases the set of possible paths a partial payment could have taken, which reduces the effectiveness of static analysis techniques involving channel capacities and the plaintext values being forwarded.

Protocol Overview
=================

This design can be seen as a generalization of the single, non-interactive payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode extra data for the receiver. In that design, the extra data includes a payment preimage that the receiver can use to settle back the payment. EOBs and some method of parsing them are really the only requirement for this protocol to work. Thus, only the sender and receiver need to implement this feature in order for it to function, which can be announced using a feature bit.

First, let's review the current format of the per-hop payload for each node described in BOLT-0004. 
┌───────────────┬───────────────────┬────────────────┬───────────────────────┬─────────────────┬─────────────────┐
│Realm (1 byte) │Next Addr (8 bytes)│Amount (8 bytes)│Outgoing CLTV (4 bytes)│Unused (12 bytes)│ HMAC (32 bytes) │
└───────────────┴───────────────────┴────────────────┴───────────────────────┴─────────────────┴─────────────────┘
◀──────────────────────────────────────────── 65 Bytes Per Hop ────────────────────────────────────────────▶

Currently, *each* node gets a 65-byte payload. We use this payload to give each node instructions on *how* to forward a payment. 
We tell each node: the realm (or chain to forward on), the next node to forward to, the amount to forward (this is where fees are extracted, by forwarding out less than came in), the outgoing CLTV (allows verification that the prior node didn't modify any values), and finally an HMAC over the entire thing.

Two important points:

1. We have 12 bytes for each hop that are currently unpurposed and can be used by application protocols to signal a new interpretation of bytes and also deliver additional encrypted+authenticated data to *each* hop.

2. The protocol currently has a hard limit of 20 hops. With this feature we ensure that the packet stays fixed-size during processing in order to avoid leaking positional information. Typically most payments won't use all 20 hops; as a result, we can use the remaining hops to stuff in *even more* data.

Protocol Description
====================

The solution we propose is Atomic Multi-path Payments (AMPs). At a high level, this leverages EOBs to deliver additive shares of a base preimage, from which the payment preimages of partial payments can be derived. The receiver can only construct this value after having received all of the partial payments, satisfying the atomicity constraint.

The basic protocol:

Primitives
==========

Let H be a CRH (collision-resistant hash) function. Let || denote concatenation. Let ^ denote xor.

Sender Requirements
===================

The parameters to the sending procedure are a random identifier ID, the number of partial payments n, and the total payment value V. Assume the sender has some way of dividing V such that V = v_1 + … + v_n. To begin, the sender builds the base preimage BP, from which n partial preimages will be derived. Next, the sender samples n additive shares s_1, …, s_n, and takes their xor-sum to compute BP = s_1 ^ … ^ s_n. 
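As a concrete illustration, this share-sampling step can be sketched in Python. This is a toy sketch, not implementation code; 32-byte shares and the function names are assumptions:

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise xor of two equal-length byte strings (the ^ above)."""
    return bytes(x ^ y for x, y in zip(a, b))

def sample_shares(n: int):
    """Sample n additive 32-byte shares; their xor-sum is the base preimage BP."""
    shares = [os.urandom(32) for _ in range(n)]
    bp = reduce(xor, shares)  # BP = s_1 ^ ... ^ s_n
    return shares, bp
```

Since each share is uniformly random, any n-1 of them reveal nothing about BP; the receiver needs all n shares to recover it.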
With the base preimage created, the sender now moves on to constructing the n partial payments. For each i in [1,n], the sender deterministically computes the partial preimage r_i = H(BP || i), by concatenating the sequence number i to the base preimage and hashing the result. Afterwards, it applies H to determine the payment hash to use in the i'th partial payment as h_i = H(r_i). Note that with this preimage derivation scheme, once the payments are pulled, each pre-image is distinct and indistinguishable from any other.

With all of the pieces in place, the sender initiates the i'th payment by constructing a route to the destination with value v_i and payment hash h_i. The tuple (ID, n, s_i) is included in the EOB to be opened by the receiver.

In order to include the three-tuple within the per-hop payload for the final destination, we repurpose the _first_ byte of the unused padding bytes in the payload to signal version 0x01 of the AMP protocol (note this is a PoC outline; we would need to standardize signalling of these 12 bytes to support other protocols). Typically this byte isn't set, so its presence means that we're (1) using AMP, and (2) the receiver should consume the _next_ hop as well. So if the payment path length is actually 5, the sender tacks on an additional dummy 6th hop, encrypted with the _same_ shared secret for that hop, to deliver the e2e encrypted data.

Note that the sender can retry partial payments just as they would normal payments, since they are order-invariant, and would be indistinguishable from regular payments to intermediaries in the network.

Receiver Requirements
=====================

Upon the arrival of each partial payment, the receiver will iteratively reconstruct BP, and do some bookkeeping to figure out when to settle the partial payments. 
During this reconstruction process, the receiver does not need to be aware of the order in which the payments were sent, and in fact nothing about the incoming partial payments reveals this information to the receiver, though this can be learned after reconstructing BP.

Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but unknown index of the incoming partial payment. The receiver has access to a persistent key-value store DB that maps ID to (n, c*, BP*), where c* represents the number of partial payments received, BP* is the sum of the received additive shares, and the superscript * denotes that the value is being updated iteratively. c* and BP* both have initial values of 0.

In the basic protocol, the receiver caches the first n it sees, and verifies that all incoming partial payments have the same n. The receiver should reject all partial payments if any EOB deviates. Next, we update our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the reconstruction by one step.

If c* + 1 < n, there are still more packets in flight, so we sit tight. Otherwise, the receiver assumes all partial payments have arrived, and can begin settling them back. Using the base preimage BP = BP* ^ s_i from our final iteration, the receiver can re-derive all n partial preimages and payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through knowledge of n and BP.

Finally, the receiver settles back any outstanding payments that include payment hash h_i using the partial preimage r_i. Each r_i will appear random due to the nature of H, as will its corresponding h_i. Thus, each partial payment should appear uncorrelated, and does not reveal that it is part of an AMP nor the number of partial payments used. 
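A minimal sketch of this receiver-side bookkeeping follows. SHA-256 standing in for H, a 4-byte big-endian encoding of i, and a plain dict standing in for the persistent DB are all assumptions for illustration:

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in for the CRH function H; SHA-256 is an assumption.
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

db = {}  # ID -> (n, c*, BP*); a dict stands in for the persistent store

def on_partial_payment(payment_id: bytes, n: int, s_i: bytes):
    """Fold one decoded EOB tuple (ID, n, s_i) into the reconstruction.

    Returns the list of (r_i, h_i) pairs once all n shares have arrived,
    or None while more partial payments are still in flight."""
    stored_n, count, bp_star = db.get(payment_id, (n, 0, bytes(32)))
    if stored_n != n:
        # An EOB deviates from the cached n: reject all partials for this ID.
        raise ValueError("inconsistent n across partial payments")
    count, bp_star = count + 1, xor(bp_star, s_i)  # DB[ID] = (n, c*+1, BP* ^ s_i)
    db[payment_id] = (stored_n, count, bp_star)
    if count < n:
        return None  # still more packets in flight, sit tight
    bp = bp_star  # BP = s_1 ^ ... ^ s_n
    preimages = [H(bp + i.to_bytes(4, "big")) for i in range(1, n + 1)]
    return [(r, H(r)) for r in preimages]  # (r_i, h_i) for each i
```

Note that the shares can be fed in any order and the result is the same, matching the order-invariance goal.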
Non-interactive to Interactive AMPs
===================================

The sender simply receives an ID and amount from the receiver in an invoice before initiating the protocol. The receiver should only consider the invoice settled if the total amount received in partial payments containing ID matches or exceeds the amount specified in the invoice. With this variant, the receiver is able to map all partial payments to a pre-generated invoice statement.

Additive Shares vs Threshold-Shares
===================================

The biggest reason to use additive shares seems to be atomicity. Threshold shares open the door to some partial payments being settled, even if others are left in flight. We haven't yet come up with a good reason for using threshold schemes, but there seem to be plenty against it.

Reconstruction of additive shares can be done iteratively, and is a win for the storage and computation requirements on the receiving end. If the sender decides to use fewer than n partial payments, the remaining shares could be included in the EOB of the final partial payment to allow the receiver to reconstruct sooner. The receiver could also optimistically do partial reconstruction on this last aggregate value.

Adaptive AMPs
=============

The sender may not always be aware of how many partial payments they wish to send at the time of the first partial payment, at which point the simplified protocol would require n to be chosen. To accommodate, the above scheme can be adapted to handle a dynamically chosen n by iteratively constructing the shared secrets as follows. Starting with a base preimage BP, the key trick is that the sender remembers the difference between the base preimage and the sum of all shares used so far. 
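Concretely, this running-difference bookkeeping on the sender's side might look like the following toy Python sketch; 32-byte shares and the class and method names are illustrative assumptions:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class AdaptiveSender:
    """Tracks X = s_1 ^ ... ^ s_i, the xor of all shares sent so far,
    so the final share can be chosen to make everything sum to BP."""

    def __init__(self, bp: bytes):
        self.bp = bp
        self.x = bytes(32)  # X_0 = 0

    def next_share(self) -> bytes:
        """s_i for i = 1 .. n-1: fresh randomness, folded into the running xor."""
        s = os.urandom(32)
        self.x = xor(self.x, s)  # X_i = X_{i-1} ^ s_i
        return s

    def final_share(self) -> bytes:
        """X_n = BP ^ X_{n-1}: the xor of all shares sent now equals BP."""
        return xor(self.bp, self.x)
```

With n=1 the sender ships final_share() immediately, which is just BP itself, recovering the single, non-interactive payment case.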
The relation is described using the following equations:

    X_0 = 0
    X_i = X_{i-1} ^ s_i
    X_n = BP ^ X_{n-1}

where if n=1, X_1 = BP, implying that this is in fact a generalization of the single, non-interactive payment scheme mentioned above. For i=1, ..., n-1, the sender sends s_i in the EOB, and X_n for the n-th share.

Iteratively reconstructing s_1 ^ … ^ s_{n-1} ^ X_n = BP allows the receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i). Lastly, the final number of partial payments n could be signaled in the final EOB, which would also serve as a sentinel value for signaling completion. In response to DoS vectors stemming from unknown values of n, implementations could consider advertising a maximum value for n, or adopting some sort of framing pattern for conveying that more partial payments are on the way.

We can further modify our usage of the per-hop payloads to send (H(BP), s_i) to consume most of the EOB sent from sender to receiver. In this scenario, we'd repurpose the 11 bytes *after* our signalling byte in the unused byte section to store the payment ID (which should be unique for each payment). In the case of a non-interactive payment, this will be unused, while for interactive payments, it will be the ID within the invoice. To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash of the BP, and 32 bytes for the partial pre-image share, leaving an unused byte in the payload.

Cross-Chain AMPs
================

AMPs can be used to pay a receiver in multiple currencies atomically... which is pretty cool :D

Open Research Questions
=======================

The above is a protocol sketch to achieve atomic multi-path payments over Lightning. The details concerning onion blob usage serve as a template that future protocols can draw upon in order to deliver additional data to *any* hop in the route. 
However, there are still a few open questions before something like this can be feasibly deployed.

1. How does the sender decide how many chunked payments to send, and the size of each payment?

   - Upon closer examination, this seems to overlap with the task of congestion control within TCP. The sender may be able to utilize TCP-inspired heuristics to gauge: (1) how large the initial payment should be and (2) how many subsequent payments may be required. Note that if the first payment succeeds, then the exchange is over in a single round.

2. How can AMP and HORNET be composed?

   - If we eventually integrate HORNET, then a distinct communications session can be established to allow the sender+receiver to exchange up-to-date partial payment information. This may allow the sender to more accurately size each partial payment.

3. Can the sender's initial strategy be governed by an instance of the Push-relabel max flow algo?

4. How does this mesh with the current max HTLC limit on a commitment?

   - ATM, we have a max limit on the number of active HTLCs on a particular commitment transaction. We do this, as otherwise it's possible that the transaction is too large and exceeds standardness w.r.t transaction size. In a world where most payments use an AMP-like protocol, at any given instant there will be many more pending HTLCs on commitments network-wide. This may incentivize nodes to open more channels in order to support the increased commitment space utilization.

Conclusion
==========

We've presented a design outline of how to integrate atomic multi-path payments (AMP) into Lightning. The existence of such a construct allows a sender to atomically split a payment flow amongst several individual payment flows. As a result, larger channels aren't as important, as it's possible to utilize one's total outbound payment bandwidth across several channels. 
Additionally, in order to support the increased load, internal routing nodes are incentivized to have more active channels. The existence of AMP-like payments may also increase the longevity of channels, as there'll be smaller, more numerous payment flows, making it unlikely that a single payment entirely unbalances a channel. We've also shown how one can utilize the current onion packet format to deliver additional data from a sender to receiver that's still e2e authenticated.

-- Conner && Laolu

_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

-- 
- Bryan
http://heybryan.org/
1 512 203 0507

---------- Forwarded messag= e ----------
From: Olaoluwa Osuntokun = <laolu32@gmail.co= m>
Date: Mon, Feb 5, 2018 at 11:26 PM
Subject: [Lightni= ng-dev] AMP: Atomic Multi-Path Payments over Lightning
To: lightning-dev= <lightning-d= ev@lists.linuxfoundation.org>


Hi Y&= #39;all,=C2=A0

A common question I've seen con= cerning Lightning is: "I have five $2
channels, is it possib= le for me to *atomically* send $6 to fulfill a
payment?". Th= e answer to this question is "yes", provided that the receiver
waits to pull all HTLC's until the sum matches their invoice. T= ypically, one
assumes that the receiver will supply a payment has= h, and the sender will
re-use the payment hash for all streams. T= his has the downside of payment
hash re-use across *multiple* pay= ments (which can already easily be
correlated), and also has a fa= ilure mode where if the sender fails to
actually satisfy all the = payment flows, then the receiver can still just
pull the monies (= and possibly not disperse a service, or w/e).

Conn= er Fromknecht and I have come up with a way to achieve this over
= Lightning while (1) not re-using any payment hashes across all payment
flows, and (2) adding a *strong* guarantee that the receiver won'= t be paid
until *all* partial payment flows are extended. We call= this scheme AMP
(Atomic Multi-path Payments). It can be experime= nted with on Lightning
*today* with the addition of a new feature= bit to gate this new
feature. The beauty of the scheme is that i= t requires no fundamental changes
to the protocol as is now, as t= he negotiation is strictly *end-to-end*
between sender and receiv= er.

TL;DR: we repurpose some unused space in the o= nion per-hop payload of the
onion blob to signal our protocol (an= d deliver some protocol-specific data),
then use additive secret = sharing to ensure that the receiver can't pull the
payment un= til they have enough shares to reconstruct the original pre-image.


Protocol Goals
=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D
1. Atomicity: The logical transaction sh= ould either succeed or fail in
entirety. Naturally, this implies = that the receiver should not be unable to
settle *any* of the par= tial payments, until all of them have arrived.

2. = Avoid Payment Hash Reuse: The payment preimages validated by the
= consensus layer should be distinct for each partial payment.=C2=A0 Primaril= y,
this helps avoid correlation of the partial payments, and ensu= res that
malicious intermediaries straddling partial payments can= not steal funds.

3. Order Invariance: The protocol= should be forgiving to the order in which
partial payments arriv= e at the destination, adding robustness in the face of
delays or = routing failures.

4. Non-interactive Setup: It sho= uld be possible for the sender to perform an
AMP without directly= coordinating with the receiving node. Predominantly,
this means = that the *sender* is able to determine the number of partial
paym= ents to use for a particular AMP, which makes sense since they will be
the one fronting the fees for the cost of this parameter. Plus, we ca= n
always turn a non-interactive protocol into an interactive one = for the
purposes of invoicing.


Protocol Benefits=E2=80=A8
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D

Sending pay payments predomi= nantly over an AMP-like protocol has several
clear benefits:

=C2=A0 - Eliminates the constraint that a single path = from sender to receiver
=C2=A0 =C2=A0 with sufficient directional= capacity. This reduces the pressure to have
=C2=A0 =C2=A0 larger= channels in order to support larger payment flows. As a result,
= =C2=A0 =C2=A0 the payment graph be very diffused, without sacrificing payme= nt
=C2=A0 =C2=A0 utility

=C2=A0 - Reduce= s strain from larger payments on individual paths, and allows the
=C2=A0 =C2=A0 liquidity imbalances to be more diffuse. We expect this to h= ave a
=C2=A0 =C2=A0 non-negligible impact on channel longevity. T= his is due to the fact that
=C2=A0 =C2=A0 with usage of AMP, paym= ent flows are typically *smaller* meaning that
=C2=A0 =C2=A0 each= payment will unbalance a channel to a lesser degree that
=C2=A0 = =C2=A0 with one giant flow.

=C2=A0 - Potential fee= savings for larger payments, contingent on there being a
=C2=A0 = =C2=A0 super-linear component to routed fees. It's possible that with
=C2=A0 =C2=A0 modifications to the fee schedule, it's actually= *cheaper* to send
=C2=A0 =C2=A0 payments over multiple flows rat= her than one giant flow.

=C2=A0 - Allows for logic= al payments larger than the current maximum value of an
=C2=A0 = =C2=A0 individual payment. Atm we have a (temporarily) limit on the max pay= ment
=C2=A0 =C2=A0 size. With AMP, this can be side stepped as ea= ch flow can be up the max
=C2=A0 =C2=A0 size, with the sum of all= flows exceeding the max.

=C2=A0 - Given sufficien= t path diversity, AMPs may improve the privacy of LN
=C2=A0 =C2= =A0 Intermediaries are now unaware to how much of the total payment they ar= e
=C2=A0 =C2=A0 forwarding, or even if they are forwarding a part= ial payment at all.

=C2=A0 - Using smaller payment= s increases the set of possible paths a partial
=C2=A0 =C2=A0 pay= ment could have taken, which reduces the effectiveness of static
= =C2=A0 =C2=A0 analysis techniques involving channel capacities and the plai= ntext
=C2=A0 =C2=A0 values being forwarded.

<= div>
Protocol Overview
=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D
This design can be seen as a generali= zation of the single, non-interactive
payment scheme, that uses d= ecoding of extra onion blobs (EOBs?) to encode
extra data for the= receiver. In that design, the extra data includes a
payment prei= mage that the receiver can use to settle back the payment. EOBs
a= nd some method of parsing them are really the only requirement for this
protocol to work. Thus, only the sender and receiver need to impleme= nt this
feature in order for it to function, which can be announc= ed using a feature
bit.=C2=A0

First, let= 's review the current format of the per-hop payload for each node
=
described in BOLT-0004.

┌───────────────┬───────────────────┬────────────────┬───────────────────────┬─────────────────┬─────────────────┐
│Realm (1 byte) │Next Addr (8 bytes)│Amount (8 bytes)│Outgoing CLTV (4 bytes)│Unused (12 bytes)│ HMAC (32 bytes) │
└───────────────┴───────────────────┴────────────────┴───────────────────────┴─────────────────┴─────────────────┘
■────────────────────────────────────────────────────────────────────────────────────────────────────────────────■
                                               ┌─────────────────┐
                                               │65 Bytes Per Hop │
                                               └─────────────────┘

Currently, *each* node gets a 65-byte payload. We use this payload to give
each node instructions on *how* to forward a payment. We tell each node: the
realm (or chain to forward on), the next node to forward to, the amount to
forward (this is where fees are extracted by forwarding out less than in),
the outgoing CLTV (allows verification that the prior node didn't modify any
values), and finally an HMAC over the entire thing.

Two important points:

  1. We have 12 bytes for each hop that are currently unpurposed and can be
  used by application protocols to signal new interpretations of bytes and
  also deliver additional encrypted+authenticated data to *each* hop.

  2. The protocol currently has a hard limit of 20 hops. With this feature
  we ensure that the packet stays fixed-sized during processing in order to
  avoid leaking positional information. Typically most payments won't use
  all 20 hops; as a result, we can use the remaining hops to stuff in *even
  more* data.


Protocol Description
====================
The solution we propose is Atomic Multi-path Payments (AMPs). At a high
level, this leverages EOBs to deliver additive shares of a base preimage,
from which the payment preimages of partial payments can be derived. The
receiver can only construct this value after having received all of the
partial payments, satisfying the atomicity constraint.
<= br>
The basic protocol:

Primitives
==========
Let H be a CRH function.
Let || denote concatenation.
Let ^ denote xor.


Sender Requirements
===================
The parameters to the sending procedure are a random identifier ID, the
number of partial payments n, and the total payment value V. Assume the
sender has some way of dividing V such that V = v_1 + … + v_n.

To begin, the sender builds the base preimage BP, from which n partial
preimages will be derived. Next, the sender samples n additive shares s_1,
…, s_n, and XORs them together to compute BP = s_1 ^ … ^ s_n.

With the base preimage created, the sender now moves on to constructing the
n partial payments. For each i in [1,n], the sender deterministically
computes the partial preimage r_i = H(BP || i), by concatenating the
sequence number i to the base preimage and hashing the result. Afterwards,
it applies H to determine the payment hash to use in the i'th partial
payment as h_i = H(r_i). Note that with this preimage derivation scheme,
once the payments are pulled, each preimage is distinct and
indistinguishable from any other.

With = all of the pieces in place, the sender initiates the i=E2=80=99th payment b= y
constructing a route to the destination with value v_i and paym= ent hash h_i.
The tuple (ID, n, s_i) is included in the EOB to be= opened by the receiver.
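The sender-side derivation can be sketched in a few lines of Python. Two details are assumptions here, since the post leaves them unspecified: H is taken to be SHA-256, and the index i is appended as a 4-byte big-endian integer.

```python
import os
from functools import reduce
from hashlib import sha256

def H(data: bytes) -> bytes:
    # Stand-in for the CRH function; SHA-256 is an assumption, not normative.
    return sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def sender_prepare(n: int):
    # Sample n additive shares; the base preimage is BP = s_1 ^ ... ^ s_n.
    shares = [os.urandom(32) for _ in range(n)]
    BP = reduce(xor, shares)
    payments = []
    for i in range(1, n + 1):
        r_i = H(BP + i.to_bytes(4, "big"))     # partial preimage r_i = H(BP || i)
        h_i = H(r_i)                           # payment hash h_i = H(r_i)
        payments.append((h_i, shares[i - 1]))  # s_i rides to the receiver in the EOB
    return BP, payments

BP, payments = sender_prepare(3)
```

Each tuple pairs the payment hash used on the route with the share delivered in that route's EOB; the receiver can pull none of them until it has every s_i.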

In order to include the 3-tuple within the per-hop payload for the final
destination, we repurpose the _first_ byte of the un-used padding bytes in
the payload to signal version 0x01 of the AMP protocol (note this is a PoC
outline; we would need to standardize signalling of these 12 bytes to
support other protocols). Typically this byte isn't set, so its existence
means that we're (1) using AMP, and (2) the receiver should consume the
_next_ hop as well. So if the payment length is actually 5, the sender tacks
on an additional dummy 6th hop, encrypted with the _same_ shared secret for
that hop, to deliver the e2e encrypted data.

Note, the sender can retry partial payments just as they would normal
payments, since they are order invariant, and they would be
indistinguishable from regular payments to intermediaries in the network.


Receiver Requirements
=====================
Upon the arrival of each partial payment, the receiver will iteratively
reconstruct BP, and do some bookkeeping to figure out when to settle the
partial payments. During this reconstruction process, the receiver does not
need to be aware of the order in which the payments were sent, and in fact
nothing about the incoming partial payments reveals this information to the
receiver, though this can be learned after reconstructing BP.

Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but
unknown index of the incoming partial payment. The receiver has access to a
persistent key-value store DB that maps ID to (n, c*, BP*), where c*
represents the number of partial payments received, BP* is the sum of the
received additive shares, and the superscript * denotes that the value is
being updated iteratively. c* and BP* both have initial values of 0.

In the basic protocol, the receiver caches the first n it sees, and
verifies that all incoming partial payments have the same n. The receiver
should reject all partial payments if any EOB deviates. Next, we update
our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the
reconstruction by one step.

If c* + 1 < n, there are still more packets in flight, so we sit tight.
Otherwise, the receiver assumes all partial payments have arrived, and can
begin settling them back. Using the base preimage BP = BP* ^ s_i from our
final iteration, the receiver can re-derive all n partial preimages and
payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through
knowledge of n and BP.
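The receiver-side bookkeeping can be sketched as follows, under the same non-normative assumptions as before (SHA-256 for H, 32-byte shares, 4-byte big-endian index); the in-memory dict stands in for the persistent DB.

```python
from hashlib import sha256

DB = {}  # ID -> (n, c*, BP*); a real node would persist this

def on_partial_payment(ID, n, s_i):
    n_stored, c, bp = DB.get(ID, (n, 0, bytes(32)))
    if n_stored != n:
        # An EOB deviates: reject all partial payments for this ID.
        raise ValueError("inconsistent n across partial payments")
    # Fold in this share: BP* ^= s_i, c* += 1.
    bp = bytes(x ^ y for x, y in zip(bp, s_i))
    c += 1
    DB[ID] = (n, c, bp)
    if c < n:
        return None  # more packets still in flight; sit tight
    # All shares received: bp now equals BP. Re-derive every partial
    # preimage r_i = H(BP || i) so the held HTLCs can be settled back.
    return [sha256(bp + i.to_bytes(4, "big")).digest()
            for i in range(1, n + 1)]
```

Note that the function returns the same r_i values the sender derived, without the receiver ever learning which share arrived in which order.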

Finally, the receiver settles back any outstanding payments that include
payment hash h_i using the partial preimage r_i. Each r_i will appear random
due to the nature of H, as will its corresponding h_i. Thus, each partial
payment should appear uncorrelated, and does not reveal that it is part of
an AMP nor the number of partial payments used.

Non-interactive to Interactive AMPs
===================================

The sender simply receives an ID and amount from the receiver in an invoice
before initiating the protocol. The receiver should only consider the
invoice settled if the total amount received in partial payments containing
ID matches or exceeds the amount specified in the invoice. With this
variant, the receiver is able to map all partial payments to a pre-generated
invoice statement.


Additive Shares vs Threshold-Shares
===================================

The biggest reason to use additive shares seems to be atomicity. Threshold
shares open the door to some partial payments being settled, even if others
are left in flight. We haven't yet come up with a good reason for using
threshold schemes, but there seem to be plenty against them.

R= econstruction of additive shares can be done iteratively, and is win for
the storage and computation requirements on the receiving end. If t= he sender
decides to use fewer than n partial payments, the remai= ning shares could be
included in the EOB of the final partial pay= ment to allow the sender to
reconstruct sooner. Sender could also= optimistically do partial
reconstruction on this last aggregate = value.


Adaptive AMPs
=============

The sender may not always be aware of how many partial payments they wish to
send at the time of the first partial payment, at which point the simplified
protocol would require n to be chosen. To accommodate, the above scheme can
be adapted to handle a dynamically chosen n by iteratively constructing the
shared secrets as follows.

Starting with a base preimage BP, the key trick is that the sender remembers
the difference between the base preimage and the sum of all partial
preimages used so far. The relation is described by the following
equations:

=C2=A0 =C2= =A0 X_0 =3D 0=E2=80=A8
=C2=A0 =C2=A0 X_i =3D X_{i-1} ^ s_i=E2=80= =A8
=C2=A0 =C2=A0 X_n =3D BP ^ X_{n-1}=C2=A0

where if n=1, X_1 = BP, implying that this is in fact a generalization of
the single, non-interactive payment scheme mentioned above. For i=1, ...,
n-1, the sender sends s_i in the EOB, and X_n for the n-th share.

Iteratively reconstructing s_1 ^ … ^ s_{n-1} ^ X_n = BP allows the
receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i). Lastly,
the final number of partial payments n could be signaled in the final EOB,
which would also serve as a sentinel value for signaling completion. In
response to DoS vectors stemming from unknown values of n, implementations
could consider advertising a maximum value for n, or adopting some sort of
framing pattern for conveying that more partial payments are on the way.
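The iterative construction can be checked with a short sketch (32-byte values assumed): the sender emits random s_i for the first n-1 payments and X_n = BP ^ X_{n-1} for the last, so XORing everything the receiver saw recovers BP for any n, including n = 1, where the single "share" is BP itself.

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def adaptive_shares(BP: bytes, n: int):
    X = bytes(32)                  # X_0 = 0
    emitted = []
    for _ in range(n - 1):
        s_i = os.urandom(32)       # X_i = X_{i-1} ^ s_i
        X = xor(X, s_i)
        emitted.append(s_i)        # s_i is sent in the i-th EOB
    emitted.append(xor(BP, X))     # final payment carries X_n = BP ^ X_{n-1}
    return emitted

BP = os.urandom(32)
# XOR of everything the receiver saw equals BP, for any n chosen on the fly.
assert all(reduce(xor, adaptive_shares(BP, n)) == BP for n in (1, 2, 5))
```

The key property is that n never has to be fixed up front: the sender can keep emitting random shares and close out the sequence whenever it decides the payment is fully split.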

We= can further modify our usage of the per-hop payloads to send (H(BP), s_i) = to
consume most of the EOB sent from sender to receiver. In this = scenario, we'd
repurpose the 11-bytes *after* our signalling = byte in the unused byte section
to store the payment ID (which sh= ould be unique for each payment). In the case
of a non-interactiv= e payment, this will be unused. While for interactive
payments, t= his will be the ID within the invoice. To deliver this slimmer
2-= tuple, we'll use 32-bytes for the hash of the BP, and 32-bytes for the<= /div>
partial pre-image share, leaving an un-used byte in the payload.<= /div>
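The arithmetic of this slimmer layout can be sketched with `struct` (all values are placeholders; only the sizes are taken from the text): the 12 unused bytes of the final hop's payload hold the version byte plus an 11-byte payment ID, while the extra hop's 65-byte payload carries the 2-tuple.

```python
import struct

AMP_VERSION = 0x01                 # hypothetical signalling byte value
payment_id = b"\x22" * 11          # 11-byte payment ID after the signal byte
unused_section = struct.pack(">B11s", AMP_VERSION, payment_id)
assert len(unused_section) == 12   # fills the 12 unused bytes exactly

h_bp = b"\x33" * 32                # H(BP), placeholder
s_i = b"\x44" * 32                 # partial preimage share, placeholder
# 'x' is struct's pad byte: 32 + 32 + 1 = 65, matching the per-hop payload.
eob = struct.pack(">32s32sx", h_bp, s_i)
assert len(eob) == 65
```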


Cross-Chain AMPs
================

AMPs can be used to pay a receiver in multiple currencies atomically...which
is pretty cool :D


Open Research Questions
=======================

The above is a protocol sketch to achieve atomic multi-path payments over
Lightning. The details concerning onion blob usage serve as a template that
future protocols can draw upon in order to deliver additional data to *any*
hop in the route. However, there are still a few open questions before
something like this can be feasibly deployed.

1. How does the sender decide how many chunked payments to send, and the
size of each payment?

   - Upon closer examination, this seems to overlap with the task of
     congestion control within TCP. The sender may be able to utilize
     similarly inspired heuristics to gauge: (1) how large the initial
     payment should be and (2) how many subsequent payments may be required.
     Note that if the first payment succeeds, then the exchange is over in a
     single round.

2. How can AMP and HORNET be composed?

   - If we eventually integrate HORNET, then a distinct communications
     session can be established to allow the sender+receiver to exchange
     up-to-date partial payment information. This may allow the sender to
     more accurately size each partial payment.

3. Can the sender's initial strategy be governed by an instance of the
   Push-relabel max flow algo?

4. How does this mesh with the current max HTLC limit on a commitment?

   - ATM, we have a max limit on the number of active HTLCs on a particular
     commitment transaction. We do this as otherwise it's possible that the
     transaction becomes too large, and exceeds standardness w.r.t.
     transaction size. In a world where most payments use an AMP-like
     protocol, then overall at any given instant there will be several
     pending HTLCs on commitments network-wide.

     This may incentivize nodes to open more channels in order to support
     the increased commitment space utilization.


Conclusion
==========

We've presented a design outline of how to integrate atomic multi-path
payments (AMP) into Lightning. The existence of such a construct allows a
sender to atomically split a payment flow amongst several individual payment
flows. As a result, larger channels aren't as important, as it's possible to
utilize one's total outbound payment bandwidth across several channels.
Additionally, in order to support the increased load, internal routing nodes
are incentivized to have more active channels. The existence of AMP-like
payments may also increase the longevity of channels, as there'll be
smaller, more numerous payment flows, making it unlikely that a single
payment unbalances a channel entirely. We've also shown how one can utilize
the current onion packet format to deliver additional data from a sender to
receiver that's still e2e authenticated.


-- Conner && Laolu


_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev




--
- Bryan
http://heybryan.org/
1 512 203 0507