From: Olaoluwa Osuntokun
Date: Tue, 21 Apr 2020 21:13:34 -0700
To: Matt Corallo
Cc: Bitcoin Protocol Discussion, lightning-dev
Subject: Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

Hi Matt,

> While this is somewhat unintuitive, there are any number of good anti-DoS
> reasons for this, eg:

None of these really strikes me as "good" reasons for this limitation, which
is at the root of this issue, and will also plague any more complex Bitcoin
contracts which rely on nested trees of transactions to confirm (CTV, Duplex,
channel factories, etc.). Regarding the various (seemingly arbitrary) package
limits, it's likely the case that any issues w.r.t. computational complexity
that may arise when trying to calculate evictions can be ameliorated with a
better choice of internal data structures.

In the end, the simplest heuristic (accept the higher fee rate package)
sidesteps all these issues and is also the most economically rational from a
miner's perspective.
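For concreteness, the "accept the higher fee rate package" heuristic can be
sketched as follows (the package representation here is purely illustrative,
not Bitcoin Core's actual mempool types):

```python
# Sketch of the "accept the higher package feerate" heuristic. A "package"
# here is just a list of (fee_sats, vsize_vbytes) pairs; the names and
# structure are illustrative, not real mempool code.

def package_feerate(txs):
    """Total fees divided by total virtual size, in sat/vbyte."""
    total_fee = sum(fee for fee, _ in txs)
    total_vsize = sum(vsize for _, vsize in txs)
    return total_fee / total_vsize

def prefer(package_a, package_b):
    """Return the package a feerate-maximizing miner would rather take."""
    if package_feerate(package_a) >= package_feerate(package_b):
        return package_a
    return package_b

# A small, high-feerate package beats a large package paying more absolute
# fees at a lower feerate:
small = [(1_000, 200)]                     # 5.0 sat/vbyte, 1,000 sats total
large = [(5_000, 2_000), (5_000, 3_000)]   # 2.0 sat/vbyte, 10,000 sats total
assert prefer(small, large) == small
```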
Why would one prefer a higher absolute fee package (which could be very
large) over another package with a higher total _fee rate_?

> You'll note that B would be just fine if they had a way to safely monitor the
> global mempool, and while this seems like a prudent mitigation for
> lightning implementations to deploy today, it is itself a quagmire of
> complexity

Is it really all that complex, assuming we're talking about just watching
for a certain script template (the HTLC script) in the mempool in order to
pull a pre-image as soon as possible? Early versions of lnd used the mempool
for commitment broadcast detection (which turned out to be a bad idea, so we
removed it), but at a glance I don't see why watching the mempool is so
complex.

> Further, this is a really obnoxious assumption to hoist onto lightning
> nodes - having an active full node with an in-sync mempool is a lot more
> CPU, bandwidth, and complexity than most lightning users were expecting to
> face.

This would only be a requirement for Lightning nodes that seek to be a part
of the public routing network with a desire to _forward_ HTLCs. It doesn't
affect laptops or mobile phones, which likely mostly have private channels
and don't participate in HTLC forwarding. I think it's pretty reasonable to
expect a "proper" routing node on the network to be backed by a full node.
The bandwidth concern is valid, but we'd need concrete numbers that compare
the bandwidth overhead of mempool awareness (assuming the latest and greatest
mempool syncing) with the overhead of the channel update gossip and gossip
queries which LN nodes already face today, to see how much worse off they
really would be.

As detailed a bit below, if nodes watch the mempool, then this class of
attack, assuming the anchor output format as described in the open
lightning-rfc PR, is mitigated.
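The kind of mempool watching in question is roughly this: scan each mempool
transaction's witnesses for a 32-byte element whose SHA256 matches the
payment hash of an HTLC we care about. The fetching/decoding of mempool
transactions (e.g. via bitcoind's getrawmempool/getrawtransaction RPCs) is
elided; this is a sketch under those assumptions, not lnd's actual
implementation:

```python
# Sketch of scanning decoded mempool transactions for an HTLC preimage.
# A remote party claiming the HTLC success path must reveal a 32-byte
# preimage R with SHA256(R) == payment_hash somewhere in an input's
# witness stack. `decoded_txs` is assumed to be a list of dicts in
# bitcoind's decoded-transaction format (witness items as hex strings).

import hashlib

def find_preimage(decoded_txs, payment_hash: bytes):
    """Return the first 32-byte witness element hashing to payment_hash."""
    for tx in decoded_txs:
        for vin in tx.get("vin", []):
            for item in vin.get("txinwitness", []):
                candidate = bytes.fromhex(item)
                if (len(candidate) == 32 and
                        hashlib.sha256(candidate).digest() == payment_hash):
                    return candidate
    return None

# Toy example: a fake mempool tx whose witness carries the preimage.
preimage = b"\x11" * 32
payment_hash = hashlib.sha256(preimage).digest()
mempool = [{"vin": [{"txinwitness": ["00", preimage.hex()]}]}]
assert find_preimage(mempool, payment_hash) == preimage
```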
At a glance, watching the mempool seems like a far less involved process
compared to modifying the state machine as it's defined today. By watching
the mempool and implementing the changes in #lightning-rfc/688, this issue
can be mitigated _today_. lnd 0.10 doesn't yet watch the mempool (but does
include anchors [1]), but unless I'm missing something it should be pretty
straightforward to add, which more or less resolves this issue altogether.

> not fixing this issue seems to render the whole exercise somewhat useless

Depends on if one considers watching the mempool a fix. But even with that, a
base version of anchors still resolves a number of issues, including:
eliminating the commitment fee guessing game, allowing users to pay less on
force close, being able to coalesce 2nd level HTLC transactions with the
same CLTV expiry, and actually being able to reliably enforce multi-hop HTLC
resolution.

> Instead of making the HTLC output spending more free-form with
> SIGHASH_ANYONECAN_PAY|SIGHASH_SINGLE, we clearly need to go the other
> direction - all HTLC output spends need to be pre-signed.

I'm not sure this is actually immediately workable (need to think about it
more). To see why, remember that the commit_sig message includes HTLC
signatures for the _remote_ party's commitment transaction, so they can
spend the HTLCs if they broadcast their version of the commitment (force
close). If we don't somehow also _gain_ signatures (our new HTLC signatures)
allowing us to spend HTLCs on _their_ version of the commitment, then if
they broadcast that commitment (without revoking), we're unable to redeem
any of those HTLCs at all, possibly losing money.

In an attempt to counteract this, we might say ok, the revoke message also
now includes HTLC signatures for their new commitment, allowing us to spend
our HTLCs.
This resolves things in a weaker security model, but doesn't address the
issue generally, as after they receive the commit_sig, they can broadcast
immediately, again leaving us without a way to redeem our HTLCs.

I'd need to think about it more, but it seems that following this path would
require an overhaul of the channel state machine to make presenting a new
commitment take at least _two phases_ (at least a full round trip). The
first phase would tender the commitment, but render them unable to broadcast
it. The second phase would then enter a new sub-protocol which, upon
conclusion, gives the commitment proposer valid HTLC signatures, and gives
the responder what they need to be able to broadcast their commitment and
claim their HTLCs in an atomic manner.

-- Laolu

[1]: https://github.com/lightningnetwork/lnd/pull/3821