From: Tier Nolan
Date: Sat, 12 Oct 2019 17:27:42 +0100
To: Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Chain width expansion

On Thu, Oct 10, 2019 at 5:20 PM Braydon Fuller via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:

> It would be interesting to have a succinct chainwork proof
> for all cases. Chainwork being a sum of the total proof-of-work in a
> chain. Such proofs currently only require a few headers for common cases
> and the other cases can be identified.

I wonder if a "seed" based system would be useful.

A seed is defined as a header with a very low digest.

When a new peer connects, you ask them to send you the header with the
lowest digest on their main chain. Chains ending at the strongest seeds
are kept preferentially when discarding chains.

This requires a way to download chains backwards, which the protocol
doesn't support at the moment.

The chain with the most chain work is overwhelmingly likely to contain
the header with the strongest digest. This means that the honest peer's
chain would be kept preferentially.

It also means that a node that is synced to the main chain can easily
discard noise from dishonest peers.
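The seed idea above can be sketched as a simple eviction rule. This is a toy model, not Bitcoin Core code: headers are represented by integer digests (smaller = luckier hash, i.e. a stronger seed), and the function names are hypothetical.

```python
# Toy sketch of seed-based chain eviction: when memory pressure forces us
# to discard candidate header chains, keep the ones whose strongest
# "seed" (lowest header digest) is best. Names and values are illustrative.

def strongest_seed(chain):
    """Return the lowest (i.e. strongest) header digest in a chain."""
    return min(chain)

def keep_preferred(chains, limit):
    """Keep up to `limit` chains, preferring those with stronger seeds."""
    return sorted(chains, key=strongest_seed)[:limit]

# Digests as integers: the honest chain, being the most-work chain, is
# overwhelmingly likely to contain a very low digest somewhere.
honest = [900, 12, 450]
spam_a = [800, 700, 650]
spam_b = [999, 950, 901]

kept = keep_preferred([honest, spam_a, spam_b], limit=2)
assert honest in kept        # honest chain survives eviction
assert spam_b not in kept    # weakest-seeded spam chain is discarded
```

The point of the rule is that forging a strong seed costs as much expected work as the digest implies, so spam chains cannot cheaply outrank the honest chain.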
Before downloading, they could ask the peer to provide a header with at
least 1% of the POW of the best header on the main chain starting at the
fork point. If they can't, then their fork probably has less POW than
the main chain.

> A peer could
> broadcast a few low-work header chains, reconnect and repeat ad nauseam.

I meant connected peer rather than peer. If a peer disconnects and then
reconnects as a new peer, their allocation of bandwidth/RAM resets to
zero.

Each connected peer would be allocated a certain bandwidth per minute
for headers, as in a token bucket system. New peers would start with
empty buckets.

If an active (outgoing) peer is building on a header chain, then that
chain is preferentially kept. Essentially, the last chain that each
outgoing peer built on may not be discarded.

In retrospect, that works out the same as throttling peer download, just
with a different method of throttling. In your system, peers who extend
the best chain don't get throttled, but the other peers do (with a
gradual transition). This could be accomplished by adding 80 bytes to a
peer's bucket each time it extends the main chain.

> For example, let's assume a case that the initial chain of headers was
> dishonest and with low chainwork. The initial block download retrieves
> the header chain from a single loader peer first. Once recent time is
> reached, header chains are downloaded from all outgoing peers.

The key is that it must not be possible to prevent a single honest peer
from making progress by flooding with other peers and getting the honest
peer's chain discarded.

I think parallel downloading would be better than focusing on one peer
initially. Otherwise, a dishonest peer can slowly send their headers to
prevent moving to parallel mode.

Each connected peer is given a bandwidth and RAM allowance. If a
connected peer forks off their own chain before reaching current time,
then the fork is just discarded.
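The token-bucket throttling described above (empty buckets for new connections, plus the 80-byte credit for headers that extend the main chain) could look something like this. The rate and burst constants are illustrative assumptions, not proposed values:

```python
import time

HEADER_SIZE = 80          # bytes per block header
RATE = 80 * 6             # illustrative: ~6 headers' worth of tokens per second
BUCKET_CAP = 80 * 2000    # illustrative burst allowance (~one headers batch)

class PeerBucket:
    """Hypothetical per-connection token bucket for header bandwidth.

    New peers start with an empty bucket, so disconnecting and
    reconnecting gains an attacker nothing."""

    def __init__(self):
        self.tokens = 0.0             # empty on connect
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(BUCKET_CAP, self.tokens + (now - self.last) * RATE)
        self.last = now

    def try_consume(self, n_headers):
        """Spend tokens for n_headers; False means throttle this peer."""
        self._refill()
        cost = n_headers * HEADER_SIZE
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

    def credit_main_chain(self):
        """Refund 80 bytes when a header extends the main chain, so peers
        serving the best chain are effectively never throttled."""
        self.tokens = min(BUCKET_CAP, self.tokens + HEADER_SIZE)
```

A freshly connected peer is throttled immediately (`try_consume(1)` is False until tokens accrue), while a peer whose headers keep extending the main chain earns its bandwidth back header for header.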
The RAM allowance would be sufficient to hold one header per minute
since genesis.

The header chains are relatively small (50MB), so it is not unreasonable
to expect the honest peer to send the entire chain in one go.

I wonder if there is a formula that gives the minimum chain work
required to have a particular chain length by now. One minute per header
would mean that the difficulty would increase every adjustment, so it
couldn't be maintained without an exponentially rising total chain work.

On Sat, Oct 12, 2019 at 2:41 AM Braydon Fuller via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:

> - Nodes are vulnerable during the initial sync when joining the
> network until the minimum chainwork is achieved.

Nodes should stay "headers-only" until they have hit the threshold.

It isn't really any different from a checkpoint anyway. "Download
headers until you hit this header" is about the same as "download
headers until you hit this chain work".

It would be different if header chains were downloaded from the final
checkpoint backwards. You would start at a final checkpoint and work
backwards. Each ancestor header is committed to by the final checkpoint,
so it would not be possible for a dishonest peer to fool the node during
IBD.

> This is possible if the
> loader peer is the attacker. To mitigate this there would need to be a
> minimum chainwork defined based on the current chainwork. However, such
> could also be used to prevent nodes from joining the network as it's
> rejecting rather than throttling.

I think mixing two different concepts makes this problem more complex
than needed. It looks like they are aiming for hard-coding:

A) "The main chain has at least C chainwork"
B) "All blocks after A is satisfied have at least X POW"

To me, this is equivalent to a checkpoint, without it being called a
checkpoint. The point about excluding checkpoints is that it means that
(in theory) two clients can't end up on incompatible forks due to
different checkpoints.
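The exponential-chainwork intuition wondered about earlier (minimum work to sustain one header per minute) can be sketched roughly. Headers at a 10x pace push each 2016-header retarget up by the consensus-capped factor of 4, so work grows geometrically with length. This simplified model ignores the off-by-one in the real retarget code and assumes the 4x cap binds at every adjustment:

```python
# Rough lower bound on the chain work implied by a chain of n_headers
# produced at a sustained one-per-minute pace (10x faster than the
# 10-minute target). Retargeting multiplies difficulty by min(10, 4) = 4
# every 2016 headers, so total work is a geometric sum in chain length.
# Simplified model; constants are Bitcoin's retarget parameters.

RETARGET_INTERVAL = 2016
MAX_ADJUST = 4            # per-period difficulty change is clamped to 4x

def min_chainwork_lower_bound(n_headers, start_difficulty=1):
    """Total work (in units of start_difficulty) for n_headers at 10x pace."""
    work = 0
    difficulty = start_difficulty
    remaining = n_headers
    while remaining > 0:
        step = min(remaining, RETARGET_INTERVAL)
        work += step * difficulty
        remaining -= step
        difficulty *= MAX_ADJUST   # fast blocks drive difficulty up 4x
    return work
```

Doubling the chain length far more than doubles the required work, which is why a long one-header-per-minute chain can't be cheap: a dishonest peer offering great length must also offer exponentially great work.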
The "checkpoint" is replaced by a statement by the dev team that "there
exists at least one valid chain with C chainwork", which is equivalent
to "the longest valid chain has at least C chainwork".

Two clients making those statements can't cause a permanent
incompatibility. If they pick a different C, then eventually, once the
main chain has more than the larger chain work, they will agree again.
Checkpoints don't automatically heal.

Adding in a minimum POW requirement could break the requirement for that
to happen. Just because B was met on the original main chain, a fork
isn't required to meet it.

> - It's technically a consensus change each time the minimum difficulty
> or best chainwork is updated. It is a similar consensus change as
> maintaining the last checkpoint, as it's used to prevent forking prior
> to the last checkpoint.

I agree on the min difficulty being a consensus change. The minimum
chain work is just the devs making a true statement and then using it to
optimize things.
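The healing argument above can be made concrete with a toy acceptance rule. Assuming chainwork is an integer and each client hard-codes its own minimum C (values here are illustrative):

```python
# Toy model of the minimum-chainwork statement: a client accepts a chain
# as plausibly-best only once its cumulative work reaches that client's
# hard-coded C. Two clients with different C values disagree at most
# temporarily: once the real chain's work passes max(C1, C2), both accept it.

def acceptable(chain_work, minimum_chainwork):
    return chain_work >= minimum_chainwork

C1, C2 = 100, 150   # illustrative minimums shipped by two clients

# Early on, a chain with work 120 satisfies client 1 but not client 2...
assert acceptable(120, C1) and not acceptable(120, C2)

# ...but once the main chain's work exceeds the larger C, both agree.
assert acceptable(200, C1) and acceptable(200, C2)
```

A checkpoint has no analogous property: if two clients hard-code different block hashes on different forks, no amount of additional work reconciles them.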