From: Washington Sanchez
To: Gavin Andresen
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Date: Wed, 9 Sep 2015 09:11:49 +1000
Subject: Re: [bitcoin-dev] Dynamic limit to the block size - BIP draft discussion

> If you want me to take your proposal seriously, you need to justify why
> 60% full is a good answer

Sure thing Gavin.

If you want blocks to be at least 60% full...

First off, I do not want blocks to be at least 60% full, so let me try to
explain where I got this number from:

- The idea of this parameter is to set a *triggering level* for an increase
  in the block size.
- The triggering level is the point where a reasonable medium-term trend can
  be observed: an increase in transaction volume that, left unchecked, would
  fill up blocks.
- Determining the appropriate triggering level is difficult, and it consists
  of 3 parameters (a sketch of the resulting trigger check follows this
  list):
  1. Evaluation period
     - *The period of time over which you check whether the conditions to
       trigger a block size increase hold.*
     - Ideally you want an increase to occur in response to a real increase
       in transaction volume from the market, and not some short-term spam
       attack.
     - Too short, and spam attacks can be used to trigger multiple increases
       (at least early on). Too long, and the block size doesn't increase
       fast enough to meet transaction demand.
     - I selected a period of *4032 blocks* (about four weeks).
  2. Capacity
     - *The capacity level that a majority of blocks must demonstrate in
       order to trigger a block size increase.*
     - The capacity level, in tandem with the evaluation period and the
       threshold, needs to reflect an underlying trend towards filling
       blocks.
     - If the capacity level is too low, block size increases can be
       triggered prematurely. If it is too high, the network could be
       unnecessarily jammed with transactions before an increase can kick
       in.
     - I selected a capacity level of *60%*.
  3. Threshold
     - *The number of blocks during the evaluation period that must be above
       the capacity level in order to trigger a block size increase.*
     - If blocks are getting more than 60% full over a 4032-block period,
       how many of them reflect a market-driven increase in transaction
       volume?
     - If the threshold is too low, increases could be triggered
       artificially or prematurely. If the threshold is too high, it becomes
       easier for 1-2 mining pools to prevent any increase in the block
       size, or the block size doesn't respond fast enough to a real
       increase in transaction volume.
     - I selected a threshold of *2000 blocks, or ~50%*.

- So in my proposal, if 2000+ blocks in the evaluation period are >= 60%
  full, this is an indication that real transaction volume has increased and
  we're approaching a time where blocks could be filled to capacity without
  an increase. A block size increase of 10% is then triggered.
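
To make this concrete, here is a minimal sketch of the trigger in Python
(illustrative only, not consensus code; `sizes` and `max_block_size` are
hypothetical inputs holding the byte sizes of the blocks in the most recent
evaluation window and the current limit):

    EVALUATION_PERIOD = 4032   # blocks, about four weeks
    CAPACITY_LEVEL = 0.60      # a block counts if it is >= 60% full
    THRESHOLD = 2000           # ~50% of the evaluation period
    INCREASE = 1.10            # raise the limit by 10% when triggered

    def next_max_block_size(sizes, max_block_size):
        """Apply the proposed rule to one completed evaluation window."""
        assert len(sizes) == EVALUATION_PERIOD
        full_blocks = sum(1 for s in sizes
                          if s >= CAPACITY_LEVEL * max_block_size)
        if full_blocks >= THRESHOLD:
            return int(max_block_size * INCREASE)
        return max_block_size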

A centralized decision, presumably by Satoshi, was made on the parameters
that adjust the target difficulty, rather than attempting to forecast hash
rates based on his CPU power. He allowed the system to scale to a level
where real market demand would take it. I believe the same approach should
be replicated for the block size. The trick, of course, is settling on the
right variables. I hope this proposal is a good way to do that.
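
For comparison, the difficulty rule he shipped is exactly this kind of
fixed-parameter feedback loop. A simplified sketch (2016-block interval,
two-week target timespan, adjustment clamped to a factor of 4; integer edge
cases ignored):

    TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

    def retarget(old_target, actual_timespan):
        # One period can move the target by at most a factor of 4.
        clamped = max(TARGET_TIMESPAN // 4,
                      min(actual_timespan, TARGET_TIMESPAN * 4))
        return old_target * clamped // TARGET_TIMESPAN

No forecasting of future hash rate is involved; the fixed parameters let the
system follow whatever the market actually does.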

*Some additional calculations*

Block sizes for each year are *theoretical maximums*, reached only if ALL
trigger points are activated in my proposal (unlikely, but anyway). These
calculations assume zero transactions are taken off-chain by third-party
processors or the LN, and no efficiency improvements. (A small script that
reproduces the numbers follows the list.)

- 2015
  - 1 MB/block
  - 2 tps (conservative factor, also carried on below)
  - 0.17 million tx/day
- 2016
  - 3.45 MB/block
  - 7 tps
  - 0.6 million tx/day
- 2017
  - 12 MB/block
  - 24 tps
  - 2 million tx/day
- 2018
  - 41 MB/block
  - 82 tps
  - 7 million tx/day
- 2019
  - 142 MB/block
  - 284 tps
  - 25 million tx/day
- 2020
  - 490 MB/block
  - 980 tps
  - 85 million tx/day
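
These maximums are simple compounding: at 10-minute blocks there are 52560
blocks per year, i.e. 13 complete evaluation periods, so the ceiling can
grow by at most 1.1^13 ~ 3.45x annually. A short script reproducing the
table (assuming, as above, a conservative 2 tps per MB of block space):

    BLOCKS_PER_YEAR = 365 * 144                  # 10-minute blocks
    PERIODS_PER_YEAR = BLOCKS_PER_YEAR // 4032   # = 13
    TPS_PER_MB = 2                               # conservative factor

    size_mb = 1.0
    for year in range(2015, 2021):
        tps = size_mb * TPS_PER_MB
        print(f"{year}: {size_mb:.2f} MB/block, {tps:.0f} tps, "
              f"{tps * 86400 / 1e6:.2f} million tx/day")
        size_mb *= 1.10 ** PERIODS_PER_YEAR      # ~3.45x per year

This prints 3.45 MB/block, 7 tps and 0.60 million tx/day for 2016, and ~490
MB/block, ~980 tps and ~85 million tx/day for 2020, matching the list above.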

By way of comparison, Alipay (the payment processor for the Alibaba Group's
ecosystem) processes 30 million escrow transactions per day. This gives us
at least 4-5 years to reach the present-day transaction processing capacity
of one corporation... in reality it will take a little longer, as I doubt
all block size triggers will be activated. This also gives us at least 4-5
years to develop efficiency improvements within the protocol, to develop the
LN to take many of these transactions off-chain, and for network
infrastructure to be significantly improved (and anything else this
ecosystem can come up with).

(let me know if any of these calculations are off)