From: Mark Friedenbach
Date: Fri, 8 May 2015 13:33:53 -0700
To: Matt Whitlock, Bitcoin Development
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

It is my professional opinion that raising the block size by merely adjusting a constant, without any sort of feedback mechanism, would be a dangerous and foolhardy thing to do. We are custodians of a multi-billion-dollar asset, and it falls upon us to weigh the consequences of our own actions against the combined value of the entire bitcoin ecosystem. Ideally we would take no action for which we are not absolutely certain of the ramifications, given the information that can be made available to us. But of course that is not always possible: there are unknown-unknowns, time pressures, and known-unknowns where information has too high a marginal cost. So where certainty is unobtainable, we must instead hedge against unwanted outcomes.

The proposal to raise the block size now by redefining a constant carries with it risks associated with infrastructure scaling, centralization pressures, and delaying the necessary development of a constraint-based fee economy. It also simply kicks the can down the road on settling these issues, because a larger but realistic hard limit must still exist, meaning a future hard fork may still be required.

But whatever new hard limit is chosen, there is also a real possibility that it may be too high. The standard response is that imposing a lower block size limit is a soft-fork change, which miners could make with a minimal amount of coordination.
This is, however, undermined by the unfortunate reality that so many mining operations are absentee-run businesses, or run by individuals without a strong background in bitcoin protocol policy, or with interests that are not well aligned with other users or holders of bitcoin. We cannot rely on miners being vigilant about issues as they develop, or able to respond in the appropriate fashion that someone with full domain knowledge and an objective perspective would.

The alternative, then, is to have some sort of dynamic block size limit controller, ideally one which applies a cost to raising the block size in a way that preserves the decentralization and/or long-term stability features we care about. I will now describe one such proposal:

  * For each block, the miner is allowed to select a different difficulty (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, and this miner-selected difficulty is used for the proof-of-work check. In addition to adjusting the hashcash target, selecting a different difficulty also raises or lowers the maximum block size for that block by a function of the difference in difficulty. So increasing the difficulty of the block by an additional 25% raises the block limit for that block from 100% of the current limit to 125%, and lowering the difficulty by 10% would likewise lower the maximum block size for that block from 100% to 90% of the current limit. For simplicity I will assume a linear identity transform as the function, but a quadratic or other function with compounding marginal cost may be preferred.

  * The default maximum block size limit is then adjusted at regular intervals. For simplicity I will assume an adjustment at the end of each 2016-block interval, at the same time that difficulty is adjusted, but there is no reason these have to be aligned.
The adjustment algorithm itself is either selection of the median, or perhaps some sort of weighted average that respects the "middle majority." There would of course be limits on how quickly the block size limit can be adjusted in any one period, just as there are min/max limits on the difficulty adjustment.

  * To prevent perverse mining incentives, the original difficulty without adjustment is used in the aggregate-work calculation for selecting the most-work chain, and the allowable miner-selected adjustment to difficulty would have to be tightly constrained.

These rules create an incentive environment in which raising the block size has a real cost: a more difficult hashcash target for the same subsidy reward. For rational miners that cost must be counterbalanced by additional fees provided in the larger block. This allows the block size to increase, but only within the confines of a self-supporting fee economy.

When the subsidy goes away, or is reduced to an insignificant fraction of the block reward, this incentive structure goes away with it. Hopefully by that time we will have sufficient information to set a hard block size maximum via soft fork. But in the meantime, the block size limit controller constrains the maximum allowed block size to a range supported by fees on the network, providing an emergency relief valve that we can be assured will only be used at significant cost.

Mark Friedenbach

* There have over time been various discussions on the bitcointalk forums about dynamically adjusting block size limits. The true origin of the idea is unclear at this time (citations would be appreciated!), but a form of it was implemented in Bytecoin / Monero using subsidy burning to increase the block size. That approach has various limitations. These were corrected in Greg Maxwell's suggestion to adjust the difficulty/nBits field directly, which also has the added benefit of providing incentive for bidirectional movement during the subsidy period.
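For concreteness, the per-block tradeoff and the periodic retarget described above can be sketched in a few lines of Python. Every name, constant, and clamping rule below is my own illustrative assumption; the proposal does not fix exact parameters, and this is not Bitcoin Core code.

```python
# Illustrative sketch only; all constants and names are hypothetical
# assumptions, not part of the proposal or of Bitcoin Core.

def allowed_block_size(base_limit, difficulty_delta, max_flex=0.25):
    """Linear identity transform: a +25% difficulty bump raises this
    block's size limit to 125% of the default; a -10% cut lowers it
    to 90%."""
    assert -max_flex <= difficulty_delta <= max_flex
    return int(base_limit * (1.0 + difficulty_delta))

def worth_flexing(subsidy, base_fees, extra_fees, difficulty_delta):
    """A rational miner flexes difficulty upward only when the marginal
    fees in the larger block outweigh the reduced chance of winning the
    block (expected reward per unit work scales as reward / difficulty)."""
    flexed = (subsidy + base_fees + extra_fees) / (1.0 + difficulty_delta)
    return flexed > (subsidy + base_fees)

def retarget_default_limit(chosen_limits, old_limit, max_step=0.25):
    """End-of-period adjustment: take the median of the per-block limits
    miners actually selected, clamped like a difficulty retarget."""
    ordered = sorted(chosen_limits)
    median = ordered[len(ordered) // 2]
    return int(min(max(median, old_limit * (1 - max_step)),
                   old_limit * (1 + max_step)))
```

As a worked example of the incentive: with a 25 BTC subsidy and 0.5 BTC of fees in a normal block, flexing up by 25% pays only if the additional transactions carry more than roughly 6.4 BTC in fees; that is the sense in which larger blocks must be supported by a self-sustaining fee economy.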
The description in this email and any errors are my own.

On Fri, May 8, 2015 at 12:20 AM, Matt Whitlock wrote:

> Between all the flames on this list, several ideas were raised that did
> not get much attention. I hereby resubmit these ideas for consideration and
> discussion.
>
> - Perhaps the hard block size limit should be a function of the actual
> block sizes over some trailing sampling period. For example, take the
> median block size among the most recent 2016 blocks and multiply it by 1.5.
> This allows Bitcoin to scale up gradually and organically, rather than
> having human beings guessing at what is an appropriate limit.
>
> - Perhaps the hard block size limit should be determined by a vote of the
> miners. Each miner could embed a desired block size limit in the coinbase
> transactions of the blocks it publishes. The effective hard block size
> limit would be that size having the greatest number of votes within a
> sliding window of most recent blocks.
>
> - Perhaps the hard block size limit should be a function of block-chain
> length, so that it can scale up smoothly rather than jumping immediately to
> 20 MB. This function could be linear (anticipating a breakdown of Moore's
> Law) or quadratic.
>
> I would be in support of any of the above, but I do not support Mike
> Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
> road without actually solving the problem, and it does so in a
> controversial (step function) way.
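The first of the quoted ideas, a limit derived from a trailing window of actual block sizes, can be sketched similarly. The window length and the 1.5x multiplier come from the quoted text; the function name and everything else are my own illustrative assumptions.

```python
# Hypothetical sketch of a trailing-median hard limit; the 2016-block
# window and 1.5x multiplier follow the quoted suggestion.

def trailing_limit(recent_sizes, window=2016, multiplier=1.5):
    """Hard limit = 1.5 times the median block size among the most
    recent 2016 blocks, so the cap grows only as miners actually
    produce larger blocks."""
    ordered = sorted(recent_sizes[-window:])
    median = ordered[len(ordered) // 2]
    return int(median * multiplier)
```

If recent blocks hover around 400 KB, the cap sits near 600 KB and rises organically with real usage rather than by a human-chosen step.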