From: Aaron Voisine <voisine@gmail.com>
To: Gavin Andresen <gavinandresen@gmail.com>
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Date: Fri, 29 May 2015 10:45:39 -0700
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

> miners would definitely be squeezing out transactions / putting
> pressure to increase transaction fees

I'd just like to reiterate that transactions getting "squeezed out"
(failure after a lengthy period of uncertainty) is a radical change from
the current behavior of the network. There are plenty of avenues to
create fee pressure without resorting to such a drastic change in how
the network works today.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Thu, May 28, 2015 at 8:53 AM, Gavin Andresen <gavinandresen@gmail.com> wrote:
> On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote:
>> Between all the flames on this list, several ideas were raised that
>> did not get much attention. I hereby resubmit these ideas for
>> consideration and discussion.
>>
>> - Perhaps the hard block size limit should be a function of the
>> actual block sizes over some trailing sampling period. For example,
>> take the median block size among the most recent 2016 blocks and
>> multiply it by 1.5. This allows Bitcoin to scale up gradually and
>> organically, rather than having human beings guessing at what is an
>> appropriate limit.
>
> A lot of people like this idea, or something like it. It is nice and
> simple, which is really important for consensus-critical code.
>
> With this rule in place, I believe there would be more "fee pressure"
> (miners would be creating smaller blocks) today. I created a couple
> of histograms of block sizes to infer what policy miners are ACTUALLY
> following today with respect to block size:
>
> Last 1,000 blocks:
> http://bitcoincore.org/~gavin/sizes_last1000.html
>
> Notice a big spike at 750K -- the default size for Bitcoin Core. This
> graph might be misleading, because transaction volume or fees might
> not be high enough over the last few days to fill blocks to whatever
> limit miners are willing to mine.
>
> So I graphed a time when (according to statoshi.info) there WERE a
> lot of transactions waiting to be confirmed:
> http://bitcoincore.org/~gavin/sizes_357511.html
>
> That might also be misleading, because it is possible there were a
> lot of transactions waiting to be confirmed because miners who choose
> to create small blocks got lucky and found more blocks than normal.
> In fact, it looks like that is what happened: more smaller-than-normal
> blocks were found, and the memory pool backed up.
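A histogram like the ones linked above can be reproduced against any full
node. A rough Python sketch, assuming a local Bitcoin Core node with
JSON-RPC enabled (the URL and credentials are placeholders):

    import requests  # any HTTP client works; requests keeps this short
    from collections import Counter

    URL = "http://rpcuser:rpcpass@127.0.0.1:8332"  # placeholder credentials

    def rpc(method, *params):
        reply = requests.post(URL, json={"method": method,
                                         "params": list(params), "id": 1})
        reply.raise_for_status()
        return reply.json()["result"]

    # collect the sizes (in bytes) of the last 1,000 blocks
    tip = rpc("getblockcount")
    sizes = [rpc("getblock", rpc("getblockhash", h))["size"]
             for h in range(tip - 999, tip + 1)]

    # crude text histogram in 50K buckets
    buckets = Counter(s // 50_000 for s in sizes)
    for b in sorted(buckets):
        print(f"{b * 50:4d}K-{(b + 1) * 50}K: {buckets[b]} blocks")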
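Matt's trailing-median rule quoted at the top is only a few lines. A
minimal sketch, assuming recent_sizes holds the byte sizes of recent
blocks, oldest first (the name is illustrative, not from any
implementation):

    from statistics import median

    def trailing_median_cap(recent_sizes, window=2016, multiplier=1.5):
        # hard cap = 1.5x the median size of the last 2016 blocks
        return int(multiplier * median(recent_sizes[-window:]))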
> So: what if we had a dynamic maximum size limit based on recent
> history?
>
> The average block size is about 400K, so a 1.5x rule would make the
> max block size 600K; miners would definitely be squeezing out
> transactions / putting pressure to increase transaction fees. Even a
> 2x rule (implying 800K max blocks) would, today, be squeezing out
> transactions / putting pressure to increase fees.
>
> Using a median size instead of an average means the size can increase
> or decrease more quickly. For example, imagine the rule is "median of
> last 2016 blocks" and 49% of miners are producing 0-size blocks and
> 51% are producing max-size blocks. The median is max-size, so the 51%
> have total control over making blocks bigger. Swap the roles, and the
> median is min-size.
>
> Because of that, I think using an average is better -- it means the
> max size will change (up or down) more slowly.
>
> I also think 2016 blocks is too long, because transaction volumes
> change more quickly than that. An average over 144 blocks (the last
> 24 hours) would be better able to handle increased transaction volume
> around major holidays, and would also be able to react more quickly
> if an economically irrational attacker attempted to flood the network
> with fee-paying transactions.
>
> So my straw-man proposal would be: max size 2x average size over last
> 144 blocks, calculated at every block.
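The 49%/51% scenario above is easy to verify with concrete numbers;
here 1,000,000 bytes stands in for max-size, purely for illustration:

    from statistics import mean, median

    sizes = [0] * 49 + [1_000_000] * 51   # 49% empty, 51% max-size blocks
    print(median(sizes))  # 1000000   -- the 51% majority sets the cap outright
    print(mean(sizes))    # 510000.0  -- the average moves only gradually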
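That straw-man rule is equally small. A minimal sketch, under the same
recent_sizes assumption as the earlier snippet:

    from statistics import mean

    def trailing_average_cap(recent_sizes, window=144, multiplier=2.0):
        # cap = 2x the average size of the last 144 blocks (~24 hours)
        return int(multiplier * mean(recent_sizes[-window:]))

    # Recalculated at every block: with today's ~400K average this yields
    # a cap of roughly 800K, matching the 2x figure above.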
> There are a couple of other changes I'd pair with that consensus
> change:
>
> + Make the default mining policy for Bitcoin Core neutral -- have its
> target block size be the average size, so miners that don't care will
> "go along with the people who do care."
>
> + Use something like Greg's formula for size instead of
> bytes-on-the-wire, to discourage bloating the UTXO set.
>
> ---------
>
> When I've proposed (privately, to the other core committers) some
> dynamic algorithm, the objection has been "but that gives miners
> complete control over the max block size."
>
> I think that worry is unjustified right now -- certainly, until we
> have size-independent new block propagation there is an incentive for
> miners to keep their blocks small, and we see miners creating small
> blocks even when there are fee-paying transactions waiting to be
> confirmed.
>
> I don't even think it will be a problem if/when we do have
> size-independent new block propagation, because I think the
> combination of the random timing of block-finding plus a dynamic
> limit as described above will create a healthy system.
>
> If I'm wrong, then it seems to me the miners will have a very strong
> incentive to, collectively, impose whatever rules are necessary
> (maybe a soft-fork to put a hard cap on block size) to make the
> system healthy again.
>
> --
> Gavin Andresen