From: Steven Pine <steven.pine@gmail.com>
To: bitcoin-development@lists.sourceforge.net, gavinandresen@gmail.com
Date: Thu, 28 May 2015 14:25:17 -0400
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

My understanding, which is very likely wrong in one way or another, is that transaction size and block size are two slightly different things, but perhaps the difference is so negligible that block size is a fine stand-in for total transaction throughput.

Potentially doubling the block size every day is frankly imprudent. The exponential increases in difficulty, which were often closer to 10% or 20% every 2016 blocks, were and are plenty fast; potentially doubling the block size daily is the mentality I would expect from a startup with a "move fast and break things" motto.

Infrastructure takes time. Not everyone wants to run a node on a virtual Amazon instance; provisioning additional hard drive space and bandwidth can't happen overnight, and trying to plan when the block size from one week to the next is a total mystery would be extremely difficult.

Anyone who has spent time examining mining difficulty increases and their trajectory knows that future planning is very hard; allowing the block size to double daily would make it impossible.
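To put rough numbers on that, here is a back-of-the-envelope sketch in Python (my own illustration: the 1 MB starting size and the 20% figure are assumptions for the example, not taken from any proposal):

    # Compare block-size growth under daily doubling versus a 20%
    # increase per 2016-block retarget period (roughly two weeks).

    def grow(start_mb, factor, periods):
        """Size in MB after `periods` compounding steps."""
        return start_mb * factor ** periods

    # Daily doubling: 1 MB reaches roughly 1 GB in ten days.
    for day in range(11):
        print("day %2d: %7.0f MB" % (day, grow(1, 2, day)))

    # 20% per retarget, the high end of historical difficulty jumps:
    # after 13 periods (about six months), still only ~10.7 MB.
    print("~6 months at 20%% per 2016 blocks: %.1f MB" % grow(1, 1.2, 13))

Ten days of doubling outruns six months of 20% steps by two orders of magnitude, which is the gap that worries me.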
Perhaps a middle way would be a 300% increase every 2016 blocks; that would scale to 20 MB within a month or two.

The problem is that exponential increases seem slow until they seem fast. If the network begins to grow and the block size hits 20 MB, then the next day it is 40 MB, then 80 MB... Small nodes could get swamped within a week or less.
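To sanity-check those timescales, a quick sketch (under my own reading that "a 300% increase" means the size quadruples each period; a straight tripling gives similar numbers):

    # Middle way: quadruple every 2016 blocks (~2 weeks), from 1 MB.
    size_mb = 1.0
    for retarget in range(1, 4):
        size_mb *= 4
        print("retarget %d (~week %d): %4.0f MB" % (retarget, 2 * retarget, size_mb))
    # -> 4, 16, 64 MB: past 20 MB in roughly a month to six weeks.

    # Versus daily doubling once growth kicks in: starting from 20 MB,
    # one week of doublings gives 20 * 2**7 = 2560 MB.
    print("one week of daily doubling from 20 MB: %d MB" % (20 * 2 ** 7))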

As for your point about Christmas: Bitcoin is a global network. Christmas, while widely celebrated, isn't the only holiday, and planning around American buying habits seems short-sighted and no different from developers trying to choose what the right fee pressure is.

On May 28, 2015 1:22 PM, "Gavin Andresen" <gavinandresen@gmail.com> wrote:
>
> On Thu, May 28, 2015 at 12:30 PM, Steven Pine <steven.pine@gmail.com> wrote:
>>
>> I would support a dynamic block size increase as outlined. I have a few questions though.
>>
>> Is scaling by average block size the best and easiest method? Why not scale by transactions confirmed instead? Anyone can write and relay a transaction, and those are what we want to scale for, so why not measure it directly?
>
> What do you mean? Transactions aren't confirmed until they're in a block...
>
>> I would prefer changes every 2016 blocks; it is a well-known change and a reasonable time period for planning on changes. Two weeks is plenty fast, especially at a 50% rate of increase; in a few months the block size could be dramatically larger.
>
> What type of planning do you imagine is necessary?
>
> And have you looked at transaction volumes for credit-card payment networks around Christmas?
>
>> Daily changes to the size seem confusing, especially considering that the max block size will be dipping up and down. Also, if something breaks, trying to fix it in a day seems problematic. The hard-fork database size difference error comes to mind. Finally, daily 50% increases could quickly crowd out smaller nodes if changes happen too quickly to adapt to.
>
> The bottleneck is transaction volume; blocks won't get bigger unless there are fee-paying transactions around to pay for them. What scenario are you imagining where transaction volume increases by 50% a day for a sustained period of time?
>
> --
> Gavin Andresen
