From: Jim Phillips
Date: Mon, 1 Jun 2015 15:02:31 -0500
To: Stephen Morse
Cc: Bitcoin Dev
Subject: Re: [Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?

> 1. To Maintain Consensus
>
> There have to be clearly defined rules about which blocks are valid and
> which are not for the network to agree. Obviously no node will accept a
> block that is 10 million terabytes; it would be nearly impossible to
> download even if it were valid. So where do you set the limit? And what if
> one node sets its limit differently than other nodes on the network? If
> this were to happen, the network would no longer be in consensus about
> which blocks were valid when a block was broadcast that met some nodes'
> size limits and did not meet others'.
>
> Setting a network limit on the maximum block size ensures that everyone is
> in agreement about which blocks are valid and which are not, so that
> consensus is achieved.

It is as impossible to upload a 10 million terabyte block as it is to
download it. But even on a more realistic scale, of, say, a 2 GB block,
there are other factors that prevent a rogue miner from being able to flood
the network with large blocks -- such as the difficulty of getting that
block propagated before it can be orphaned. A simple solution to these
large blocks is for relays to set configurable limits on the size of blocks
that they will relay. If the rogue miner can't get his megablock propagated
before it is orphaned, his attack will not succeed. It doesn't make the
block invalid, just useless as a DoS tool.
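As a rough sketch of the configurable relay limit described above (a toy
illustration, not actual Bitcoin Core code -- the Block type, the 2 MB
default, and should_relay are all assumed names):

```python
# Hypothetical sketch of a relay applying its own configurable size cap.
# Block, MAX_RELAY_BLOCK_SIZE, and should_relay are illustrative names only.

from dataclasses import dataclass

MAX_RELAY_BLOCK_SIZE = 2 * 1024 * 1024  # operator-chosen cap, e.g. 2 MB


@dataclass
class Block:
    height: int
    size_bytes: int


def should_relay(block: Block, cap: int = MAX_RELAY_BLOCK_SIZE) -> bool:
    # An oversized block is not treated as invalid; this node simply
    # declines to spend bandwidth propagating it.
    return block.size_bytes <= cap


assert should_relay(Block(height=360000, size_bytes=1_000_000))          # 1 MB: relayed
assert not should_relay(Block(height=360001, size_bytes=2_000_000_000))  # 2 GB: ignored
```

A megablock that no relay will carry is orphaned before it spreads, which
is exactly the DoS-neutralizing effect described above.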
And over time, relays can raise the limits they set on the block sizes they
will propagate according to what they can handle. As more and more relays
accept larger and larger blocks, the true maximum block size can grow
naturally without requiring a hard fork.

> 2. To Avoid (further) Centralization of Pools
>
> Suppose we remove the 1 MB cap entirely. A large pool says to itself, "I
> wish I had a larger percentage of the network hashrate so I could make
> more profit."
>
> Then they realize that since there's no block size limit, they can make a
> block that is 4 GB large by filling it with nonsense. They and a few
> other pools have bandwidth large enough to download a block of this size
> in a reasonable time, but a smaller pool does not. The tiny pool is then
> stuck trying to download a block that is too large, and continuing to
> mine on their previous block until they finish downloading the new block.
> This means the small pool is now wasting their time mining blocks that
> are likely never to be accepted even if they were solved, since they
> wouldn't be in the 'longest' chain. Since their hash power is wasted, the
> original pool operator has now effectively forced smaller pools out of
> the network, and simultaneously increased their percentage of the network
> hashrate.

This is yet another issue that can be addressed by allowing relays to
restrict propagation. Relays are just as affected by large blocks filled
with nonsense as small miners are. If a relay downloads a block and sees
that it's full of junk, or that it comes from a miner notorious for
producing bad blocks, it can refuse to relay it. If a bad block doesn't
propagate, it can't hurt anyone. Large miners also typically have to use
static IPs; anonymizing networks like Tor aren't geared toward handling
that type of traffic. They can't afford to have the reputation of the IPs
they release blocks from tarnished, so why would they risk getting
blacklisted by relays?

> 3.
> To Make Full Nodes Feasible
>
> Essentially, larger blocks mean fewer people who can download and verify
> the chain, which results in fewer people willing to run full nodes and
> store all of the blockchain data.
>
> If there were no block size limit, malicious persons could artificially
> bloat blocks with nonsense and increase the server costs for everyone
> running a full node, in addition to making it infeasible for people with
> just home computers to even keep up with the network.
>
> The goal is to find a block size limit with the right tradeoff between
> resource restrictions (so that someone on their home computer can still
> run a full node) and functional requirements (being able to process X
> number of transactions per second). Eventually, transactions will likely
> be done off-chain using micropayment channels, but no such solution
> currently exists.

This same attack could be achieved simply by sending lots of spam
transactions and bloating the UTXO database or the mempool. In fact, given
that block storage is substantially cheaper than UTXO/mempool storage, I'd
be far more concerned with that type of attack. And this particular attack
vector has already been largely mitigated by pruning, and could be further
mitigated by allowing relays to decide which blocks they propagate.

--

*James G. Phillips IV*

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

*This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, Jun 1, 2015 at 2:02 PM, Stephen Morse wrote:

> This exact question came up on the Bitcoin Stack Exchange once. I gave an
> answer here:
> http://bitcoin.stackexchange.com/questions/37292/whats-the-purpose-of-a-maximum-block-size/37303#37303
>
> On Mon, Jun 1, 2015 at 2:32 PM, Jim Phillips wrote:
>
>> Ok, I understand at least some of the reasons that blocks have to be
>> kept to a certain size.
>> I get that blocks which are too big will be hard for relays to
>> propagate. Miners will have more trouble uploading large blocks to the
>> network once they've found a hash. We need block size constraints to
>> create a fee economy for the miners.
>>
>> But these all sound to me like issues that affect some, but not others.
>> So it seems to me like it ought to be a configurable setting. We've
>> already witnessed with last week's stress test that most miners aren't
>> even creating 1 MB blocks but are still using the software default of
>> 730k. If there are configurable limits, why does there have to be a
>> hard limit? Can't miners just use the configurable limit to decide what
>> size blocks they can afford to create and are thus willing to create?
>> They could just as easily use that to create a fee economy. If the
>> miners with the most hashpower are not willing to mine blocks larger
>> than 1 or 2 megs, then they are able to slow down confirmations of
>> transactions. It may take several blocks before a miner willing to
>> include a particular transaction finds a block. This would actually
>> force miners to compete with each other and find a block size naturally
>> instead of having it forced on them by the protocol. Relays would be
>> able to participate in that process by restricting the miners' ability
>> to propagate large blocks. You know, like what happens in a FREE MARKET
>> economy, without burdensome regulation which can be manipulated through
>> politics? Isn't that what's really happening right now? Different
>> political factions with different agendas are fighting over how best to
>> regulate the Bitcoin protocol.
>>
>> I know the limit was originally put in place to prevent spamming. But
>> that was when we were mining with CPUs and just beginning to see the
>> occasional GPU, which could take control over the network and
>> maliciously spam large blocks. But with ASIC mining now catching up to
>> Moore's Law, that's not really an issue anymore.
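The fee economy described in the quoted passage -- each miner choosing its
own soft cap and filling blocks by fee rate -- might be sketched as a toy
model like this (all names here are hypothetical, not any real miner's
implementation):

```python
# Toy sketch of miner-side block building under a self-chosen soft cap:
# take transactions in descending fee-per-byte order until the cap is hit.
# Transaction, build_block_template, and the cap value are illustrative only.

from dataclasses import dataclass


@dataclass
class Transaction:
    txid: str
    size_bytes: int
    fee_satoshis: int

    @property
    def fee_rate(self) -> float:
        return self.fee_satoshis / self.size_bytes  # satoshis per byte


def build_block_template(mempool, soft_cap_bytes):
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee_rate, reverse=True):
        if used + tx.size_bytes <= soft_cap_bytes:
            chosen.append(tx)
            used += tx.size_bytes
    return chosen


mempool = [
    Transaction("a", 250, 10_000),  # 40 sat/byte
    Transaction("b", 500, 5_000),   # 10 sat/byte
    Transaction("c", 400, 20_000),  # 50 sat/byte
]
template = build_block_template(mempool, soft_cap_bytes=700)
# Highest fee-rate transactions fit first: "c" then "a"; "b" no longer fits.
assert [t.txid for t in template] == ["c", "a"]
```

Under such a policy, low-fee transactions wait until a miner with a larger
cap (or an emptier block) picks them up -- fee pressure without a
protocol-level limit.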
>> No single malicious entity can really just take over the network now
>> without spending more money than it's worth -- and that's just going to
>> get truer with time as hashpower continues to grow. And it's not like
>> the hard limit really does anything anymore to prevent spamming. If a
>> spammer wants to create thousands or millions of transactions, a hard
>> limit on the block size isn't going to stop him. He'll just fill up the
>> mempool or UTXO database instead of someone's block database. And block
>> storage media is generally the cheapest storage. I mean, blocks could
>> be written to tape and be just as valid as if they were stored in DRAM.
>> Combine that with pruning, and block storage costs are almost a
>> non-issue for anyone who isn't running an archival node.
>>
>> And can't relay nodes just configure a limit on the size of blocks they
>> will relay? Sure, they'd still need to download a big block
>> occasionally, but that's not really that big a deal, and they're under
>> no obligation to propagate it. Even if it's a 2 GB block, it'll get
>> downloaded eventually. It's only if it gets to the point where the
>> average home connection is too slow to keep up with the transaction &
>> block flow that there's any real issue, and that would happen
>> regardless of how big the blocks are. I personally would much prefer to
>> see hardware limits act as the bottleneck than to introduce an
>> artificial bottleneck into the protocol that has to be adjusted
>> regularly. The software and protocol are TECHNICALLY capable of scaling
>> to handle the world's entire transaction set. The real issue with
>> scaling to this size is limitations on hardware, which are governed by
>> Moore's Law. Why do we need arbitrary soft limits? Why can't we allow
>> Bitcoin to grow naturally within the ever-increasing limits of our
>> hardware? Is it because nobody will ever need more than 640k of RAM?
>>
>> Am I missing something here?
>> Is there some big reason that I'm overlooking why there has to be some
>> hard-coded limit on the block size that affects the entire network and
>> creates ongoing issues in the future?
>>
>> --
>>
>> *James G. Phillips IV*
>>
>> *"Don't bunt. Aim out of the ball park. Aim for the company of
>> immortals." -- David Ogilvy*
>>
>> *This message was created with 100% recycled electrons. Please think
>> twice before printing.*
>>
>> _______________________________________________
>> Bitcoin-development mailing list
>> Bitcoin-development@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bitcoin-development