From: Jim Phillips
Date: Mon, 1 Jun 2015 13:32:24 -0500
To: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: [Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?

Ok, I understand at least some of the reasons that blocks have to be kept to a certain size. I get that blocks which are too big will be hard for relays to propagate. Miners will have more trouble uploading large blocks to the network once they've found a hash. We need block size constraints to create a fee economy for the miners.

But these all sound to me like issues that affect some, but not others. So it seems to me like it ought to be a configurable setting. We've already witnessed with last week's stress test that most miners aren't even creating 1MB blocks but are still using the software default of 730k. If there are configurable limits, why does there have to be a hard limit? Can't miners just use the configurable limit to decide what size blocks they can afford to, and are thus willing to, create? They could just as easily use that to create a fee economy. If the miners with the most hashpower are not willing to mine blocks larger than 1 or 2 megs, then they are able to slow down confirmation of transactions: it may take several blocks before a miner willing to include a particular transaction finds one. This would force miners to compete with each other and arrive at a block size naturally, instead of having one forced on them by the protocol. Relays would be able to participate in that process by restricting the miners' ability to propagate large blocks.
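To make the idea concrete, here is a minimal sketch of the miner-side behavior being described: each miner fills blocks with the highest fee-per-byte transactions up to its *own* configured limit, so low-fee transactions wait longer without any protocol-level cap. All names, sizes, and fee values are illustrative assumptions, not Bitcoin Core's actual mining code.

```python
# Hypothetical sketch: a miner's locally configured block size limit driving
# a fee market. Transactions and limits below are made-up example values.

def build_block(mempool, max_block_bytes=750_000):
    """Greedily fill a block with the highest fee-per-byte transactions,
    stopping at the miner's self-chosen size limit."""
    # Sort by fee density (satoshis per byte), best first
    txs = sorted(mempool, key=lambda tx: tx["fee"] / tx["size"], reverse=True)
    block, used = [], 0
    for tx in txs:
        if used + tx["size"] <= max_block_bytes:
            block.append(tx)
            used += tx["size"]
    return block, used

mempool = [
    {"txid": "a", "size": 250, "fee": 5000},  # 20 sat/byte
    {"txid": "b", "size": 500, "fee": 2500},  # 5 sat/byte
    {"txid": "c", "size": 400, "fee": 400},   # 1 sat/byte
]
block, used = build_block(mempool, max_block_bytes=800)
# The 1 sat/byte transaction doesn't fit and waits for a later (or larger) block
print([tx["txid"] for tx in block], used)
```

A miner willing to build bigger blocks simply raises its own `max_block_bytes`; competition between miners with different limits is what would set the effective block size.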
You know, like what happens in a FREE MARKET economy, without burdensome regulation which can be manipulated through politics? Isn't that what's really happening right now? Different political factions with different agendas are fighting over how best to regulate the Bitcoin protocol.

I know the limit was originally put in place to prevent spamming. But that was when we were mining with CPUs and just beginning to see the occasional GPU that could take control of the network and maliciously spam large blocks. With ASIC mining now catching up to Moore's Law, that's not really an issue anymore. No single malicious entity can take over the network without spending more money than it's worth -- and that's only going to become truer with time as hashpower continues to grow. And it's not like the hard limit really does anything anymore to prevent spamming. If a spammer wants to create thousands or millions of transactions, a hard limit on the block size isn't going to stop him. He'll just fill up the mempool or UTXO database instead of someone's block database. And block storage media is generally the cheapest storage: blocks could be written to tape and be just as valid as if they were stored in DRAM. Combine that with pruning, and block storage costs are almost a non-issue for anyone who isn't running an archival node.

And can't relay nodes just configure a limit on the size of blocks they will relay? Sure, they'd still need to download a big block occasionally, but that's not really that big a deal, and they're under no obligation to propagate it. Even if it's a 2GB block, it'll get downloaded eventually. It's only if it gets to the point where the average home connection is too slow to keep up with the transaction and block flow that there's any real issue, and that would happen regardless of how big the blocks are.
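The relay-side policy described above can be sketched the same way: a node downloads whatever valid block arrives, but applies its own configurable limit when deciding whether to propagate it, and the cost of a worst-case block is just bandwidth arithmetic. The 8MB relay limit and the 10 Mbps home-link speed below are assumed example numbers, not anything from the protocol.

```python
# Hypothetical sketch of a relay node's local policy. Limits and link speeds
# are illustrative assumptions.

def download_seconds(block_bytes, link_mbps):
    """Time to fetch a block over a link measured in megabits per second."""
    return (block_bytes * 8) / (link_mbps * 1_000_000)

def should_relay(block_bytes, relay_limit_bytes=8_000_000):
    """A node is under no obligation to propagate blocks above its own limit."""
    return block_bytes <= relay_limit_bytes

two_gb = 2 * 1024**3
# A 2GB block on a 10 Mbps home connection: roughly half an hour to download
print(f"2GB block on a 10 Mbps link: {download_seconds(two_gb, 10):.0f} s")
print("relay a 2GB block under an 8MB policy?", should_relay(two_gb))
```

The point of the sketch is that even an extreme block "gets downloaded eventually"; the relay limit is each node's own lever, not a network-wide rule.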
I personally would much prefer to see hardware limits act as the bottleneck than to introduce an artificial bottleneck into the protocol that has to be adjusted regularly. The software and protocol are TECHNICALLY capable of scaling to handle the world's entire transaction set. The real issue with scaling to this size is the limitations of hardware, which are governed by Moore's Law. Why do we need arbitrary soft limits? Why can't we allow Bitcoin to grow naturally within the ever-increasing limits of our hardware? Is it because nobody will ever need more than 640k of RAM?

Am I missing something here? Is there some big reason that I'm overlooking why there has to be some hard-coded limit on the block size that affects the entire network and creates ongoing issues in the future?

--
*James G. Phillips IV*

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals." -- David Ogilvy*

*This message was created with 100% recycled electrons. Please think twice before printing.*