authorJim Phillips <jim@ergophobia.org>2015-06-01 15:02:31 -0500
committerbitcoindev <bitcoindev@gnusha.org>2015-06-01 20:03:13 +0000
commit4c03ea2284d9a6cb90eb4bd0ea8544cb62f61fb1 (patch)
treed96e80deaf2d04a21ae9b12c29c43d5ab5191580
parentfbb67a89b09efb06b724a0b7692645bd68437ff4 (diff)
downloadpi-bitcoindev-4c03ea2284d9a6cb90eb4bd0ea8544cb62f61fb1.tar.gz
pi-bitcoindev-4c03ea2284d9a6cb90eb4bd0ea8544cb62f61fb1.zip
Re: [Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?
-rw-r--r--  5f/74da5257b2edb520ae44f851d98f6b74e82005  476
1 files changed, 476 insertions, 0 deletions
diff --git a/5f/74da5257b2edb520ae44f851d98f6b74e82005 b/5f/74da5257b2edb520ae44f851d98f6b74e82005
new file mode 100644
index 000000000..46454519a
--- /dev/null
+++ b/5f/74da5257b2edb520ae44f851d98f6b74e82005
@@ -0,0 +1,476 @@
+Received: from sog-mx-4.v43.ch3.sourceforge.com ([172.29.43.194]
+ helo=mx.sourceforge.net)
+ by sfs-ml-1.v29.ch3.sourceforge.com with esmtp (Exim 4.76)
+ (envelope-from <jim@ergophobia.org>) id 1YzVvV-0006pw-Cz
+ for bitcoin-development@lists.sourceforge.net;
+ Mon, 01 Jun 2015 20:03:13 +0000
+X-ACL-Warn:
+Received: from mail-wi0-f181.google.com ([209.85.212.181])
+ by sog-mx-4.v43.ch3.sourceforge.com with esmtps (TLSv1:RC4-SHA:128)
+ (Exim 4.76) id 1YzVvR-0004c6-1a
+ for bitcoin-development@lists.sourceforge.net;
+ Mon, 01 Jun 2015 20:03:13 +0000
+Received: by wibdq8 with SMTP id dq8so39590387wib.1
+ for <bitcoin-development@lists.sourceforge.net>;
+ Mon, 01 Jun 2015 13:03:03 -0700 (PDT)
+X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
+ d=1e100.net; s=20130820;
+ h=x-gm-message-state:mime-version:in-reply-to:references:from:date
+ :message-id:subject:to:cc:content-type;
+ bh=FkkO5ykCT652pOEHw+mCcT8FmhK3mqH3EbTIKx1oRaU=;
+ b=DN68CA/gvuV7Bvj+bEGNY7aWNCxCLpmz6AMfgoi/LXa6kPl2/BHi5zknFuWijzDb57
+ OmFUSesJbEcfZaEHcqiJ38PPCl/nUy1bCTbM5BQt+mYF4hgfycReSVoTFcCuXqoMQJZw
+ Tbuxz6maXKgxkYCLg1rwANzRucmWljbZ/CFDihxiiN50jbEUr5wymPRRKPJTZz/2Srt0
+ EzTEQQ9TI5DCrpe1snWv1/1fiX8ZEAuYKXygfgAemBe8yyjyu7wqO/Z4iKSCyA3l+czB
+ f53VXwAmvSmCXxJ3wsqyayRai8DMn0dvGaq9zlsRKM0LZO2nLD9ZuHEc/mxyZpSMgnsG
+ hnMg==
+X-Gm-Message-State: ALoCoQmbgeD5zwTKaEqMJ//WC00mro9tUId1Nyu5Z88np4jD2/uRrURRRY6CNfeHS/Ao1qfHHZTU
+X-Received: by 10.194.100.42 with SMTP id ev10mr42327694wjb.50.1433188982960;
+ Mon, 01 Jun 2015 13:03:02 -0700 (PDT)
+MIME-Version: 1.0
+Received: by 10.194.246.69 with HTTP; Mon, 1 Jun 2015 13:02:31 -0700 (PDT)
+In-Reply-To: <CABHVRKSm08T7ik4Ozd-WgMTrkT2c0waKDwg6Ma+ZMTWWeevfAw@mail.gmail.com>
+References: <CANe1mWz_wDAFL2piyLeOxEnMxHCQaTnGLQA6f9jZvLEmbMj6Zw@mail.gmail.com>
+ <CABHVRKSm08T7ik4Ozd-WgMTrkT2c0waKDwg6Ma+ZMTWWeevfAw@mail.gmail.com>
+From: Jim Phillips <jim@ergophobia.org>
+Date: Mon, 1 Jun 2015 15:02:31 -0500
+Message-ID: <CANe1mWzqo0EdEpuQB6FgaOVTrYGvB6bQT-oz+bm_hoi=3WT4xw@mail.gmail.com>
+To: Stephen Morse <stephencalebmorse@gmail.com>
+Content-Type: multipart/alternative; boundary=089e0160aa4839ca9a05177a50fb
+X-Spam-Score: 1.0 (+)
+X-Spam-Report: Spam Filtering performed by mx.sourceforge.net.
+ See http://spamassassin.org/tag/ for more details.
+ 1.0 HTML_MESSAGE BODY: HTML included in message
+ 0.0 T_REMOTE_IMAGE Message contains an external image
+X-Headers-End: 1YzVvR-0004c6-1a
+Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
+Subject: Re: [Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?
+X-BeenThere: bitcoin-development@lists.sourceforge.net
+X-Mailman-Version: 2.1.9
+Precedence: list
+List-Id: <bitcoin-development.lists.sourceforge.net>
+List-Unsubscribe: <https://lists.sourceforge.net/lists/listinfo/bitcoin-development>,
+ <mailto:bitcoin-development-request@lists.sourceforge.net?subject=unsubscribe>
+List-Archive: <http://sourceforge.net/mailarchive/forum.php?forum_name=bitcoin-development>
+List-Post: <mailto:bitcoin-development@lists.sourceforge.net>
+List-Help: <mailto:bitcoin-development-request@lists.sourceforge.net?subject=help>
+List-Subscribe: <https://lists.sourceforge.net/lists/listinfo/bitcoin-development>,
+ <mailto:bitcoin-development-request@lists.sourceforge.net?subject=subscribe>
+X-List-Received-Date: Mon, 01 Jun 2015 20:03:13 -0000
+
+--089e0160aa4839ca9a05177a50fb
+Content-Type: text/plain; charset=UTF-8
+
+> 1. To Maintaining Consensus
+>
+> There has to be clearly defined rules about which blocks are valid and
+> which are not for the network to agree. Obviously no node will accept a
+> block that is 10 million terabytes, it would be near impossible to download
+> even if it were valid. So where do you set the limit? And what if one nodes
+> sets their limit differently than other nodes on the network? If this were
+> to happen, the network would no longer be in consensus about which blocks
+> were valid when a block was broadcasted that met some nodes' size limits
+> and did not meet others.
+> Setting a network limit on the maximum block size ensures that everyone is
+> in agreement about which blocks are valid and which are not, so that
+> consensus is achieved.
+
+
+It is as impossible to upload a 10 million terabyte block as it is to
+download it. But even at a more realistic scale, say a 2 GB block, there
+are other factors that prevent a rogue miner from flooding the network
+with large blocks -- chiefly the need to get that block propagated
+before it is orphaned. A simple solution to these large
+blocks is for relays to set configurable limits on the size of blocks that
+they will relay. If the rogue miner can't get his megablock propagated
+before it is orphaned, his attack will not succeed. It doesn't make the
+block invalid, just useless as a DoS tool. And over time, relays can raise
+the limits they set on block sizes they will propagate according to what
+they can handle. As more and more relays accept larger and larger blocks,
+the true maximum block size can grow naturally and not require a hard fork.
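+
+To sketch what such a relay policy might look like, here is a rough,
+purely hypothetical bit of Python -- not Bitcoin Core code, and the
+max_relay_block_size setting is made up for illustration:
+
+# Hypothetical relay-side block size policy (illustration only).
+# Each relay operator picks a limit it can afford and raises it as
+# hardware and bandwidth allow.
+class RelayPolicy:
+    def __init__(self, max_relay_block_size=1_000_000):
+        # Per-node, operator-configurable; a made-up setting, not an
+        # existing Bitcoin Core option.
+        self.max_relay_block_size = max_relay_block_size
+
+    def should_relay(self, block_size_bytes):
+        # An oversized block isn't invalid; this node simply won't
+        # propagate it, starving a rogue "megablock" of relays.
+        return block_size_bytes <= self.max_relay_block_size
+
+policy = RelayPolicy(max_relay_block_size=2_000_000)  # this operator accepts 2 MB
+print(policy.should_relay(750_000))        # True  -- typical block, relayed
+print(policy.should_relay(2_000_000_000))  # False -- 2 GB block, dropped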
+
+> 2. To Avoid (further) Centralization of Pools
+>
+> Suppose we remove the 1 MB cap entirely. A large pool says to itself, "I
+> wish I had a larger percentage of the network hashrate so I could make more
+> profit."
+>
+> Then they realize that since there's no block size limit, they can make a
+> block that is 4 GB large by filling it with nonsense. They and a few other
+> pools have bandwidth large enough to download a block of this size in a
+> reasonable time, but a smaller pool does not. The tiny pool is then stuck
+> trying to download a block that is too large, and continuing to mine on
+> their previous block until they finish downloading the new block. This
+> means the small pool is now wasting their time mining blocks that are
+> likely never to be accepted even if they were solved, since they wouldn't
+> be in the 'longest' chain. Since their hash power is wasted, the original
+> pool operator now has effectively forced smaller pools out of the network,
+> and simultaneously increased their percentage of the network hashrate.
+
+
+
+This is yet another issue that can be addressed by allowing relays to
+restrict propagation. Relays are just as impacted by large blocks filled
+with nonsense as small miners are. If a relay downloads a block and sees
+that it's full of junk, or that it comes from a miner notorious for
+producing bad blocks, it can refuse to relay it. If a bad block doesn't
+propagate, it can't hurt anyone. Large miners also typically have to use
+static IPs, since anonymizing networks like Tor aren't geared towards
+handling that kind of traffic. They can't afford to have the reputation
+of the IPs they release blocks from tarnished, so why would they risk
+getting blacklisted by relays?
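+
+A reputation-based relay filter could be as simple as this purely
+hypothetical sketch (Python for illustration only; nothing like this
+exists in Bitcoin Core today):
+
+# Hypothetical reputation filter (illustration only): a relay remembers
+# which block sources have fed it junk and stops propagating their blocks.
+class ReputationFilter:
+    def __init__(self):
+        self.blacklist = set()  # sources this node will no longer relay for
+
+    def record_bad_block(self, source_ip):
+        # Called when a downloaded block turns out to be full of junk.
+        self.blacklist.add(source_ip)
+
+    def should_relay(self, source_ip, looks_like_junk):
+        if source_ip in self.blacklist or looks_like_junk:
+            self.record_bad_block(source_ip)
+            return False
+        return True
+
+f = ReputationFilter()
+print(f.should_relay("203.0.113.7", looks_like_junk=True))   # False, and now blacklisted
+print(f.should_relay("203.0.113.7", looks_like_junk=False))  # False -- reputation tarnished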
+
+> 3. To Make Full Nodes Feasible
+>
+> Essentially, larger blocks means fewer people that can download and verify
+> the chain, which results fewer people willing to run full nodes and store
+> all of the blockchain data.
+>
+> If there were no block size limit, malicious persons could artificially
+> bloat the block with nonsense and increase the server costs for everyone
+> running a full node, in addition to making it infeasible for people with
+> just home computers to even keep up with the network.
+> The goal is to find a block size limit with the right tradeoff between
+> resource restrictions (so that someone on their home computer can still run
+> a full node), and functional requirements (being able to process X number
+> of transactions per second). Eventually, transactions will likely be done
+> off-chain using micropayment channels, but no such solution currently
+> exists.
+
+
+This same attack could be achieved simply by sending lots of spam
+transactions and bloating the UTXO database or the mempool. In fact, given
+that block storage is substantially cheaper than UTXO/mempool storage, I'd
+be far more concerned with that type of attack. And this particular attack
+vector has already been largely mitigated by pruning and could be further
+mitigated by allowing relays to decide which blocks they propagate.
+
+
+--
+*James G. Phillips IV*
+<https://plus.google.com/u/0/113107039501292625391/posts>
+<http://www.linkedin.com/in/ergophobe>
+
+*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
+-- David Ogilvy*
+
+ *This message was created with 100% recycled electrons. Please think twice
+before printing.*
+
+On Mon, Jun 1, 2015 at 2:02 PM, Stephen Morse <stephencalebmorse@gmail.com>
+wrote:
+
+> This exact question came up on the Bitcoin Stack Exchange once. I gave an
+> answer here:
+> http://bitcoin.stackexchange.com/questions/37292/whats-the-purpose-of-a-maximum-block-size/37303#37303
+>
+> On Mon, Jun 1, 2015 at 2:32 PM, Jim Phillips <jim@ergophobia.org> wrote:
+>
+>> Ok, I understand at least some of the reason that blocks have to be kept
+>> to a certain size. I get that blocks which are too big will be hard to
+>> propagate by relays. Miners will have more trouble uploading the large
+>> blocks to the network once they've found a hash. We need block size
+>> constraints to create a fee economy for the miners.
+>>
+>> But these all sound to me like issues that affect some, but not others.
+>> So it seems to me like it ought to be a configurable setting. We've already
+>> witnessed with last week's stress test that most miners aren't even
+>> creating 1MB blocks but are still using the software defaults of 730k. If
+>> there are configurable limits, why does there have to be a hard limit?
+>> Can't miners just use the configurable limit to decide what size blocks
+>> they can afford to and are thus willing to create? They could just as
+>> easily use that to create a fee economy. If the miners with the most
+>> hashpower are not willing to mine blocks larger than 1 or 2 megs, then they
+>> are able to slow down confirmations of transactions. It may take several
+>> blocks before a miner willing to include a particular transaction finds a
+>> block. This would actually force miners to compete with each other and find
+>> a block size naturally instead of having it forced on them by the protocol.
+>> Relays would be able to participate in that process by restricting the
+>> miners ability to propagate large blocks. You know, like what happens in a
+>> FREE MARKET economy, without burdensome regulation which can be manipulated
+>> through politics? Isn't that what's really happening right now? Different
+>> political factions with different agendas are fighting over how best to
+>> regulate the Bitcoin protocol.
+>>
+>> I know the limit was originally put in place to prevent spamming. But
+>> that was when we were mining with CPUs and just beginning to see the
+>> occasional GPU which could take control over the network and maliciously
+>> spam large blocks. But with ASIC mining now catching up to Moore's Law,
+>> that's not really an issue anymore. No one malicious entity can really just
+>> take over the network now without spending more money than it's worth --
+>> and that's just going to get truer with time as hashpower continues to
+>> grow. And it's not like the hard limit really does anything anymore to
+>> prevent spamming. If a spammer wants to create thousands or millions of
+>> transactions, a hard limit on the block size isn't going to stop him..
+>> He'll just fill up the mempool or UTXO database instead of someone's block
+>> database.. And block storage media is generally the cheapest storage.. I
+>> mean they could be written to tape and be just as valid as if they're
+>> stored in DRAM. Combine that with pruning, and block storage costs are
+>> almost a non-issue for anyone who isn't running an archival node.
+>>
+>> And can't relay nodes just configure a limit on the size of blocks they
+>> will relay? Sure they'd still need to download a big block occasionally,
+>> but that's not really that big a deal, and they're under no obligation to
+>> propagate it.. Even if it's a 2GB block, it'll get downloaded eventually.
+>> It's only if it gets to the point where the average home connection is too
+>> slow to keep up with the transaction & block flow that there's any real
+>> issue there, and that would happen regardless of how big the blocks are. I
+>> personally would much prefer to see hardware limits act as the bottleneck
+>> than to introduce an artificial bottleneck into the protocol that has to be
+>> adjusted regularly. The software and protocol are TECHNICALLY capable of
+>> scaling to handle the world's entire transaction set. The real issue with
+>> scaling to this size is limitations on hardware, which are regulated by
+>> Moore's Law. Why do we need arbitrary soft limits? Why can't we allow
+>> Bitcoin to grow naturally within the ever increasing limits of our
+>> hardware? Is it because nobody will ever need more than 640k of RAM?
+>>
+>> Am I missing something here? Is there some big reason that I'm
+>> overlooking why there has to be some hard-coded limit on the block size
+>> that affects the entire network and creates ongoing issues in the future?
+>>
+>> --
+>>
+>> *James G. Phillips IV*
+>> <https://plus.google.com/u/0/113107039501292625391/posts>
+>>
+>> *"Don't bunt. Aim out of the ball park. Aim for the company of
+>> immortals." -- David Ogilvy*
+>>
+>> *This message was created with 100% recycled electrons. Please think
+>> twice before printing.*
+>>
+>>
+>> ------------------------------------------------------------------------------
+>>
+>> _______________________________________________
+>> Bitcoin-development mailing list
+>> Bitcoin-development@lists.sourceforge.net
+>> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
+>>
+>>
+>
+
+--089e0160aa4839ca9a05177a50fb--
+
+