From: Michael Naber
To: Adam Back, Peter Todd
Cc: bitcoin-dev@lists.linuxfoundation.org
Date: Wed, 1 Jul 2015 03:15:15 -0400
Subject: [bitcoin-dev] Reaching consensus on policy to continually increase block size limit as hardware improves, and a few other critical issues

This is great: Adam agrees that we should scale the block size limit upward at our discretion within the limits of technology, and continually so as hardware improves. Peter and others: what stands in the way of broader consensus on this?

We also agree on a lot of other important things:

-- block size is not a free variable
-- there are trade-offs between node requirements and block size
-- those trade-offs have impacts on decentralization
-- it is important to keep decentralization strong
-- computing technology is currently not easily capable of running a global transaction network where every transaction is broadcast to every node
-- we may need some solution (perhaps lightning / hub and spoke / other things) that can help with this

We likely also agree that:

-- whatever that solution may be, we want bitcoin to be the "hub" / core of it
-- this hub needs to exhibit the characteristic of globally aware global consensus, where every node knows about (awareness) and agrees on (consensus) every transaction
-- critically, the Bitcoin Core goal: the goal of Bitcoin Core is to build the "best" globally aware global consensus network, recognizing there are complex trade-offs in doing this.

There are a few important things we still don't agree on, though.
Our disagreement on these things is causing us to have trouble making progress toward the goal of Bitcoin Core. It is critical we address the following points of disagreement. Please help get agreement on these issues by sharing your thoughts:

1) Some believe that fees, and therefore hash-rate, will be kept high by limiting capacity, and that we need to limit capacity to have a "healthy fee market".

Think of the airplane analogy: if some day technology exists to ship a hundred million people (transactions) on a plane (block), do you really want to fight to outlaw those planes? Airlines are regulated so that they have to pay to screen each passenger to a minimum standard, so even if the plane has unlimited capacity, they still have to pay to meet minimum security for each passenger.

Just as we can set the block limit, we can "regulate the airline security requirements" and set a minimum fee for the sake of security. If technology allows running 100,000 transactions per second in 25 years, and we set the minimum fee to one penny, then each block is worth a minimum of $600,000 (100,000 tx/s x 600 seconds per block x $0.01; a quick sketch of the numbers follows below). Miners should be ok with that, and so should everyone else.

2) Some believe that it is better for (a) network reliability and (b) validation of transaction integrity to have every user run a "full node" in order to use Bitcoin Core.

I don't agree with this. I'll break it into the two pieces of network reliability and transaction integrity.

Network Reliability

Imagine you're setting up an email server for a big company. You decide to set up a main server and two fail-over servers. Somebody says that they're really concerned about reliability and asks you to add another couple of fail-over servers, so you agree. But at some point there's limited benefit to adding more servers, and there's real cost: all those servers need to keep in sync with one another, they need to be maintained, etc. And there's limited return: how likely is it, really, that all those servers are going to go down?

Bitcoin is obviously different from corporate email servers. In one sense, you've got miners and volunteer nodes rather than centrally managed ones, so nodes are much more likely to go down. But at the end of the day, is our up-time really going to be that much better with a million nodes versus a few thousand?

Cloud storage copies your data a half dozen times to a few different data centers, but it doesn't copy it half a million times. At some point the added redundancy doesn't matter for reliability. We just don't need millions of nodes participating in a broadcast network to ensure network reliability.

Transaction Integrity

Think of open source software: you trust it because you know it can be audited easily, but you probably don't take the time to audit every piece of open source software you use yourself. And so it is with Bitcoin: people need to be able to easily validate the blockchain, but they don't need to validate it every time they use it, and they certainly don't need to validate it when using Bitcoin on their Apple Watches.

If I can lease a server in a data center for a few hours at fifty cents an hour to validate the blockchain, then the total cost for me to independently validate the blockchain is just a couple of dollars. Compare that to my cost to independently validate other parts of the system -- like the source code! Where's the real cost here?
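To make the back-of-envelope numbers above concrete, here is a minimal sketch in plain Python. The throughput, fee, and hosting figures are the assumptions stated in the text above, not measurements:

```python
# Back-of-envelope figures from the argument above; all inputs are
# assumptions taken from the text, not measurements.

BLOCK_INTERVAL_SECONDS = 600  # target of roughly ten minutes per block

def minimum_fees_per_block(tx_per_second: float, min_fee_usd: float) -> float:
    """Fee floor per block if every transaction pays at least min_fee_usd."""
    return tx_per_second * BLOCK_INTERVAL_SECONDS * min_fee_usd

def one_off_validation_cost(hours: float, usd_per_hour: float) -> float:
    """Cost to lease a server long enough to independently validate the chain."""
    return hours * usd_per_hour

# 100,000 tx/s at a one-penny minimum fee -> $600,000 of fees per block
print(minimum_fees_per_block(100_000, 0.01))   # 600000.0

# A few hours of a fifty-cent-per-hour server -> a couple of dollars
print(one_off_validation_cost(4, 0.50))        # 2.0
```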
If the goal of decentralization is to ensure transaction integrity and network reliability, then we just don't need lots of nodes, or every user running a node, to meet that goal. If the goal of decentralization is something else, what is it?

3) Some believe that we should make Bitcoin Core run as high-memory, server-grade software rather than software for people's desktops.

I think this is a great idea.

The impact on the goals of decentralization from limiting which hardware nodes can run on will be minimal compared with the huge gains in capacity. Why does increasing the capacity of Bitcoin Core matter when we can "increase capacity" by moving to hub and spoke / lightning? Maybe we should ask why growing more apples matters if we can grow more oranges instead.

Hub and spoke and lightning are useful means of making lower-cost transactions, but they're not the same as Bitcoin Core. Stick to the goal: the goal of Bitcoin Core is to build the "best" globally aware global consensus network, recognizing there are complex trade-offs in doing this.

Hub and spoke and lightning could be great when you want lower fees and don't really care about global awareness. Poker chips are great when you're in a casino. We don't talk about lightning networks to the guy who designs poker chips, and we shouldn't be talking about them to the guy who builds globally aware consensus networks either.

Do people even want increased capacity when they can use hub and spoke / lightning? If you think they might be willing to pay $600,000 every ten minutes for it (see above), then yes. Increase capacity, and let the market decide whether that capacity gets used.

On Tue, Jun 30, 2015 at 3:54 PM, Adam Back wrote:
> Not that I'm arguing against scaling within tech limits - I agree we can
> and should - but note block-size is not a free variable. The system is a
> balance of factors, interests and incentives.
>
> As Greg said here
> https://www.reddit.com/r/Bitcoin/comments/3b0593/to_fork_or_not_to_fork/cshphic?context=3
> there are multiple things we should usefully do with increased bandwidth:
>
> a) improve decentralisation and hence security / policy neutrality /
> fungibility (which is quite weak right now by a number of measures)
> b) improve privacy (privacy features tend to consume bandwidth, eg see the
> Confidential Transactions feature) or more incremental features.
> c) increase throughput
>
> I think some of the within-tech-limits bandwidth should be pre-allocated
> to decentralisation improvements, given a) above.
>
> And I think that we should also see work to improve decentralisation with
> better pooling protocols that people are working on, to remove some of the
> artificial centralisation in the system.
>
> Secondly, on the interests and incentives - miners also play an important
> part of the ecosystem and have gone through some lean times; they may not
> be overjoyed to hear a plan to just whack the block-size up to 8MB. While
> it's true (within some limits) that miners could collectively keep blocks
> smaller, there is the ongoing reality that someone else can break ranks
> and take any fee, however de minimis, if there is a huge excess of space
> relative to current demand, and drive fees to zero for a few years. A
> major thing even preserving fees is wallet defaults, which could be
> overridden (plus protocol velocity/fee limits).
>
> I think solutions that see growth scale more smoothly - like Jeff
> Garzik's, Greg Maxwell's and Gavin Andresen's (though Gavin's starts with
> a step) - are far less likely to create perverse unforeseen side-effects.
> Well, we can foresee this particular effect, but the market and game
> theory can surprise you, so I think you generally want the game-theory and
> market effects to operate within some more smoothly changing caps, with
> some user or miner mutual control of the cap.
>
> So to be concrete, here are some hypotheticals (unvalidated numbers):
>
> a) X MB cap with miner policy limits (simple, lasts a while)
> b) starting at 1MB and growing to 2*X MB cap with 10%/year growth
> limiter + policy limits
> c) starting at 1MB and growing to 3*X MB cap with 15%/year growth
> limiter + Jeff Garzik's miner vote
> d) starting at 1MB and growing to 4*X MB cap with 20%/year growth
> limiter + Greg Maxwell's flexcap
>
> I think it would be good to see some tests of achievable network bandwidth
> on a range of networks, but as an illustration say X is 2MB.
>
> The rationale being: the weaker the signalling mechanism between users and
> user-demanded size (in most models communicated via miners), the more risk
> something will go in an unforeseen direction, and hence the lower the cap
> and the more conservative the growth curve.
>
> The 15% growth limiter is, by intent, not Nielsen's law. Akamai have data
> on what they serve, and it's more like 15% per annum, but very variable by
> country:
> http://www.akamai.com/stateoftheinternet/soti-visualizations.html#stoi-graph
> CISCO expect home DSL to double in 5 years
> (http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html),
> which is about the same number.
>
> (Thanks to Rusty for data sources for the 15% number.)
>
> This also supports the claim I have made a few times here, that it is not
> realistic to support massive growth without algorithmic improvement from
> Lightning-like or extension-block-like opt-in systems. People who are
> proposing that we ramp block sizes to create big headroom are, I think
> from what has been said over time, often without advertising it clearly,
> actually assuming and being ok with the idea that full nodes move into
> data-centers, period, and that small-business / power-user validation
> becomes a thing of the distant past. Further, the aggressive auto-growth
> risks seeing that trend continue into higher-tier data-centers, with
> negative implications for decentralisation. The odd proponent seems OK
> with even that too.
>
> Decentralisation is key to Bitcoin's security model and its
> differentiating properties. I think those aggressive growth numbers stray
> into the zone of losing efficiency. By which I mean: in scalability or
> privacy systems, if you make a trade-off too far, it becomes time to
> re-assess what you're doing. For example, at that level of centralisation,
> alternative designs are more network efficient while achieving the same
> effective (weak) decentralisation. In Bitcoin I see this as a strong
> argument not to push things to that extreme; the core functionality must
> remain for Lightning and other scaling approaches to remain secure by
> using Bitcoin as a secure anchor. If we heavily centralise and weaken the
> security of the main Bitcoin chain, there remains nothing secure to build
> on.
>
> Therefore I think it's more appropriate for high scale to rely on
> lightning, or on semi-centralised trade-offs in the side-chain model or
> similar, where the higher risk of centralisation is opt-in and not exposed
> back (due to the security firewall) to the Bitcoin network itself.
>
> People who would like to try the higher-tier data-center, high-bandwidth
> throughput route should in my opinion run that experiment as a layer-2
> side-chain or analogous. There are a few ways to do that, and it would be
> appropriate to my mind that we discuss them here also.
>
> An experiment like that could run in parallel with lightning; maybe it
> could be done faster, or offer different trade-offs, so it could be an
> interesting and useful thing to see work on.
>
> > On Tue, Jun 30, 2015 at 12:25 PM, Peter Todd wrote:
> >> Which of course raises another issue: if that was the plan, then all
> >> you can do is double capacity, with no clear way to scaling beyond
> >> that. Why bother?
>
> A secondary function can be market signalling - market evidence that
> throughput can increase, and that there is a technical process that is
> effectively working on it. While people may not all understand the
> trade-offs and decentralisation work that should happen in parallel, nor
> the Lightning protocol's expected properties, they can appreciate
> perceived progress and an evidently functioning process. Kind of a weak
> rationale, from a purely technical perspective, but it may have some
> value, and is certainly less risky than a unilateral fork.
>
> As I recall, Gavin has said things about this area before also
> (demonstrate throughput progress to the market).
>
> Another factor that people have raised, which I fairly much agree with, is
> that if we can choose something conservative that there is wide-spread
> support for, it can be safer to do it with moderate lead time. Then, if
> there is an implied 3-6 month lead time, we are maybe projecting ahead a
> bit further on block-size utilisation. Of course the risk is that we
> overshoot demand, but there probably should be some balance between that
> risk and the risk of doing a more rushed change that requires system-wide
> upgrade of all non-SPV software, where stragglers risk losing money.
>
> As well as scaling block-size within tech limits, we should include a
> commitment to improve decentralisation, and I think any proposal should be
> reasonably well analysed in terms of bandwidth assumptions and
> game-theory. eg In IETF documents they have a security considerations
> section, and sometimes a privacy section. In BIPs maybe we need a
> security, privacy and decentralisation/fungibility section.
>
> Adam
>
> NB some new list participants may not be aware that miners are imposing
> local policy limits, eg at 750kB, and that a 250kB policy existed in the
> past; those limits saw utilisation and were unilaterally increased
> unevenly. I'm not sure if anyone has a clear picture of what limits are
> imposed by hash-rate even today. That's why Pieter posed the question -
> are we already at the policy limit? - maybe the blocks we're seeing are
> closely tracking policy limits, if someone mapped that and asked miners by
> hash-rate etc.
>
> On 30 June 2015 at 18:35, Michael Naber wrote:
> > Re: Why bother doubling capacity? So that we could have 2x more network
> > participants, of course.
> >
> > Re: No clear way to scaling beyond that: Computers are getting more
> > capable, aren't they? We'll increase capacity along with hardware.
> >
> > It's a good thing to scale the network if technology permits it. How can
> > you argue with that?
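As a rough illustration of the growth-limited caps Adam sketches in his hypotheticals above, here is a minimal sketch. The 1MB starting point, the X = 2MB illustration, and the growth rates are taken from his list; the simple "compound r% per year until hitting the hard cap" model is an assumption for illustration, not his exact proposal:

```python
# Minimal sketch of the growth-limited caps in the hypotheticals above.
# Assumptions: start at 1 MB, X = 2 MB, and simple annual compounding up to
# a hard ceiling; the real proposals differ in mechanism and detail.

def capped_growth(start_mb: float, rate: float, max_mb: float, years: int):
    """Block-size cap per year under a percentage growth limiter."""
    return [min(start_mb * (1 + rate) ** y, max_mb) for y in range(years + 1)]

X = 2.0  # MB, illustrative value from the email

scenarios = {
    "b) 10%/yr to 2*X MB": capped_growth(1.0, 0.10, 2 * X, 15),
    "c) 15%/yr to 3*X MB": capped_growth(1.0, 0.15, 3 * X, 15),
    "d) 20%/yr to 4*X MB": capped_growth(1.0, 0.20, 4 * X, 15),
}

for name, caps in scenarios.items():
    # e.g. scenario c) hits its 6 MB ceiling after roughly 13 years
    print(name, [round(c, 2) for c in caps])
```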