Date: Wed, 1 Jul 2015 03:15:15 -0400
From: Michael Naber <mickeybob@gmail.com>
To: Adam Back <adam@cypherspace.org>, Peter Todd <pete@petertodd.org>
Cc: bitcoin-dev@lists.linuxfoundation.org
Message-ID: <CALgxB7sqQPvXUX-7g3xVOb8EarXrJkjjpuhSbhP+i2h8vut49g@mail.gmail.com>
Subject: [bitcoin-dev] Reaching consensus on policy to continually increase
 block size limit as hardware improves, and a few other critical issues


This is great: Adam agrees that we should scale the block size limit
upward at our discretion, within the limits of technology, and keep doing
so as hardware improves. Peter and others: what stands in the way of
broader consensus on this?


We also agree on a lot of other important things:
-- block size is not a free variable
-- there are trade-offs between node requirements and block size
-- those trade-offs have impacts on decentralization
-- it is important to keep decentralization strong
-- computing technology is currently not easily capable of running a global
transaction network where every transaction is broadcast to every node
-- we may need some solution (perhaps lightning / hub and spoke / other
things) that can help with this

We likely also agree that:
-- whatever that solution may be, we want Bitcoin to be the "hub" / core of
it
-- this hub needs to exhibit the characteristic of globally aware global
consensus, where every node knows about (awareness) and agrees on
(consensus) every transaction
-- Critically, the Bitcoin Core Goal: the goal of Bitcoin Core is to build
the "best" globally aware global consensus network, recognizing there are
complex trade-offs in doing this.


There are a few important things we still don't agree on, though. Our
disagreement on these things is keeping us from making progress toward
the goal of Bitcoin Core. It is critical that we address the following
points of disagreement. Please help build agreement on the issues below by
sharing your thoughts:

1) Some believe that limiting capacity will keep fees, and therefore
hash-rate, high, and that we need to limit capacity to have a "healthy fee
market".

Think of the airplane analogy: If some day technology exists to ship a
hundred million people (transactions) on a plane (block) then do you really
want to fight to outlaw those planes? Airlines are regulated so they have
to pay to screen each passenger to a minimum standard, so even if the plane
has unlimited capacity, they still have to pay to meet minimum security for
each passenger.

Just as we can set the block size limit, we can "regulate the airline
security requirements" and set a minimum fee for the sake of security.
If technology allows 100,000 transactions per second in 25 years, and we
set the minimum fee at one penny, then each block carries a minimum of
$600,000 in fees. Miners should be OK with that, and so should everyone
else.
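
Here's that back-of-the-envelope math spelled out in a few lines of
Python (the 100,000 tx/s, one-penny fee, and ten-minute block interval
are the assumed inputs from the paragraph above):

    TX_PER_SECOND = 100_000        # assumed throughput in 25 years
    SECONDS_PER_BLOCK = 600        # ~10 minutes between blocks
    MIN_FEE_USD = 0.01             # the "one penny" minimum fee

    tx_per_block = TX_PER_SECOND * SECONDS_PER_BLOCK
    fee_floor = tx_per_block * MIN_FEE_USD
    print(f"{tx_per_block:,} tx/block -> ${fee_floor:,.0f} minimum in fees")
    # prints: 60,000,000 tx/block -> $600,000 minimum in fees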


2) Some believe that it is better, for (a) network reliability and (b)
validation of transaction integrity, to have every user run a "full node"
in order to use Bitcoin Core.

I don't agree with this. I'll break it into two pieces: network
reliability and transaction integrity.

Network Reliability


Imagine you're setting up an email server for a big company. You decide to
set up a main server and two fail-over servers. Somebody says that they're
really concerned about reliability and asks you to add another couple of
fail-over servers, so you agree. But at some point there's limited benefit
to adding more servers, and there's real cost: all those servers need to
stay in sync with one another, they need to be maintained, and so on. And
there's limited return: how likely is it, really, that all of those
servers go down at once?

Bitcoin is obviously different from corporate email servers. In one sense,
you've got miners and volunteer nodes rather than centrally managed ones,
so nodes are much more likely to go down. But at the end of the day, is our
up-time really going to be that much better when you have a million nodes
versus a few thousand?

Cloud storage providers copy your data a half dozen times across a few
different data centers, but they don't copy it half a million times. At
some point the added redundancy doesn't matter for reliability. We just
don't need millions of nodes to participate in a broadcast network to
ensure network reliability.

Transaction Integrity

Think of open source software: you trust it because you know it can be
audited easily, but you probably don't take the time to audit every piece
of open source software you use yourself. And so it is with Bitcoin:
people need to be able to easily validate the blockchain, but they don't
need to validate it every time they use it, and they certainly don't need
to validate it when using Bitcoin on their Apple Watches.

If I can lease a server in a data center for a few hours at fifty cents an
hour to validate the blockchain, then the total cost for me to
independently validate the blockchain is just a couple of dollars. Compare
that to my cost to independently validate other parts of the system --
like the source code! Where's the real cost here?
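
In concrete terms, assuming "a few hours" means about four at the
fifty-cent rate above:

    SERVER_USD_PER_HOUR = 0.50     # leased data-center server
    VALIDATION_HOURS = 4           # assumed duration of "a few hours"

    cost = SERVER_USD_PER_HOUR * VALIDATION_HOURS
    print(f"Independent validation: ~${cost:.2f}")
    # Independent validation: ~$2.00 -- "just a couple dollars"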

If the goal of decentralization is to ensure transaction integrity and
network reliability, then we just don't need lots of nodes or every user
running a node to meet that goal. If the goal of decentralization is
something else: what is it?

3) Some believe that we should make Bitcoin Core run as high-memory,
server-grade software rather than software for people's desktops.

I think this is a great idea.

The impact on the goals of decentralization from limiting which hardware
nodes can run on will be minimal compared with the huge gains in capacity.
Why does increasing the capacity of Bitcoin Core matter when we can
"increase capacity" by moving to hub and spoke / lightning? We might as
well ask why growing more apples matters if we can grow more oranges
instead.

Hub and spoke and lightning are useful means of making lower-cost
transactions, but they're not the same as Bitcoin Core. Stick to the goal:
the goal of Bitcoin Core is to build the "best" globally aware global
consensus network, recognizing there are complex trade-offs in doing this.

Hub and spoke and lightning could be great when you want lower fees and
don't really care about global awareness. Poker chips are great when
you're in a casino. We don't talk about lightning networks to the guy who
designs poker chips, and we shouldn't be talking about them to the guy who
builds globally aware consensus networks either.

Do people even want increased capacity when they can use hub and spoke /
lightning? If you think they might be willing to pay $600,000 every ten
minutes for it (see above) then yes. Increase capacity, and let the market
decide if that capacity gets used.



On Tue, Jun 30, 2015 at 3:54 PM, Adam Back <adam@cypherspace.org> wrote:

> Not that I'm arguing against scaling within tech limits - I agree we
> can and should - but note block-size is not a free variable.  The
> system is a balance of factors, interests and incentives.
>
> As Greg said here
>
> https://www.reddit.com/r/Bitcoin/comments/3b0593/to_fork_or_not_to_fork/cshphic?context=3
> there are multiple things we should usefully do with increased
> bandwidth:
>
> a) improve decentralisation and hence security/policy
> neutrality/fungibility (which is quite weak right now by a number of
> measures)
> b) improve privacy (privacy features tend to consume bandwidth, eg see
> the Confidential Transactions feature) or more incremental features.
> c) increase throughput
>
> I think some of the within-tech-limits bandwidth should be
> pre-allocated to decentralisation improvements given a) above.
>
> And I think that we should also see work to improve decentralisation
> with better pooling protocols that people are working on, to remove
> some of the artificial centralisation in the system.
>
> Secondly on the interests and incentives - miners also play an
> important part of the ecosystem and have gone through some lean times,
> they may not be overjoyed to hear a plan to just whack the block-size
> up to 8MB.  While it's true (within some limits) that miners could
> collectively keep blocks smaller, there is the ongoing reality that
> someone else can break ranks and take any fee, however de minimis,
> if there is a huge excess of space relative to current demand, and
> drive fees to zero for a few years.  A major thing even preserving
> fees is wallet defaults, which could be overridden (plus protocol
> velocity/fee limits).
>
> I think solutions that see growth scale more smoothly - like Jeff
> Garzik's and Greg Maxwell's and Gavin Andresen's (though Gavin's
> starts with a step) are far less likely to create perverse unforeseen
> side-effects.  Well we can foresee this particular effect, but the
> market and game theory can surprise you so I think you generally want
> the game-theory & market effects to operate within some more smoothly
> changing caps, with some user or miner mutual control of the cap.
>
> So to be concrete here are some hypotheticals (unvalidated numbers):
>
> a) X MB cap with miner policy limits (simple, lasts a while)
> b) starting at 1MB and growing to 2*X MB cap with 10%/year growth
> limiter + policy limits
> c) starting at 1MB and growing to 3*X MB cap with 15%/year growth
> limiter + Jeff Garzik's miner vote.
> d) starting at 1MB and growing to 4*X MB cap with 20%/year growth
> limiter + Greg Maxwell's flexcap
>
> I think it would be good to see some tests of achievable network
> bandwidth on a range of networks, but as an illustration say X is 2MB.
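>
> As a rough sketch -- unvalidated like the numbers above, and assuming
> X is 2MB with annual compounding -- of how long each limiter would
> take to grow from 1MB to its cap:
>
>     import math
>
>     def years_to_cap(start_mb, growth, cap_mb):
>         # years for start_mb growing at `growth`/year to reach cap_mb
>         return math.log(cap_mb / start_mb) / math.log(1 + growth)
>
>     for label, growth, cap_mb in (("b", 0.10, 4), ("c", 0.15, 6),
>                                   ("d", 0.20, 8)):
>         print(f"({label}) 1MB -> {cap_mb}MB at {growth:.0%}/yr: "
>               f"~{years_to_cap(1, growth, cap_mb):.0f} years")
>
>     # (b) ~15 years, (c) ~13 years, (d) ~11 years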
>
> Rationale being the weaker the signalling mechanism between users and
> user demanded size (in most models communicated via miners), the more
> risk something will go in an unforeseen direction and hence the lower
> the cap and more conservative the growth curve.
>
> 15% growth limiter is not Nielsen's law by intent.  Akamai have data
> on what they serve, and it's more like 15% per annum, but very
> variable by country
>
> http://www.akamai.com/stateoftheinternet/soti-visualizations.html#stoi-graph
> CISCO expect home DSL to double in 5 years
> (
> http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html
> ), which is about the same number (1.15^5 is roughly 2).
>
> (Thanks to Rusty for data sources for 15% number).
>
> This also supports the claim I have made a few times here, that it is
> not realistic to support massive growth without algorithmic
> improvement from Lightning-like or extension-block-like opt-in
> systems.  People who propose that we ramp block sizes to create big
> headroom are, I think from what has been said over time (often
> without advertising it clearly), actually assuming and being OK with
> the idea that full nodes move into data-centers, period, and that
> small business/power-user validation becomes a thing of the distant
> past.
> Further the aggressive auto-growth risks seeing that trend continuing
> into higher tier data-centers with negative implications for
> decentralisation.  The odd proponent seems OK with even that too.
>
> Decentralisation is key to Bitcoin's security model and its
> differentiating properties.  I think those aggressive growth numbers
> stray into the zone of losing efficiency.  By which I mean in
> scalability or privacy systems if you make a trade-off too far, it
> becomes time to re-assess what you're doing.  For example at that level
> of centralisation, alternative designs are more network efficient,
> while achieving the same effective (weak) decentralisation.  In
> Bitcoin I see this as a strong argument not to push things to that
> extreme: the core functionality must remain for Lightning and other
> scaling approaches to remain secure by using Bitcoin as a secure
> anchor.  If we heavily centralise and weaken the security of the main
> Bitcoin chain, there remains nothing secure to build on.
>
> Therefore I think it's more appropriate for high scale to rely on
> lightning, or on semi-centralised trade-offs in the side-chain
> model or similar, where the higher risk of centralisation is opt-in
> and not exposed back (due to the security firewall) to the Bitcoin
> network itself.
>
> People who would like to try the higher-tier data-center,
> high-bandwidth throughput route should in my opinion run that
> experiment as a layer-2 side-chain or analogous.  There are a few ways
> to do that.  And it would be appropriate to my mind that we discuss
> them here also.
>
> An experiment like that could run in parallel with lightning, maybe it
> could be done faster, or offer different trade-offs, so could be an
> interesting and useful thing to see work on.
>
> > On Tue, Jun 30, 2015 at 12:25 PM, Peter Todd <pete@petertodd.org> wrote:
> >> Which of course raises another issue: if that was the plan, then all you
> >> can do is double capacity, with no clear way to scaling beyond that.
> >> Why bother?
>
> A secondary function can be market signalling - market evidence that
> throughput can increase, and that there is a technical process that is
> effectively working on it.  While people may not all understand the
> trade-offs and decentralisation work that should happen in parallel,
> nor the Lightning protocol's expected properties - they can appreciate
> perceived progress and an evidently functioning process.  Kind of a
> weak rationale, from a purely technical perspective, but it may have
> some value, and is certainly less risky than a unilateral fork.
>
> As I recall Gavin has said things about this area before also
> (demonstrate throughput progress to the market).
>
> Another factor that people have raised, which I fairly much agree
> with, is that if we can choose something conservative that there is
> wide-spread support for, it can be safer to do it with moderate
> lead time.  Then if there is an implied 3-6mo lead time we are maybe
> projecting ahead a bit further on block-size utilisation.  Of course
> the risk is we overshoot demand but there probably should be some
> balance between that risk and the risk of doing a more rushed change
> that requires system wide upgrade of all non-SPV software, where
> stragglers risk losing money.
>
> As well as scaling block-size within tech limits, we should include a
> commitment to improve decentralisation, and I think any proposal
> should be reasonably well analysed in terms of bandwidth assumptions
> and game-theory.  eg In IETF documents they have a security
> considerations section, and sometimes a privacy section.  In BIPs
> maybe we need a security, privacy and decentralisation/fungibility
> section.
>
> Adam
>
> NB some new list participants may not be aware that miners are
> imposing local policy limits eg at 750kB and that a 250kB policy
> existed in the past, that those limits saw utilisation, and that they
> were unilaterally and unevenly increased.  I'm not sure if anyone has a clear
> picture of what limits are imposed by hash-rate even today.  That's
> why Pieter posed the question - are we already at the policy limit -
> maybe the blocks we're seeing are closely tracking policy limits, if
> someone mapped that and asked miners by hash-rate etc.
>
> On 30 June 2015 at 18:35, Michael Naber <mickeybob@gmail.com> wrote:
> > Re: Why bother doubling capacity? So that we could have 2x more network
> > participants of course.
> >
> > Re: No clear way to scaling beyond that: Computers are getting more
> > capable aren't they? We'll increase capacity along with hardware.
> >
> > It's a good thing to scale the network if technology permits it. How
> > can you argue with that?
>
