From: Pieter Wuille <pieter.wuille@gmail.com>
To: Gavin Andresen <gavinandresen@gmail.com>
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Date: Thu, 28 May 2015 10:34:32 -0700
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step
	function
Message-ID: <CAPg+sBgf84O9QpppSn=tNqR9jofbfRr02X8xweVgGyFbHQznXA@mail.gmail.com>
In-Reply-To: <CABsx9T3-zxCAagAS0megd06xvG5n-3tUL9NUK9TT3vt7XNL9Tg@mail.gmail.com>
References: <16096345.A1MpJQQkRW@crushinator>
	<CABsx9T3-zxCAagAS0megd06xvG5n-3tUL9NUK9TT3vt7XNL9Tg@mail.gmail.com>


> until we have size-independent new block propagation

I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.

There are several significant improvements likely possible to various
aspects of block propagation, but I don't believe you can make any part
completely size-independent. Perhaps the remaining size-dependent terms in
the total propagation time vanish compared to the link latencies for 1 MB
blocks, but there will be some block size beyond which that is no longer
true, and we need to know where that point lies.

* You can't assume that every transaction is pre-relayed and pre-validated.
This can happen due to non-uniform relay policies (different codebases, and
future things like size-limited mempools), double-spend attempts, and
transactions generated before a block has had time to propagate. You've
previously argued for a policy of not including too-recent transactions,
but that requires a bound on network diameter, and if these late
transactions are profitable, it has exactly the same problem as
size-dependent propagation time: it makes larger blocks non-proportionally
more economic for larger pools.
  * This results in extra bandwidth usage for efficient relay protocols,
and if discrepancy estimation mispredicts the size of IBLT or error
correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block
relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be
cached in advance.

* Block validation with 100% known and pre-validated transactions is not
constant time, due to updates that need to be made to the UTXO set (and
future ideas like UTXO commitments would make this effect an order of
magnitude worse).

* More efficient relay protocols also have higher CPU cost for
encoding/decoding.

Again, none of this is a reason why the block size can't increase. If
availability of hardware with higher bandwidth, faster disk/ram access
times, and faster CPUs increases, we should be able to have larger blocks
with the same propagation profile as smaller blocks with earlier technology.

But we should know how technology scales with larger blocks, and I don't
believe we do, apart from microbenchmarks in laboratory conditions.

-- 
Pieter
On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name>
wrote:

> Between all the flames on this list, several ideas were raised that did
> not get much attention. I hereby resubmit these ideas for consideration and
> discussion.
>
> - Perhaps the hard block size limit should be a function of the actual
> block sizes over some trailing sampling period. For example, take the
> median block size among the most recent 2016 blocks and multiply it by 1.5.
> This allows Bitcoin to scale up gradually and organically, rather than
> having human beings guessing at what is an appropriate limit.
>

A lot of people like this idea, or something like it. It is nice and
simple, which is really important for consensus-critical code.
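
For concreteness, the proposed rule is simple enough to sketch in a few
lines (a toy illustration only, not consensus code; the 1.5 multiplier and
2016-block window are from the proposal quoted above):

```python
def dynamic_limit(recent_sizes, window=2016, multiplier=1.5):
    """Hard cap = multiplier * median block size over the last `window` blocks."""
    sample = sorted(recent_sizes[-window:])
    n = len(sample)
    median = (sample[n // 2] if n % 2 == 1
              else (sample[n // 2 - 1] + sample[n // 2]) / 2)
    return int(median * multiplier)

# Example: if recent blocks hover around 400 KB, the cap becomes 600 KB.
print(dynamic_limit([400_000] * 2016))  # 600000
```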

With this rule in place, I believe there would be more "fee pressure"
(miners would be creating smaller blocks) today. I created a couple of
histograms of block sizes to infer what policy miners are ACTUALLY
following today with respect to block size:

Last 1,000 blocks:
  http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.
This graph might be misleading, because transaction volume or fees might
not be high enough over the last few days to fill blocks to whatever limit
miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of
transactions waiting to be confirmed:
   http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of
transactions waiting to be confirmed because miners who choose to create
small blocks got lucky and found more blocks than normal.  In fact, it
looks like that is what happened: more smaller-than-normal blocks were
found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max
block size 600K; miners would definitely be squeezing out transactions /
putting pressure to increase transaction fees. Even a 2x rule (implying
800K max blocks) would, today, be squeezing out transactions / putting
pressure to increase fees.

Using a median size instead of an average means the size can increase or
decrease more quickly. For example, imagine the rule is "median of last
2016 blocks" and 49% of miners are producing 0-size blocks and 51% are
producing max-size blocks. The median is max-size, so the 51% have total
control over making blocks bigger.  Swap the roles, and the median is
min-size.

Because of that, I think using an average is better-- it means the max size
will change (up or down) more slowly.
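
The 49%/51% scenario above is easy to check numerically (a toy sketch with
illustrative sizes, not consensus code):

```python
from statistics import mean, median

MAX_SIZE = 1_000_000
# 49% of miners produce empty blocks, 51% produce max-size blocks
# (988 + 1028 = 2016 blocks, one retarget period).
blocks = [0] * 988 + [MAX_SIZE] * 1028

# The median jumps straight to the majority's choice...
print(int(median(blocks)))  # 1000000
# ...while the mean moves only in proportion to what each side mines.
print(round(mean(blocks)))  # 509921
```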

I also think 2016 blocks is too long, because transaction volumes change
quicker than that. An average over 144 blocks (last 24 hours) would be
better able to handle increased transaction volume around major holidays,
and would also be able to react more quickly if an economically irrational
attacker attempted to flood the network with fee-paying transactions.

So my straw-man proposal would be:  max size 2x average size over last 144
blocks, calculated at every block.
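
The straw man can be sketched as follows (function name and plumbing are
hypothetical; a real version would live in consensus-critical code):

```python
def max_block_size(last_sizes):
    """Straw-man cap: 2x the average block size over the last 144 blocks
    (roughly 24 hours), recalculated at every block."""
    window = last_sizes[-144:]
    return int(2 * sum(window) / len(window))

# With ~400 KB average blocks over the last day, the cap is 800 KB.
print(max_block_size([400_000] * 144))  # 800000
```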

There are a couple of other changes I'd pair with that consensus change:

+ Make the default mining policy for Bitcoin Core neutral-- have its target
block size be the average size, so miners that don't care will "go along
with the people who do care."

+ Use something like Greg's formula for size instead of bytes-on-the-wire,
to discourage bloating the UTXO set.


---------

When I've proposed (privately, to the other core committers) some dynamic
algorithm the objection has been "but that gives miners complete control
over the max block size."

I think that worry is unjustified right now-- certainly, until we have
size-independent new block propagation there is an incentive for miners to
keep their blocks small, and we see miners creating small blocks even when
there are fee-paying transactions waiting to be confirmed.

I don't even think it will be a problem if/when we do have size-independent
new block propagation, because I think the combination of the random timing
of block-finding plus a dynamic limit as described above will create a
healthy system.

If I'm wrong, then it seems to me the miners will have a very strong
incentive to, collectively, impose whatever rules are necessary (maybe a
soft-fork to put a hard cap on block size) to make the system healthy again.


-- 
Gavin Andresen


------------------------------------------------------------------------------

_______________________________________________
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
