From: Gregory Maxwell <greg@xiph.org>
Date: Mon, 15 May 2017 23:30:59 +0000
Message-ID: <CAAS2fgSfx-sr8hQt_atFw0Qmu_cQFCAc2XCapoNsUuZrPU3O5Q@mail.gmail.com>
To: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: [bitcoin-dev] Validationless mining without transactions


Today someone showed up on IRC suggesting a scheme to improve the
ability of miners to mine without validation while still including
transactions, by shipping around an approximate sketch of the txins that
were used by a block.

I pointed out that what sounded like the exact same scheme had been
previously proposed by Anthony Towns over a year ago; it turned out
that it didn't need any consensus changes, but also wasn't very attractive
because the actual transmission of the block (at least with FBRP or Fibre)
didn't really take any longer... And, of course, mining without validating
does a real number on SPV security assumptions.

But then I realized that the conversation between Anthony and me was
off-list. So-- for posterity...

I think the most interesting thing about this thread is that it gives a
concrete proof that a restriction on collecting transaction fees does not
discourage validationless mining; nor do other proposed consensus changes
make it any easier to include transactions while mining without validation.


Forwarded conversation
Subject: Blockchain verification flag (BIP draft)
------------------------

From: Anthony Towns <aj@erisian.com.au>
Date: Mon, Feb 29, 2016 at 2:13 AM
To: Gregory Maxwell <greg@xiph.org>


On Fri, Dec 04, 2015 at 08:26:22AM +0000, Gregory Maxwell via bitcoin-dev
wrote:
> A significant fraction of hashrate currently mines blocks without
> verifying them for a span of time after a new block shows up on the
> network for economically rational reasons.

Two thoughts related to this. Are they obvious or daft?

a)

Would it make sense to require some demonstration that you've validated
prior blocks? eg, you could demonstrate you've done part of the work
to at least verify signatures from the previous block by including the
sha256 of the concatenation of all the sighash values in the coinbase
transaction -- if you'd already done the sig checking, calculating that
as you went would be pretty cheap, I think. Then make the rule be that
if you set the "validated" bit without including the demonstration of
validation, your block is invalid.

I guess this is more or less what Peter Todd proposed in:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012105.html

b)

It occurred to me while emailing with Matt Corallo, that it's probably
possible to make it easy to generate actually useful blocks while doing
validationless mining, rather than only creating empty blocks.

When creating a block, you:

   - calculate a fixed size (7168 bytes?) bloom filter of the
     prevouts that the block is spending
   - include the sha256 of the final bloom filter as the last output
     in the coinbase
   - enforce the inclusion of that sha256 by soft-fork
   - as part of fast relaying, transmit:
       - 80 byte block header
       - 7168 byte bloom filter
       - < 416 (?) byte merkle path to the coinbase
       - 64 byte sha256 midstate for coinbase up to start of the
         final transaction
       - < 128 byte tail of coinbase including bloom commitment
     (total of 7856 bytes, so less than 8kB)
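The size budget above adds up exactly; a quick arithmetic check (the 416-byte merkle-path bound corresponds to 13 tree levels of 32 bytes each, an assumption about how that figure was derived):

```python
# Quick check of the fast-relay payload budget sketched above.
components = {
    "block header": 80,
    "bloom filter": 7168,
    "merkle path to coinbase (<= 13 levels x 32 bytes)": 13 * 32,
    "coinbase sha256 midstate": 64,
    "coinbase tail incl. bloom commitment": 128,
}
total = sum(components.values())
print(total)  # 7856 bytes, under 8kB
assert total == 7856 and total < 8192
```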

I think that would be enough to verify that the proof-of-work is
committing to the bloom filter, and the bloom filter will then let
you throw out any transactions that could have been included in a block
built on block n-1, but can't be included in block n+1 -- whether they're
included in the new block, or would be double spends. So given that
information you can safely build a new block that's actually full of
transactions on top of the new block, even prior to downloading it in
full, let alone validating it.
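The filtering step above can be sketched as follows. The filter size is the proposal's 7168 bytes; the number of hash functions and the derivation of bit positions are illustrative assumptions, not part of the proposal text:

```python
import hashlib

FILTER_BYTES = 7168                 # fixed size from the proposal
FILTER_BITS = FILTER_BYTES * 8
NUM_HASHES = 9                      # near-optimal for ~4500 prevouts (assumption)

def bloom_positions(prevout: bytes):
    # Derive NUM_HASHES bit positions from double-sha256 of the prevout.
    # The exact hashing scheme here is an illustrative assumption.
    for i in range(NUM_HASHES):
        h = hashlib.sha256(hashlib.sha256(bytes([i]) + prevout).digest()).digest()
        yield int.from_bytes(h[:8], "little") % FILTER_BITS

def build_filter(spent_prevouts):
    # Miner of block n sets one bit per hash for every prevout the block spends.
    bits = bytearray(FILTER_BYTES)
    for p in spent_prevouts:
        for pos in bloom_positions(p):
            bits[pos // 8] |= 1 << (pos % 8)
    return bytes(bits)

def maybe_spent(filt, prevout):
    return all(filt[pos // 8] & (1 << (pos % 8)) for pos in bloom_positions(prevout))

def safe_for_next_block(filt, tx_prevouts):
    # Keep a mempool tx only if none of its inputs might have been spent
    # by the not-yet-downloaded block: false positives merely drop a few
    # valid transactions, they never admit a double spend.
    return not any(maybe_spent(filt, p) for p in tx_prevouts)
```

The safety property is one-sided: a false positive costs a little fee income, while a miss is impossible, so a block built this way never conflicts with block n.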

I've run that algorithm over the last couple of weeks' worth of
transactions (to see how many of the next block's transactions would have
been thrown away using that approach) and it appeared to work fine --
it throws away maybe a dozen transactions per block compared to accurate
validation, but that's only about a couple of kB out of a 1MB block,
so something like 0.2%.  (I'm seeing ~4500 prevouts per block roughly,
so that's the error rate you'd expect; doubling for 2MB's worth of txes
with segwit predicts 3.5%, doubling again would presumably result in 14%
of transactions being falsely identified as double spends prior to the
block actually validating)
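The quoted error rate matches the standard Bloom-filter false-positive estimate for these parameters, assuming a near-optimal number of hash functions:

```python
import math

m = 7168 * 8                       # filter size in bits
n = 4500                           # observed prevouts per 1MB block
k = round(m / n * math.log(2))     # near-optimal hash count, ~9

fpr = (1 - math.exp(-k * n / m)) ** k
print(f"k={k}, false-positive rate ~{fpr:.2%}")  # roughly 0.2%
print(round(n * fpr))                            # ~a dozen txs falsely discarded
```

Doubling n with the filter size fixed roughly squares the per-hash load, which is why the predicted loss climbs so quickly at 2MB and 4MB.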

I haven't checked the math in detail, but I think that could reasonably
give an immediate 20% increase in effective blocksize, given the number of
empty blocks that get mined... (There were only ~1571MB of transactions
in the last 2016 blocks, so bumping the average from 780kB per block to
940kB would be a 20% increase; which would bring the 1.7x segwit increase
up to 2x too...)
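Reproducing that back-of-envelope estimate from the quoted figures:

```python
# 1571MB of transactions over the last 2016 blocks (figure from the text).
total_mb = 1571
avg_kb = total_mb * 1000 / 2016
print(round(avg_kb))           # ~780 kB average block

increase = 940 / 780           # bumping the average to 940 kB
print(f"{increase - 1:.0%}")   # ~20% effective blocksize increase
```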

Also, as far as I can see, you could probably even just have bitcoin core
transmit that 8kB of data around as part of propagating headers first.
Once you've got the new header and bloom filter, the only extra bit
should be passing both those into getblocktemplate to update the
previousblockhash and transaction selection. Both those together and it
seems like you could be mining on top of the latest block seconds after
it was found, just by naively running a bitcoin node?

I saw the "Bitcoin Classic" roadmap includes:

  "Implement "headers-first" mining. As soon as a valid 80-byte block
   header that extends the most-work chain is received, relay the header
   (via a new p2p network message) and allow mining an empty block on top
   of it, for up to 20 seconds."

which seems like the same idea done worse...

Any thoughts? Pointers to the bitcointalk thread where this was proposed
two years ago? :)

Cheers,
aj


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Mon, Feb 29, 2016 at 3:20 AM
To: Anthony Towns <aj@erisian.com.au>


On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns <aj@erisian.com.au> wrote:
> Would it make sense to require some demonstration that you've validated
> prior blocks? eg, you could demonstrate you've done part of the work

That information is easily shared/delegated... so it just creates
another centralized information source, and another source of
unfairness producing latency in the mining process. Without actually
preventing parties from mining. Doubly so in the context of how
validationless mining is actually done; the miners pull from other
miner's stratum servers; so they'll just see the commitments there.

So I don't see there being too much value there.

> if you set the "validated" bit without including the demonstration of
> validation, your block is invalid.

Pretty good incentive to not adopt the scheme, perhaps?

Moreover, this creates another way for a block to be invalid which has
no compact fraud proof. :(

> It occurred to me while emailing with Matt Corallo, that it's probably
> possible to make it easy to generate actually useful blocks while doing
> validationless mining, rather than only creating empty blocks.

I agree but:

I'm basically tired of repeating to people that there is no need for a
validationless block to be empty. So Yes, I agree with you on that
fact; it's possible for miners to do this already, with no protocol
changes (yes, it requires trusting each other but inherently
validationless mining already requires that). Miners only don't bother
right now because the funds left behind are insubstantial.

It's absolutely untrue that an empty block is not useful. Every block,
empty or not, mined against the best tip you know contributes to the
resolution of consensus and collapsing the network onto a single
state. Every block that was mined only after validating a block
amplifies security; by helping leave behind an invalid chain faster. A
block doesn't need to contain transactions to do these things.

>        - 7168 byte bloom filter

FWIW, that's significantly larger than the amount of data typically
needed to send the whole block using the fast block relay protocol.

Your estimates are assuming the empty blocks come purely from
transmission and verification, but because most verification is cached
and transmission compressed-- they don't. There are numerous latency
sources through the whole stack, some constant some
size-proportional... the mining without validation achieves its gains
not from skipping validation (at least not most of the time); but
mostly from short cutting a deep stack with many latency sources;
including ones that have nothing to do with bitcoin core or the
Bitcoin protocol.

High hardware latency also amplifies short periods of empty block
mining to longer periods.

Perhaps most importantly, VFM mining avoids needing to identify and
characterize these other delay sources, by short cutting right at the
end no one needs to even figure out that their pool server is
performing a DNS request before every time it contacts their bitcoind
RPC or whatnot.

>   "Implement "headers-first" mining. As soon as a valid 80-byte block

This BIP draft resulted in me relieving some pretty vicious attacks
from that community... funny.

> Any thoughts? Pointers to the bitcointalk thread where this was proposed
> two years ago? :)

Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586

Lots of discussion on IRC.

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Wed, Mar 2, 2016 at 9:55 PM
To: Gregory Maxwell <gmaxwell@gmail.com>


On Mon, Feb 29, 2016 at 03:20:01AM +0000, Gregory Maxwell wrote:
> On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns <aj@erisian.com.au> wrote:
> > Would it make sense to require some demonstration that you've validated
> > prior blocks? eg, you could demonstrate you've done part of the work
> That information is easily shared/delegated...

Yeah, I thought about that. It's a tradeoff -- you definitely want the
validation to be easily "shared" in the sense that you want one validation
run to suffice for billions of mining attempts; and you probably want
it to be easy to compute when you receive a block, so you don't have
to revalidate the previous one to validate the new one... But you don't
want it to be so easily shared that one person on the planet calculates
it and everyone else just leeches from them.

> so it just creates
> another centralized information source, and another source of
> unfairness producing latency in the mining process. Without actually
> preventing parties from mining. Doubly so in the context of how
> validationless mining is actually done; the miners pull from other
> miner's stratum servers; so they'll just see the commitments there.

I think you could make it hostile to accidental sharing by having it be:

  <n> ;
  sha256(
      sha256( current block's first <n>+1 coinbase outputs ;
               previous block's nonce )
      sha256( previous block's sighash values )
  )

If you skipped the internal sha256's (or just moved the nonce into the
final sha256), you'd be half-way forced to revalidate the previous block
every time you found a new block, which might be worthwhile.
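That commitment could be computed roughly as follows. The proposal text leaves the encoding open, so the serialization, concatenation order, and the single-byte encoding of `<n>` here are entirely hypothetical:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def validation_commitment(coinbase_outputs, prev_nonce: bytes,
                          prev_sighashes) -> bytes:
    """Sketch of the <n>-prefixed commitment above: an inner hash over the
    first n+1 coinbase outputs plus the previous block's nonce, combined
    with a hash of the previous block's sighash values.  All encodings
    here are illustrative assumptions."""
    n = len(coinbase_outputs) - 1
    inner_a = sha256(b"".join(coinbase_outputs) + prev_nonce)
    inner_b = sha256(b"".join(prev_sighashes))
    return bytes([n]) + sha256(inner_a + inner_b)
```

Because the current block's coinbase outputs feed the outer hash, a leeched commitment only helps someone paying to the same outputs, which is the "hostile to accidental sharing" property.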

> > if you set the "validated" bit without including the demonstration of
> > validation, your block is invalid.
> Pretty good incentive to not adopt the scheme, perhaps?

Well, my theory was once you have validated the block, then the
demonstration is trivially easy to provide.

I was thinking that you could add a positive incentive by making validated
blocks count for something like 1.6x the chainwork for choosing which
chain to build on; so if you have a chain with 3 unvalidated blocks in
a row, then a chain with 2 validated blocks in a row instead would be
preferred for building your next block.
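The preference rule can be sketched per-block (real chainwork would of course weight by difficulty as well; the 1.6x bonus is the figure from the text):

```python
# Validated blocks count for 1.6x an unvalidated block when choosing
# which tip to extend.
VALIDATED_BONUS = 1.6

def tip_score(validated_flags):
    # validated_flags: one bool per block in the candidate chain segment.
    return sum(VALIDATED_BONUS if v else 1.0 for v in validated_flags)

# The example from the text: two validated blocks in a row (3.2) beat
# three unvalidated blocks in a row (3.0).
assert tip_score([True, True]) > tip_score([False, False, False])
```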

> Moreover, this creates another way for a block to be invalid which has
> no compact fraud proof. :(

Hmmm. That's true. Is it true by definition though? If you're proving
you've validated 100% of a block, then is it even conceptually possible
to check that proof with less work than validating 100% of a block?
Sounds kind of SNARK-ish.

Oh, don't SNARKs (theoretically) give you a compact fraud proof, provided
the block size and sigops are bounded? The "secret" input is the block
data, public input is the block hash and the supposed validation proof
hash, program returns true if the block hash matches the block data,
and the calculated validation hash doesn't match the supposed validation
hash. Shudder to think how long generating the proof would take though,
or how hard it'd be to generate the circuit in the first place...

> > It occurred to me while emailing with Matt Corallo, that it's probably
> > possible to make it easy to generate actually useful blocks while doing
> > validationless mining, rather than only creating empty blocks.
> I agree but:
> I'm basically tired of repeating to people that there is no need for a
> validationless block to be empty. So Yes, I agree with you on that
> fact; it's possible for miners to do this already, with no protocol
> changes (yes, it requires trusting each other but inherently
> validationless mining already requires that).

If you're only mining an empty block, the only way someone else can
cause you to waste your time is by wasting their own time doing PoW on
an invalid block. If you're mining a block with transactions in it, and
they can mine a valid block, but trick you into mining something that
double spends, then they can make you waste your time without wasting
their own, which seems like a much worse attack to me.

The advantage of the consensus enforced bloom filter is you don't have
to trust anything more than that economic incentive. However if you just
sent an unverifiable bloom filter, it'd be trivial to trick you into
mining an invalid block.

(If you already have the 1MB of block data, then extracting the prevouts
for use as a blacklist would probably be plenty fast though)

(Of course, maybe 90% of current hashpower does trust each other
anyway, in which case requiring trust isn't a burden, but that's not
very decentralised...)

(Paragraphs deleted. My maths is probably wrong, but I think it is
actually economically rational to mine invalid blocks as chaff to distract
validationless miners? The numbers I get are something like "if 40% of
the network is doing validationless mining for 20 seconds out of every
10 minutes, then it's profitable to devote about 2% of your hashpower to
mining invalid blocks". Probably some pretty dodgy assumptions though,
so I'm not including any algebra. But having actual invalid blocks with
real proof of work appear in the wild seems like it'd be a good way to
encourage miners to do validation...)

> Miners only don't bother
> right now because the funds left behind are insubstantial.

Hey, fees are almost 1% of the block payout these days -- that's within
an order of magnitude of a rounding error!

> Its absolutely untrue that an empty block is not useful.

Yeah, I deleted "useless" for that reason then put it back in anyway...

> >        - 7168 byte bloom filter
> FWIW, thats significantly larger than the amount of data typically
> needed to send the whole block using the fast block relay protocol.

Really? Hmm, if you have 2-byte indexes into the most likely to be mined
60k transactions, then 2000 transactions per block is about 4000 bytes. So
I guess that makes sense. And weak blocks would make that generalisable
and only add maybe a 32B index to include on the wire, presumably.

It'd only take a dozen missed transactions to be longer though.

> Your estimates are assuming the empty blocks come purely from
> transmission and verification, but because most verification is cached
> and transmission compressed-- they don't. There are numerous latency
> sources through the whole stack, some constant some
> size-proportional... the mining without validation achieves its gains
> not from skipping validation (at least not most of the time); but
> mostly from short cutting a deep stack with many latency sources;
> including ones that have nothing to do with bitcoin core or the
> Bitcoin protocol.

Hmm, so my assumption is the "bitcoin core" side of the stack looks
something like:

   block header received by p2p or relay network
     |
     V
   block data received by p2p or relay network
     |
     V
   validation, UTXO set updates
     |
     V
   getblocktemplate (possible tx ordering recalculation)
     |
     V
   block header to do PoW on!
     |
     V
   vary and push to miners over the network
     |
     V
   push to ASICs

and the validationless "shortcut" just looks like:

   block header received by p2p or relay network
     |
     V
   hack hack
     |
     V
   new block header to do PoW on!
     |
     V
   vary and push to miners over the network
     |
     V
   push to ASICs

and so making the bitcoin core parts able to provide an unvalidated
header to push to miners/ASICs against "instantly" would be a win as
far as getting bitcoin proper back into the loop all the time... That
would mean removing validation from the critical path, and possibly more
optimisation of getblocktemplate to make it effectively instant too. But
those seem possible?

Having it be:

  header received by bitcoin core
    |
    V
  new block header to do (unverified) PoW on!
    |
    V
  ...

and

  header received by bitcoin core
    |
    V
  block data received by bitcoin core
    |
    V
  block data validated
    |
    V
  new block header to do (verified) PoW on!
    |
    V
  ...

with mining tools being able to just reliably and efficiently leave
bitcoin core in the loop seems like it ought to be a win to me...

> Perhaps most importantly, VFM mining avoids needing to identify and
> characterize these other delay sources, by short cutting right at the
> end no one needs to even figure out that their pool server is
> performing a DNS request before every time it contacts their bitcoind
> RPC or whatnot.

At least with longpoll, doing a DNS query before connection shouldn't
matter?

> >   "Implement "headers-first" mining. As soon as a valid 80-byte block
> This BIP draft resulted in me relieving some pretty vicious attacks
> from that community... funny.

I'm guessing you meant "receiving", which makes that a kinda weird
Freudian slip? :) But yeah, technical consistency isn't something I've
seen much of from that area...

> > Any thoughts? Pointers to the bitcointalk thread where this was proposed
> > two years ago? :)
> Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586

Tsk, 2 != 4...

Hmm, I'm not sure where this leaves my opinion on either of those ideas.

Cheers,
aj


----------
From: Anthony Towns <aj@erisian.com.au>
Date: Sun, Mar 13, 2016 at 3:58 AM
To: Gregory Maxwell <gmaxwell@gmail.com>


On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
> > >        - 7168 byte bloom filter
> > FWIW, thats significantly larger than the amount of data typically
> > needed to send the whole block using the fast block relay protocol.
> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
> I guess that makes sense. And weak blocks would make that generalisable
> and only add maybe a 32B index to include on the wire, presumably.
> It'd only take a dozen missed transactions to be longer though.

So I think there's two levels of withholding adversarial miners could
do:

 - block withholding, so they have more time to build on top of their
   own block, maybe increasing their effective hashrate if they have
   above average connectivity

 - transaction withholding, so an entire block can be invalidated
   after the fact, hitting SPV nodes. if there are SPV miners, this can
   invalidate their work (potentially profitably, if you've accidentally
   orphaned yourself)

You could solve transaction withholding for miners just by saying
"a PoW isn't valid unless the merkle tree is valid", that way you
can't retroactively invalidate a block, but then you need fast relay
before starting to mine, not just the header and some hint as to what
transactions might be included, and therefore the bloom filter idea
is pointless...


Having actually tried the relay network now, it seems like:

 a) it gets less coding gain than it theoretically could; the day or
    so's worth of blocks from Lightsword only seemed to be ~8x less data,
    rather than ~125x-250x, and what I'm seeing seems similar. So still
    room for improvement?

 b) using "weak blocks" as a way of paying for adding "non-standard"
    transactions (large, low fee, actually non-standard, etc) to the
    mempool seems workable to me; so long as the only reason you're doing
    weak blocks is so miners can ensure the transactions they're mining
    are in mempools, and thus that their blocks will relay quickly, the
    incentives seem properly aligned. (I think you'd want to distinguish
    txns only relayed because they have a weak block, just to be nice to
    SPV clients -- weak block txns might only be mined by one miner, while
    standard, fee paying transactions are being mined by all/most miners)

 c) it seems like it would be possible to adapt the relay protocol into
    a p2p environment to me? I'm thinking that you provide a bidirectional
    mapping for (a subset of) your mempool for each connection you
    have, so that you can quickly go to/from a 2-byte index to a
    transaction. If you make it so that whoever was listening gets to
    decide what transactions are okay, then you'd just need 9 of these
    maps -- 1 for each of your outgoing connections (ie, 8 total), plus
    another 1 that covers all your incoming connections, and each map
    should only really need to use up to about a meg of memory, which
    seems pretty feasible.  Maybe it means up to 8x5MB of your mempool
    is controlled by other people's policies rather than your own,
    but that doesn't seem too bad either.

 d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
    really strong ordering requirements to work correctly, but if you
    had that you could compress the fast relay protocol really well,
    since you could apply the same ordering to your shared mempool, and
    then just send "next tx, next tx, skip 1 tx, next tx, next tx, skip
    3 tx, next tx, here's one you missed, ...", which with compression
    would probably get you to just a few /bits/ per (previously seen)
    transaction...  [0] [1]

 e) for p2p relay, maybe it would make sense to have the protocol only
    allow sending blocks where all the transactions are "previously
    seen". that way if you get a block where some txes haven't been
    seen before, you stall that block, and start sending transactions
    through. if another block comes in in the meantime, that doesn't
    have any new transactions, you send that block through straight away.
    that encourages sending weak blocks through first, to ensure your
    transactions are already in mempools and no one else can sneak
    in first.
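
The skip-count encoding in (d) can be sketched concretely (hypothetical code; it assumes both peers hold an identically ordered shared mempool, and that every transaction in the block was "previously seen" as in (e), with the block listing transactions in shared-mempool order):

```python
def encode_block(shared_order, block_txids):
    """Encode which shared-mempool transactions a block contains as a list
    of skip counts: "skip s transactions, take the next one", repeated.
    Runs of small integers like these compress to a few bits each."""
    included = set(block_txids)
    skips, run = [], 0
    for txid in shared_order:
        if txid in included:
            skips.append(run)  # skipped `run` txs since the last included one
            run = 0
        else:
            run += 1
    return skips

def decode_block(shared_order, skips):
    """Walk the shared ordering, applying each skip count in turn."""
    out, pos = [], 0
    for s in skips:
        pos += s
        out.append(shared_order[pos])
        pos += 1
    return out
```

For a block containing most of the top of the shared mempool, nearly every skip is 0, which is where the "few bits per transaction" estimate comes from.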

Hmm... So that all seems kind of plausible to me; in how many ways am I
mistaken? :)

Cheers,
aj

[0] A hard-fork change to have the block merkle tree be ordered by txid,
    and have the transactions topologically sorted before being validated
    would be kind-of interesting here -- apart from making sorting
    obvious, it'd make it easy to prove that a block doesn't contain a
    transaction. Bit altcoin-y though...

[1] Maybe having the shared mempool indexes be sorted rather than FIFO
    would make the data structures hard; I don't think so, but not sure.


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Sun, Mar 13, 2016 at 5:06 AM
To: Anthony Towns <aj@erisian.com.au>


On Sun, Mar 13, 2016 at 3:58 AM, Anthony Towns <aj@erisian.com.au> wrote:
> On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
>> > >        - 7168 byte bloom filter
>> > FWIW, thats significantly larger than the amount of data typically
>> > needed to send the whole block using the fast block relay protocol.
>> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
>> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
>> I guess that makes sense. And weak blocks would make that generalisable
>> and only add maybe a 32B index to include on the wire, presumably.
>> It'd only take a dozen missed transactions to be longer though.
>
> So I think there's two levels of withholding adversarial miners could
> do:
>
>  - block withholding, so they have more time to build on top of their
>    own block, maybe increasing their effective hashrate if they have
>    above average connectivity

Also called "selfish mining".

>  - transaction withholding, so an entire block can be invalidated
>    after the fact, hitting SPV nodes. if there are SPV miners, this can
>    invalidate their work (potentially profitably, if you've accidentally
>    orphaned yourself)
> You could solve transaction withholding for miners just by saying
> "a PoW isn't valid unless the merkle tree is valid", that way you
> can't retroactively invalidate a block, but then you need fast relay
> before starting to mine, not just the header and some hint as to what
> transactions might be included, and therefore the bloom filter idea
> is pointless...

Right, this is how Bitcoin Core works (won't extend a chain it hasn't
validated)-- but some miners have shortcutted it to reduce latency.
(And not just bypassing validation, but the whole process, e.g.
transaction selection; which historically has taken more time than
propagation).

> Having actually tried the relay network now, it seems like:
>
>  a) it gets less coding gain than it theoretically could; the day or
>     so's worth of blocks from Lightsword only seemed to be ~8x less data,
>     rather than ~125x-250x, and what I'm seeing seems similar. So still
>     room for improvement?

It's pretty variable.  It depends a lot on consistency between the
transactions the server side selects and the client. When spam attacks
go on, or miners change their policy compression falls off until the
far end twiddles.

Go look at the distribution of the results.

>  c) it seems like it would be possible to adapt the relay protocol into
>     a p2p environment to me? I'm thinking that you provide a bidirectional
>     mapping for (a subset of) your mempool for each connection you
>     have, so that you can quickly go to/from a 2-byte index to a
>     transaction. If you make it so that whoever was listening gets to
>     decide what transactions are okay, then you'd just need 9 of these
>     maps -- 1 for each of your outgoing connections (ie, 8 total), plus
>     another 1 that covers all your incoming connections, and each map
>     should only really need to use up to about a meg of memory, which
>     seems pretty feasible.  Maybe it means up to 8x5MB of your mempool
>     is controlled by other people's policies rather than your own,
>     but that doesn't seem too bad either.

That is a bit kludgy, but yes-- it would work.

But the key thing about latency minimization is that you _must_ send a
block with no request; because otherwise the RTT for just the request
alone will totally dominate the transfer in most cases.  And having N
peers send you the whole block redundantly ends up hurting your
performance (esp because packet losses mean more round trips) even if
the compression is very high.

All these problems can be avoided; at least in theory. Optimal latency
mitigation would be achieved by something like block network coding
techniques:

https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

With these techniques peers could blindly send you data without you
requesting it, while every byte they send would usefully contribute to
your reconstruction. With extra effort and opportunistic forwarding
the entire network could, in theory, receive a block in the time it
took the original host to send only one block, while making use of a
significant fraction of the network's whole bisection bandwidth.
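
A toy illustration of that property -- any received packet usefully contributes, regardless of which peer sent it -- is random linear combinations over GF(2), i.e. XORs of block chunks (the real block network coding proposals use stronger and faster codes; this is only a sketch of the principle):

```python
def combine(chunks, mask):
    """Make one coded packet: the XOR (GF(2) linear combination) of the
    source chunks selected by the coefficient bitmask."""
    acc = bytearray(len(chunks[0]))
    for i, c in enumerate(chunks):
        if mask >> i & 1:
            for j, b in enumerate(c):
                acc[j] ^= b
    return (mask, bytes(acc))

def decode(packets, k, chunk_len):
    """Gaussian elimination over GF(2): any k linearly independent packets
    reconstruct the block, no matter which peers they came from."""
    basis = {}  # pivot bit -> (mask, data)
    for mask, data in packets:
        m, d = mask, bytearray(data)
        for bit in range(k - 1, -1, -1):  # reduce by existing pivots
            if m >> bit & 1 and bit in basis:
                bm, bd = basis[bit]
                m ^= bm
                for j in range(chunk_len):
                    d[j] ^= bd[j]
        if m:  # independent: highest remaining bit becomes its pivot
            basis[m.bit_length() - 1] = (m, d)
    if len(basis) < k:
        return None  # not enough independent data yet; keep listening
    for p in sorted(basis):  # back-substitute, lowest pivots first
        m, d = basis[p]
        for lower in range(p):
            if m >> lower & 1:
                lm, ld = basis[lower]
                m ^= lm
                for j in range(chunk_len):
                    d[j] ^= ld[j]
        basis[p] = (m, d)  # now a unit vector: d is source chunk p
    return [bytes(basis[p][1]) for p in range(k)]
```

Peers can blindly stream packets with fresh random masks; duplicates from different peers are (with high probability) still independent, so no byte is wasted waiting on a request/response round trip.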

>  d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
>     really strong ordering requirements to work correctly, but if you
>     had that you could compress the fast relay protocol really well,
>     since you could apply the same ordering to your shared mempool, and
>     then just send "next tx, next tx, skip 1 tx, next tx, next tx, skip
>     3 tx, next tx, here's one you missed, ...", which with compression
>     would probably get you to just a few /bits/ per (previously seen)
>     transaction...  [0] [1]

Latency of block relay easily ends up CPU bound; even when not doing
anything too smart (this is why Matt's relay protocol stuff has AVX
sha2 code in it). Prior IBLT implementation attempts have performance
so low that their decode time ends up dwarfing transmission time, and
plain uncoded blocks are faster for common host/bandwidth
configurations.

The ordering requirements stuff is not that relevant in my view; you
likely believe this because Gavin rat-holed himself on it trying to
spec out ordering requirements for miners...  The reality of it is
that a uniform permutation of, say, 4000 transactions can be stored in
log2(4000!)/8 bytes, or about 5.2kbytes (and this is easily achieved
just by using range coding to optimally pack integers in the range
[0..n_unpicked_txn) to pick transactions out of a lexicographically
sorted list) ... and this is without any prediction at all-- randomly
ordered txn in the block would work just as well.

[E.g. using the uint coder from the daala video codec project can code
these values with about 1% overhead, and runs at about 12MB/sec doing
so on my slow laptop]
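
The arithmetic checks out, and the log2(n!) bound is easy to see with a naive factorial-number-system pack of the pick sequence (a range coder achieves nearly the same size incrementally; these helper names are made up for illustration):

```python
import math

def perm_encode(picks):
    """Pack a block's transaction order into one integer. picks[i] is the
    index of the i-th block transaction within the shrinking,
    lexicographically sorted list of not-yet-picked txids, so
    picks[i] is in [0 .. n - i)."""
    code, remaining = 0, len(picks)
    for i in picks:
        code = code * remaining + i  # mixed-radix digit, radix = remaining
        remaining -= 1
    return code

def perm_decode(code, n):
    """Invert perm_encode: peel digits off, last pick first."""
    digits = []
    for radix in range(1, n + 1):
        digits.append(code % radix)
        code //= radix
    return list(reversed(digits))

# log2(4000!)/8 via lgamma: about 5262 bytes, i.e. the ~5.2 kB above
n = 4000
print(math.lgamma(n + 1) / math.log(2) / 8)
```

The packed integer needs ceil(log2(n!)) bits regardless of how the permutation is split into digits, which is why random ordering costs no more than any other.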

Recently some folks have been working privately on a block network
coding implementation... earlier attempts (even before IBLT became
trendy) were thwarted by the same thing that thwarts IBLT: the
decoding was so slow it dominated the latency. We've found some faster
coding schemes though... so it looks like it might be practical now. I
could send you more info if you read the block network coding page and
are interested in helping.

IBLT and BNC would both be more useful in the weakblocks model
because there the decode speed isn't latency critical-- so if it needs
100ms of cpu time to decode an efficiently encoded block, that is no
big deal.

>  e) for p2p relay, maybe it would make sense to have the protocol only
>     allow sending blocks where all the transactions are "previously
>     seen". that way if you get a block where some txes haven't been
>     seen before, you stall that block, and start sending transactions
>     through. if another block comes in in the meantime, that doesn't
>     have any new transactions, you send that block through straight away.
>     that encourages sending weak blocks through first, to ensure your
>     transactions are already in mempools and no one else can sneak
>     in first.

Yes, it's perfectly reasonable to do that for bandwidth minimization--
though it doesn't minimize latency.  "Seen" is complex, you have no
guarantee a peer will accept any transaction you've sent it, or even
that it will retain any it sent you. So multiple round trips are
required to resolve missing transactions.

We haven't bothered implementing this historically because the
bandwidth reduction is small overall, and it's not the right strategy
for reducing latency-- the vast majority of bandwidth is eaten by
relay. Right now maybe 15% is used by blocks... so at most you'd get a
15% improvement here.

I did some fairly informal measurements and posted about it:
https://bitcointalk.org/index.php?topic=1377345.0

I also point out there that the existing blocksonly mode achieves
bandwidth optimal transport already (ignoring things like transaction
format compression)... just so long as you don't care about learning
about unconfirmed transactions. :)

> [0] A hard-fork change to have the block merkle tree be ordered by txid,
>     and have the transactions topologically sorted before being validated
>     would be kind-of interesting here -- apart from making sorting
>     obvious, it'd make it easy to prove that a block doesn't contain a
>     transaction. Bit altcoin-y though...

If you sort by data (or ID) without requiring the verifier to
topologically sort then an efficient permutation coder would only
spend bits on places where dependencies push things out of the
expected order... which is fairly rare.

Seems like a reasonable cost for avoiding the hardfork, no? The
receiver topo sort requirement would also require more memory in a
block verifier; and would be more complex to fraud proof, I think.

Engineering wise it's not quite so simple. It's helpful for miners to
have blocks sorted by feerate so that later stages of the mining
process can drop the least profitable transactions simply by
truncating the block.
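
That drop-by-truncation step might look like the following (an illustrative sketch only; the field names are made up, not Bitcoin Core's):

```python
def truncate_block(txs, max_weight):
    """Sort candidate transactions by feerate, descending, then fill the
    block greedily; trimming to a smaller weight budget later is then
    just truncating the tail of the list."""
    txs = sorted(txs, key=lambda t: t["fee"] / t["weight"], reverse=True)
    out, total = [], 0
    for t in txs:
        if total + t["weight"] > max_weight:
            break  # everything past here is the least profitable tail
        out.append(t)
        total += t["weight"]
    return out
```

(Real transaction selection also has to respect dependencies between transactions, which is exactly why a feerate-sorted block conflicts with a txid-sorted merkle tree.)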

> [1] Maybe having the shared mempool indexes be sorted rather than FIFO
>     would make the data structures hard; I don't think so, but not sure.

I tried to get Matt to do that for his stuff previously; pointing out
the sorted indexes would be easier to efficiently code. His
counterargument was that for 2000 txn, the two-byte indexes take 4kB,
which is pretty insignificant... and that his time would be better
spent trying to get the hit-rate up. I found that hard to argue with.
:)

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Mon, Mar 14, 2016 at 3:08 AM
To: Gregory Maxwell <gmaxwell@gmail.com>


On Sun, Mar 13, 2016 at 05:06:25AM +0000, Gregory Maxwell wrote:
> >  - block withholding, so they have more time to build on top of their
> >    own block, maybe increasing their effective hashrate if they have
> >    above average connectivity
> Also called "selfish mining".

Yup.

> >  c) it seems like it would be possible to adapt the relay protocol into
> >     a p2p environment to me? [...]
> That is a bit kludgy, but yes-- it would work.
> But the key thing about latency minimization is that you _must_ send a
> block with no request; because otherwise the RTT for just the request
> alone will totally dominate the transfer in most cases.  And having N
> peers send you the whole block redundantly ends up hurting your
> performance (esp because packet losses mean more round trips) even if
> the compression is very high.

If the block can be encoded fully, then it's up to maybe 10kB per block
max (at 1MB blocksize); I don't think multiple transmissions matter much
in that case? Hmm, maybe they do...

> All these problems can be avoided; at least in theory. Optimal latency
> mitigation would be achieved by something like block network coding
> techniques:
> https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

Ugh, patents. Interesting that the patents on turbo codes have expired,
last time I looked they hadn't.

> With these techniques peers could blindly send you data without you
> requesting it, while every byte they send would usefully contribute to
> your reconstruction.

Yeah, that makes sense I think. Pretty complicated though. The "someone
sent corrupt data" seems a little bit problematic to deal with too,
especially in the "optimistically forward stuff before you can validate
it" phase. At least if you're using error correcting codes anyway,
that's probably a self-solving problem.

What's with the switch from 32 bit faux ids in the original section
to 63 bits in the reimagination? I guess you use most of that for the
additional encoded length though...

Keying with the previous block's hash seems kind-of painful, doesn't it?
Once you receive the ids, you want to lookup the actual transactions
from your mempool, but since you can't decrypt anything useful with
only the first 50/60 bits of cyphertext, the only way to do that is
to have already cycled through all the transactions in your mempool
and pre-calculated what their network coded id for that block is, and
you have to do that everytime you receive a block (including orphans,
I guess). It'd make reorgs more expensive too, because you'd have to
reindex all the mempool then as well?

Maybe if you're only doing that predictively it's not so bad? The 5MB-20MB
of txes with highest fees get coded up, and you just download any other
transactions in full? If you're downloading large coinbase txes regularly
anyway, that's probably no big deal.

> Latency of block relay easily ends up CPU bound; even when not doing
> anything too smart (this is why Matt's relay protocol stuff has AVX
> sha2 code in it).

Yeah, that seemed a little odd to me; there shouldn't be that much
hashing to validate a block (1MB of transactions, then maybe 128kB to
get to sha256d, then another 2*128kB for the rest of the merkle tree?).
Matt's code seems like it's doing a linear search through the tx index
to find each tx though, which probably doesn't help.

> Prior IBLT implementation attempts have performance
> so low that their decode time ends up dwarfing transmission time, and
> plain uncoded blocks are faster for common host/bandwidth
> configurations.

Heh.

> The ordering requirements stuff is not that relevant in my view; you
> likely believe this because Gavin rat-holed himself on it trying to
> spec out ordering requirements for miners...  The reality of it is
> that a uniform permutation of, say, 4000 transactions can be stored in
> log2(4000!)/8 bytes, or about 5.2kbytes

Right, but 5.2 kB is a lot of overhead; at least compared to the cases
where Matt's stuff already works well :)

> Recently some folks have been working privately on a block network
> coding implementation... earlier attempts (even before IBLT became
> trendy) were thwarted by the same thing that thwarts IBLT: the
> decoding was so slow it dominated the latency. We've found some faster
> coding schemes though...  so it looks like it might be practical now. I
> could send you more info if you read the block network coding page and
> are interested in helping.

Sure. (Though, fair warning, I've already failed a few times at doing
anything useful with erasure coding...)

> >  e) for p2p relay, maybe it would make sense to have the protocol only
> >     allow sending blocks where all the transactions are "previously
> >     seen". that way if you get a block where some txes haven't been
> >     seen before, you stall that block, and start sending transactions
> >     through. if another block comes in in the meantime, that doesn't
> >     have any new transactions, you send that block through straight away.
> >     that encourages sending weak blocks through first, to ensure your
> >     transactions are already in mempools and no one else can sneak
> >     in first.
> Yes, it's perfectly reasonable to do that for bandwidth minimization--
> though it doesn't minimize latency.  "Seen" is complex, you have no
> guarantee a peer will accept any transaction you've sent it, or even
> that it will retain any it sent you. So multiple round trips are
> required to resolve missing transactions.

The "p2p relay" in my head has "seen" meaning "the 5MB of transactions
the listening peer thinks is most likely to be mined", odds on both
peers have actually seen something like 145MB of additional transactions
too. You don't do round trips; you just start sending the "unseen"
transactions automatically (by id or in full?), then you send the
compressed block. The only round trip is if you sent the id, but they
actually needed the full tx.

In my head, you get good latency if you do weak blocks beforehand,
and somewhat poorer latency if you don't. Even in my head, I'm not sure
that's actually feasible, though: I'm not sure weak blocks for coinbase
transactions really work, and comparatively high latency on 5% of blocks
that didn't get any weak blocks beforehand isn't very attractive...

> We haven't bothered implementing this historically because the
> bandwidth reduction is small overall, and it's not the right strategy
> for reducing latency-- the vast majority of bandwidth is eaten by
> relay. Right now maybe 15% is used by blocks... so at most you'd get a
> 15% improvement here.

Yeah, I'm assuming a non-trivial increase in bandwidth usage compared
to current relay. Compared to relaying spam transactions (that don't
get mined prior to expiry), not sure it's significant though.

> > [0] A hard-fork change to have the block merkle tree be ordered by txid,
> >     and have the transactions topologically sorted before being validated
> >     would be kind-of interesting here -- apart from making sorting
> >     obvious, it'd make it easy to prove that a block doesn't contain a
> >     transaction. Bit altcoin-y though...
> If you sort by data (or ID) without requiring the verifier to
> topologically sort then an efficient permutation coder would only
> spend bits on places where dependencies push things out of the
> expected order... which is fairly rare.

Really? I was seeing a lot of transaction chains in the couple of blocks I
looked at. Also, you wouldn't get short proofs that a transaction isn't
present in a block that way either afaics.

> Seems like a reasonable cost for avoiding the hardfork, no? The
> receiver topo sort requirement would also require more memory in a
> block verifier; and would be more complex to fraud proof, I think.

Hmm, I think it'd be easy to fraud proof -- just show adjacent merkle
paths where the results are in the wrong order. Maybe the same's true
with the id-order-but-toposorted too -- just show adjacent merkle paths
where the results are in the wrong order, and the later doesn't depend
on the former. I'm not sure that gives a unique sort though (but maybe
that doesn't actually matter).
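
For the txid-ordered tree, that adjacent-paths fraud proof could be checked roughly like this (a sketch under stated assumptions: leaves are the txids themselves, inner nodes are Bitcoin-style double-SHA256 of concatenated children; the function names are hypothetical):

```python
import hashlib

def sha256d(b):
    """Bitcoin-style double SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaf, index, path):
    """Recompute the merkle root from a leaf, its position, and the
    sibling hashes on its path to the root."""
    h = leaf
    for sibling in path:
        h = sha256d(h + sibling) if index % 2 == 0 else sha256d(sibling + h)
        index //= 2
    return h

def misorder_fraud_proof_ok(root, i, leaf_a, path_a, leaf_b, path_b):
    """Leaves i and i+1 are both committed to by `root`, yet out of txid
    order: compact evidence that the block violates the sorted-tree rule."""
    return (merkle_root(leaf_a, i, path_a) == root
            and merkle_root(leaf_b, i + 1, path_b) == root
            and leaf_a > leaf_b)
```

Two merkle paths is O(log n) data, so the proof stays small even for large blocks.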

> Engineering wise it's not quite so simple. It's helpful for miners to
> have blocks sorted by feerate so that later stages of the mining
> process can drop the least profitable transactions simply by
> truncating the block.

Yeah; not having ordering requirements seems far more practical.

> > [1] Maybe having the shared mempool indexes be sorted rather than FIFO
> >     would make the data structures hard; I don't think so, but not sure.
> I tried to get Matt to do that for his stuff previously; pointing out
> the sorted indexes would be easier to efficiently code. His
> counterargument was that for 2000 txn, the two bytes indexes take 4kb,
> which is pretty insignificant... and that his time would be better
> spent trying to get the hit-rate up. I found that hard to argue with.
> :)

Yeah. Having the bitcoin mempool and fee info (and heck, priority info)
more readily available when seeing new transactions and choosing what to
include seems like it'd be helpful here. Seems relatively painful to do
that outside of bitcoin though.

Cheers,
aj


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Mon, May 15, 2017 at 8:03 PM
To: Anthony Towns <aj@erisian.com.au>


I ran into someone proposing the same thing as you. Can I share this
discussion with them? (with the public?)

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Mon, May 15, 2017 at 11:00 PM
To: Gregory Maxwell <gmaxwell@gmail.com>


Yes, go ahead on both counts.
--
Sent from my phone.


<div dir=3D"ltr"><div><br></div><div>Today someone showed up on IRC suggest=
ing a scheme for to improve the ability of miners to mine without validatio=
n while including transactions by shipping around an approximate sketch of =
the txins that were used by a block.<br></div><div><br></div><div>I pointed=
 out that what sounded like the exact same scheme had been previously propo=
sed by Anthony Towns over a year ago,=C2=A0 that it turned out that it didn=
&#39;t need any consensus changes, but also wasn&#39;t very attractive beca=
use the actual transmission of the block (at least with FBRP or Fibre) didn=
&#39;t really take any longer...=C2=A0 And, of course, mining without valid=
ating does a real number on SPV security assumptions.<br></div><div><br></d=
iv><div>But then realized the the conversation between Anthony and I was of=
flist.=C2=A0 So-- for posterity...</div><div><br></div><div>I think the mos=
t interesting thing about this thread is that it gives a concrete proof tha=
t a restriction on collecting transaction fees does not discourage validati=
onless mining; nor to other proposed consensus changes make it any easier t=
o include transactions while mining without validation.</div><div><br></div=
><div><br></div><div class=3D"gmail_quote"><span style=3D"font-size:large;f=
ont-weight:bold">Forwarded conversation</span><br>Subject: <b class=3D"gmai=
l_sendername">Blockchain verification flag (BIP draft)</b><br>-------------=
-----------</div><div class=3D"gmail_quote"><br><span class=3D"gmail_quote"=
><font color=3D"#888">From: <b class=3D"gmail_sendername">Anthony Towns</b>=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:aj@erisian.com.au">aj@erisian.com.=
au</a>&gt;</span><br>Date: Mon, Feb 29, 2016 at 2:13 AM<br>To: Gregory Maxw=
ell &lt;<a href=3D"mailto:greg@xiph.org">greg@xiph.org</a>&gt;<br></font><b=
r></span><br><span class=3D"">On Fri, Dec 04, 2015 at 08:26:22AM +0000, Gre=
gory Maxwell via bitcoin-dev wrote:<br>
&gt; A significant fraction of hashrate currently mines blocks without<br>
&gt; verifying them for a span of time after a new block shows up on the<br=
>
&gt; network for economically rational reasons.<br>
<br>
</span>Two thoughts related to this. Are they obvious or daft?<br>
<br>
a)<br>
<br>
Would it make sense to require some demonstration that you&#39;ve validated=
<br>
prior blocks? eg, you could demonstrate you&#39;ve done part of the work<br=
>
to at least verify signatures from the previous block by including the<br>
sha256 of the concatenation of all the sighash values in the coinbase<br>
transaction -- if you&#39;d already done the sig checking, calculating that=
<br>
as you went would be pretty cheap, I think. Then make the rule be that<br>
if you set the &quot;validated&quot; bit without including the demonstratio=
n of<br>
validation, your block is invalid.<br>
<br>
I guess this is more or less what Peter Todd proposed in:<br>
<br>
<a href=3D"https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-Dec=
ember/012105.html" rel=3D"noreferrer" target=3D"_blank">https://lists.linux=
foundation.<wbr>org/pipermail/bitcoin-dev/<wbr>2015-December/012105.html</a=
><br>
<br>
b)<br>
<br>
It occurred to me while emailing with Matt Corallo, that it&#39;s probably<=
br>
possible to make it easy to generate actually useful blocks while doing<br>
validationless mining, rather than only creating empty blocks.<br>
<br>
When creating a block, you:<br>
<br>
=C2=A0 =C2=A0- calculate a fixed size (7168 bytes?) bloom filter of the<br>
=C2=A0 =C2=A0 =C2=A0prevouts that the block is spending<br>
=C2=A0 =C2=A0- include the sha256 of the final bloom filter as the last out=
put<br>
=C2=A0 =C2=A0 =C2=A0in the coinbase<br>
=C2=A0 =C2=A0- enforce the inclusion of that sha256 by soft-fork<br>
=C2=A0 =C2=A0- as part of fast relaying, transmit:<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0- 80 byte block header<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0- 7168 byte bloom filter<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0- &lt; 416 (?) byte merkle path to the coinbase<=
br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0- 64 byte sha256 midstate for coinbase up to sta=
rt of the<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0final transaction<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0- &lt; 128 byte tail of coinbase including bloom=
 commitment<br>
=C2=A0 =C2=A0 =C2=A0(total of 7856 bytes, so less than 8kB)<br>
<br>
I think that would be enough to verify that the proof-of-work is<br>
committing to the bloom filter, and the bloom filter will then let<br>
you throw out any transactions that could have been included in a block<br>
built on block n-1, but can&#39;t be included in block n+1 -- whether they&=
#39;re<br>
included in the new block, or would be double spends. So given that<br>
information you can safely build a new block that&#39;s actually full of<br=
>
transactions on top of the new block, even prior to downloading it in<br>
full, let alone validating it.<br>
<br>
I&#39;ve run that algorithm over the last couple of weeks&#39; worth of<br>
transactions (to see how many of the next block&#39;s transaction would hav=
e<br>
been thrown away using that approach) and it appeared to work fine --<br>
it throws away maybe a dozen transactions per block compared to accurate<br=
>
validation, but that&#39;s only about a couple of kB out of a 1MB block,<br=
>
so something like 0.2%.=C2=A0 (I&#39;m seeing ~4500 prevouts per block roug=
hly,<br>
so that&#39;s the error rate you&#39;d expect; doubling for 2MB&#39;s worth=
 of txes<br>
with segwit predicts 3.5%, doubling again would presumably result in 14%<br=
>
of transactions being falsely identified as double spends prior to the<br>
block actually validating)<br>
<br>
I haven&#39;t checked the math in detail, but I think that could reasonably=
<br>
give an immediate 20% increase in effective blocksize, given the number of<=
br>
empty blocks that get mined... (There were only ~1571MB of transactions<br>
in the last 2016 blocks, so bumping the average from 780kB per block to<br>
940kB would be a 20% increase; which would bring the 1.7x segwit increase<b=
r>
up to 2x too...)<br>
<br>
Also, as far as I can see, you could probably even just have bitcoin core<b=
r>
transmit that 8kB of data around as part of propogating headers first.<br>
Once you&#39;ve got the new header and bloom filter, the only extra bit<br>
should be passing both those into getblocktemplate to update the<br>
previousblockhash and transaction selection. Both those together and it<br>
seems like you could be mining on top of the latest block seconds after<br>
it was found, just by naively running a bitcoin node?<br>
<br>
I saw the &quot;Bitcoin Classic&quot; roadmap includes:<br>
<br>
&nbsp; &quot;Implement &quot;headers-first&quot; mining. As soon as a valid 80-byte block<br>
&nbsp; &nbsp;header that extends the most-work chain is received, relay the header<br>
&nbsp; &nbsp;(via a new p2p network message) and allow mining an empty block on top<br>
&nbsp; &nbsp;of it, for up to 20 seconds.&quot;<br>
<br>
which seems like the same idea done worse...<br>
<br>
Any thoughts? Pointers to the bitcointalk thread where this was proposed<br>
two years ago? :)<br>
<br>
Cheers,<br>
aj<br>
<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Gregory Maxwell</b> <span dir="ltr">&lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;</span><br>Date: Mon, Feb 29, 2016 at 3:20 AM<br>To: Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;<br></font><br></span><br><span class="">On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt; wrote:<br>
&gt; Would it make sense to require some demonstration that you&#39;ve validated<br>
&gt; prior blocks? eg, you could demonstrate you&#39;ve done part of the work<br>
<br>
</span>That information is easily shared/delegated... so it just creates<br>
another centralized information source, and another source of<br>
unfairness producing latency in the mining process. Without actually<br>
preventing parties from mining. Doubly so in the context of how<br>
validationless mining is actually done; the miners pull from other<br>
miner&#39;s stratum servers; so they&#39;ll just see the commitments there.<br>
<br>
So I don&#39;t see there being too much value there.<br>
<span class=""><br>
&gt; if you set the &quot;validated&quot; bit without including the demonstration of<br>
&gt; validation, your block is invalid.<br>
<br>
</span>Pretty good incentive to not adopt the scheme, perhaps?<br>
<br>
Moreover, this creates another way for a block to be invalid which has<br>
no compact fraud proof. :(<br>
<span class=""><br>
&gt; It occurred to me while emailing with Matt Corallo, that it&#39;s probably<br>
&gt; possible to make it easy to generate actually useful blocks while doing<br>
&gt; validationless mining, rather than only creating empty blocks.<br>
<br>
</span>I agree but:<br>
<br>
I&#39;m basically tired of repeating to people that there is no need for a<br>
validationless block to be empty. So Yes, I agree with you on that<br>
fact; it&#39;s possible for miners to do this already, with no protocol<br>
changes (yes, it requires trusting each other but inherently<br>
validationless mining already requires that). Miners only don&#39;t bother<br>
right now because the funds left behind are insubstantial.<br>
<br>
It&#39;s absolutely untrue that an empty block is not useful. Every block,<br>
empty or not, mined against the best tip you know contributes to the<br>
resolution of consensus and collapsing the network onto a single<br>
state. Every block that was mined only after validating a block<br>
amplifies security; by helping leave behind an invalid chain faster. A<br>
block doesn&#39;t need to contain transactions to do these things.<br>
<span class=""><br>
&gt;&nbsp; &nbsp; &nbsp; &nbsp; - 7168 byte bloom filter<br>
<br>
</span>FWIW, that&#39;s significantly larger than the amount of data typically<br>
needed to send the whole block using the fast block relay protocol.<br>
<br>
Your estimates are assuming the empty blocks come purely from<br>
transmission and verification, but because most verification is cached<br>
and transmission compressed -- they don&#39;t. There are numerous latency<br>
sources through the whole stack, some constant, some<br>
size-proportional... the mining without validation achieves its gains<br>
not from skipping validation (at least not most of the time); but<br>
mostly from short cutting a deep stack with many latency sources,<br>
including ones that have nothing to do with bitcoin core or the<br>
Bitcoin protocol.<br>
<br>
High hardware latency also amplifies short periods of empty block<br>
mining to longer periods.<br>
<br>
Perhaps most importantly, VFM mining avoids needing to identify and<br>
characterize these other delay sources; by short cutting right at the<br>
end, no one needs to even figure out that their pool server is<br>
performing a DNS request before every time it contacts their bitcoind<br>
RPC or whatnot.<br>
<span class=""><br>
&gt;&nbsp; &nbsp;&quot;Implement &quot;headers-first&quot; mining. As soon as a valid 80-byte block<br>
<br>
</span>This BIP draft resulted in me relieving some pretty vicious attacks<br>
from that community... funny.<br>
<span class=""><br>
&gt; Any thoughts? Pointers to the bitcointalk thread where this was proposed<br>
&gt; two years ago? :)<br>
<br>
</span>Relevant to your interests: <a href="https://github.com/bitcoin/bitcoin/pull/1586" rel="noreferrer" target="_blank">https://github.com/bitcoin/<wbr>bitcoin/pull/1586</a><br>
<br>
Lots of discussion on IRC.<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Anthony Towns</b> <span dir="ltr">&lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;</span><br>Date: Wed, Mar 2, 2016 at 9:55 PM<br>To: Gregory Maxwell &lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;<br></font><br></span><br><span class="">On Mon, Feb 29, 2016 at 03:20:01AM +0000, Gregory Maxwell wrote:<br>
&gt; On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt; wrote:<br>
&gt; &gt; Would it make sense to require some demonstration that you&#39;ve validated<br>
&gt; &gt; prior blocks? eg, you could demonstrate you&#39;ve done part of the work<br>
&gt; That information is easily shared/delegated...<br>
<br>
</span>Yeah, I thought about that. It&#39;s a tradeoff -- you definitely want the<br>
validation to be easily &quot;shared&quot; in the sense that you want one validation<br>
run to suffice for billions of mining attempts; and you probably want<br>
it to be easy to compute when you receive a block, so you don&#39;t have<br>
to revalidate the previous one to validate the new one... But you don&#39;t<br>
want it to be so easily shared that one person on the planet calculates<br>
it and everyone else just leeches from them.<br>
<span class=""><br>
&gt; so it just creates<br>
&gt; another centralized information source, and another source of<br>
&gt; unfairness producing latency in the mining process. Without actually<br>
&gt; preventing parties from mining. Doubly so in the context of how<br>
&gt; validationless mining is actually done; the miners pull from other<br>
&gt; miner&#39;s stratum servers; so they&#39;ll just see the commitments there.<br>
<br>
</span>I think you could make it hostile to accidental sharing by having it be:<br>
<br>
&nbsp; &lt;n&gt; ;<br>
&nbsp; sha256(<br>
&nbsp; &nbsp; &nbsp; sha256( current block&#39;s first &lt;n&gt;+1 coinbase outputs ;<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;previous block&#39;s nonce )<br>
&nbsp; &nbsp; &nbsp; sha256( previous block&#39;s sighash values )<br>
&nbsp; )<br>
<br>
If you skipped the internal sha256&#39;s (or just moved the nonce into the<br>
final sha256), you&#39;d be half-way forced to revalidate the previous block<br>
every time you found a new block, which might be worthwhile.<br>
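A sketch of computing that commitment with hashlib, following the structure quoted above (the byte serialization of outputs, nonce, and sighash values is invented here; the proposal doesn&#39;t pin down encodings):<br>

```python
# Sketch of the proposed validation commitment. The field encodings
# (how outputs, the nonce, and sighash values are serialized) are
# assumptions for illustration only.
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def validation_commitment(n, coinbase_outputs, prev_nonce, prev_sighashes):
    # sha256( sha256(first n+1 coinbase outputs ; previous block's nonce)
    #         sha256(previous block's sighash values) )
    outputs_part = sha256(b"".join(coinbase_outputs[:n + 1]) + prev_nonce)
    sighash_part = sha256(b"".join(prev_sighashes))
    return sha256(outputs_part + sighash_part)
```

Note the inner hashes are what make the scheme cacheable: a miner who has validated the previous block can keep `sighash_part` fixed and recompute only the coinbase side per template, which is the shortcut the last paragraph suggests removing.<br>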
<span class=""><br>
&gt; &gt; if you set the &quot;validated&quot; bit without including the demonstration of<br>
&gt; &gt; validation, your block is invalid.<br>
&gt; Pretty good incentive to not adopt the scheme, perhaps?<br>
<br>
</span>Well, my theory was once you have validated the block, then the<br>
demonstration is trivially easy to provide.<br>
<br>
I was thinking that you could add a positive incentive by making validated<br>
blocks count for something like 1.6x the chainwork for choosing which<br>
chain to build on; so if you have a chain with 3 unvalidated blocks in<br>
a row, then a chain with 2 validated blocks in a row instead would be<br>
preferred for building your next block.<br>
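A minimal sketch of that weighting, assuming every block otherwise contributes equal base work (the 1.6x factor and the 3-vs-2 example are from the paragraph above; everything else is illustrative):<br>

```python
# Weight validated blocks at 1.6x for fork choice, assuming each
# block otherwise contributes one unit of work.
VALIDATED_WEIGHT = 1.6

def chain_score(blocks):
    # blocks: list of booleans, True if mined with a validation
    # demonstration, False if mined validationlessly.
    return sum(VALIDATED_WEIGHT if v else 1.0 for v in blocks)

unvalidated_run = chain_score([False, False, False])  # 3 unvalidated blocks
validated_run = chain_score([True, True])             # 2 validated blocks
# The shorter-but-validated chain scores higher, as in the example above.
```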
<span class=""><br>
&gt; Moreover, this creates another way for a block to be invalid which has<br>
&gt; no compact fraud proof. :(<br>
<br>
</span>Hmmm. That&#39;s true. Is it true by definition though? If you&#39;re proving<br>
you&#39;ve validated 100% of a block, then is it even conceptually possible<br>
to check that proof with less work than validating 100% of a block?<br>
Sounds kind of SNARK-ish.<br>
<br>
Oh, don&#39;t SNARKs (theoretically) give you a compact fraud proof, provided<br>
the block size and sigops are bounded? The &quot;secret&quot; input is the block<br>
data, public input is the block hash and the supposed validation proof<br>
hash, program returns true if the block hash matches the block data,<br>
and the calculated validation hash doesn&#39;t match the supposed validation<br>
hash. Shudder to think how long generating the proof would take though,<br>
or how hard it&#39;d be to generate the circuit in the first place...<br>
<span class=""><br>
&gt; &gt; It occurred to me while emailing with Matt Corallo, that it&#39;s probably<br>
&gt; &gt; possible to make it easy to generate actually useful blocks while doing<br>
&gt; &gt; validationless mining, rather than only creating empty blocks.<br>
&gt; I agree but:<br>
&gt; I&#39;m basically tired of repeating to people that there is no need for a<br>
&gt; validationless block to be empty. So Yes, I agree with you on that<br>
&gt; fact; it&#39;s possible for miners to do this already, with no protocol<br>
&gt; changes (yes, it requires trusting each other but inherently<br>
&gt; validationless mining already requires that).<br>
<br>
</span>If you&#39;re only mining an empty block, the only way someone else can<br>
cause you to waste your time is by wasting their own time doing PoW on<br>
an invalid block. If you&#39;re mining a block with transactions in it, and<br>
they can mine a valid block, but trick you into mining something that<br>
double spends, then they can make you waste your time without wasting<br>
their own, which seems like a much worse attack to me.<br>
<br>
The advantage of the consensus enforced bloom filter is you don&#39;t have<br>
to trust anything more than that economic incentive. However if you just<br>
sent an unverifiable bloom filter, it&#39;d be trivial to trick you into<br>
mining an invalid block.<br>
<br>
(If you already have the 1MB of block data, then extracting the prevouts<br>
for use as a blacklist would probably be plenty fast though)<br>
<br>
(Of course, maybe 90% of current hashpower does trust each other<br>
anyway, in which case requiring trust isn&#39;t a burden, but that&#39;s not<br>
very decentralised...)<br>
<br>
(Paragraphs deleted. My maths is probably wrong, but I think it is<br>
actually economically rational to mine invalid blocks as chaff to distract<br>
validationless miners? The numbers I get are something like &quot;if 40% of<br>
the network is doing validationless mining for 20 seconds out of every<br>
10 minutes, then it&#39;s profitable to devote about 2% of your hashpower to<br>
mining invalid blocks&quot;. Probably some pretty dodgy assumptions though,<br>
so I&#39;m not including any algebra. But having actual invalid blocks with<br>
real proof of work appear in the wild seems like it&#39;d be a good way to<br>
encourage miners to do validation...)<br>
<span class=""><br>
&gt; Miners only don&#39;t bother<br>
&gt; right now because the funds left behind are insubstantial.<br>
<br>
</span>Hey, fees are almost 1% of the block payout these days -- that&#39;s within<br>
an order of magnitude of a rounding error!<br>
<span class=""><br>
&gt; It&#39;s absolutely untrue that an empty block is not useful.<br>
<br>
</span>Yeah, I deleted &quot;useless&quot; for that reason then put it back in anyway...<br>
<span class=""><br>
&gt; &gt;&nbsp; &nbsp; &nbsp; &nbsp; - 7168 byte bloom filter<br>
&gt; FWIW, that&#39;s significantly larger than the amount of data typically<br>
&gt; needed to send the whole block using the fast block relay protocol.<br>
<br>
</span>Really? Hmm, if you have 2-byte indexes into the most likely to be mined<br>
60k transactions, by 2000 transactions per block is about 4000 bytes. So<br>
I guess that makes sense. And weak blocks would make that generalisable<br>
and only add maybe a 32B index to include on the wire, presumably.<br>
<br>
It&#39;d only take a dozen missed transactions to be longer though.<br>
<span class=""><br>
&gt; Your estimates are assuming the empty blocks come purely from<br>
&gt; transmission and verification, but because most verification is cached<br>
&gt; and transmission compressed -- they don&#39;t. There are numerous latency<br>
&gt; sources through the whole stack, some constant, some<br>
&gt; size-proportional... the mining without validation achieves its gains<br>
&gt; not from skipping validation (at least not most of the time); but<br>
&gt; mostly from short cutting a deep stack with many latency sources,<br>
&gt; including ones that have nothing to do with bitcoin core or the<br>
&gt; Bitcoin protocol.<br>
<br>
</span>Hmm, so my assumption is the &quot;bitcoin core&quot; side of the stack looks<br>
something like:<br>
<br>
&nbsp; &nbsp;block header received by p2p or relay network<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;block data received by p2p or relay network<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;validation, UTXO set updates<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;getblocktemplate (possible tx ordering recalculation)<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;block header to do PoW on!<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;vary and push to miners over the network<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;push to ASICs<br>
<br>
and the validationless &quot;shortcut&quot; just looks like:<br>
<br>
&nbsp; &nbsp;block header received by p2p or relay network<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;hack hack<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;new block header to do PoW on!<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;vary and push to miners over the network<br>
&nbsp; &nbsp; &nbsp;|<br>
&nbsp; &nbsp; &nbsp;V<br>
&nbsp; &nbsp;push to ASICs<br>
<br>
and so making the bitcoin core parts able to provide an unvalidated<br>
header to push to miners/ASICs against &quot;instantly&quot; would be a win as<br>
far as getting bitcoin proper back into the loop all the time... That<br>
would mean removing validation from the critical path, and possibly more<br>
optimisation of getblocktemplate to make it effectively instant too. But<br>
those seem possible?<br>
<br>
Having it be:<br>
<br>
&nbsp; header received by bitcoin core<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; new block header to do (unverified) PoW on!<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; ...<br>
<br>
and<br>
<br>
&nbsp; header received by bitcoin core<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; block data received by bitcoin core<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; block data validated<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; new block header to do (verified) PoW on!<br>
&nbsp; &nbsp; |<br>
&nbsp; &nbsp; V<br>
&nbsp; ...<br>
<br>
with mining tools being able to just reliably and efficiently leave<br>
bitcoin core in the loop seems like it ought to be a win to me...<br>
<span class=""><br>
&gt; Perhaps most importantly, VFM mining avoids needing to identify and<br>
&gt; characterize these other delay sources, by short cutting right at the<br>
&gt; end no one needs to even figure out that their pool server is<br>
&gt; performing a DNS request before every time it contacts their bitcoind<br>
&gt; RPC or whatnot.<br>
<br>
</span>At least with longpoll, doing a DNS query before connection shouldn&#39;t<br>
matter?<br>
<span class=""><br>
&gt; &gt;&nbsp; &nbsp;&quot;Implement &quot;headers-first&quot; mining. As soon as a valid 80-byte block<br>
&gt; This BIP draft resulted in me relieving some pretty vicious attacks<br>
&gt; from that community... funny.<br>
<br>
</span>I&#39;m guessing you meant &quot;receiving&quot;, which makes that a kinda weird<br>
freudian slip? :) But yeah, technical consistency isn&#39;t something I&#39;ve<br>
seen much of from that area...<br>
<span class=""><br>
&gt; &gt; Any thoughts? Pointers to the bitcointalk thread where this was proposed<br>
&gt; &gt; two years ago? :)<br>
&gt; Relevant to your interests: <a href="https://github.com/bitcoin/bitcoin/pull/1586" rel="noreferrer" target="_blank">https://github.com/bitcoin/<wbr>bitcoin/pull/1586</a><br>
<br>
</span>Tsk, 2 != 4...<br>
<br>
Hmm, I&#39;m not sure where this leaves my opinion on either of those ideas.<br>
<br>
Cheers,<br>
aj<br>
<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Anthony Towns</b> <span dir="ltr">&lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;</span><br>Date: Sun, Mar 13, 2016 at 3:58 AM<br>To: Gregory Maxwell &lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;<br></font><br></span><br><span class="">On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:<br>
&gt; &gt; &gt;&nbsp; &nbsp; &nbsp; &nbsp; - 7168 byte bloom filter<br>
&gt; &gt; FWIW, that&#39;s significantly larger than the amount of data typically<br>
&gt; &gt; needed to send the whole block using the fast block relay protocol.<br>
&gt; Really? Hmm, if you have 2-byte indexes into the most likely to be mined<br>
&gt; 60k transactions, by 2000 transactions per block is about 4000 bytes. So<br>
&gt; I guess that makes sense. And weak blocks would make that generalisable<br>
&gt; and only add maybe a 32B index to include on the wire, presumably.<br>
&gt; It&#39;d only take a dozen missed transactions to be longer though.<br>
<br>
</span>So I think there&#39;s two levels of withholding adversarial miners could<br>
do:<br>
<br>
&nbsp;- block withholding, so they have more time to build on top of their<br>
&nbsp; &nbsp;own block, maybe increasing their effective hashrate if they have<br>
&nbsp; &nbsp;above average connectivity<br>
<br>
&nbsp;- transaction withholding, so an entire block can be invalidated<br>
&nbsp; &nbsp;after the fact, hitting SPV nodes. if there are SPV miners, this can<br>
&nbsp; &nbsp;invalidate their work (potentially profitably, if you&#39;ve accidentally<br>
&nbsp; &nbsp;orphaned yourself)<br>
<br>
You could solve transaction withholding for miners just by saying<br>
&quot;a PoW isn&#39;t valid unless the merkle tree is valid&quot;, that way you<br>
can&#39;t retroactively invalidate a block, but then you need fast relay<br>
before starting to mine, not just the header and some hint as to what<br>
transactions might be included, and therefore the bloom filter idea<br>
is pointless...<br>
<br>
<br>
Having actually tried the relay network now, it seems like:<br>
<br>
&nbsp;a) it gets less coding gain than it theoretically could; the day or<br>
&nbsp; &nbsp; so&#39;s worth of blocks from Lightsword only seemed to be ~8x less data,<br>
&nbsp; &nbsp; rather than ~125x-250x, and what I&#39;m seeing seems similar. So still<br>
&nbsp; &nbsp; room for improvement?<br>
<br>
&nbsp;b) using &quot;weak blocks&quot; as a way of paying for adding &quot;non-standard&quot;<br>
&nbsp; &nbsp; transactions (large, low fee, actually non-standard, etc) to the<br>
&nbsp; &nbsp; mempool seems workable to me; so long as the only reason you&#39;re doing<br>
&nbsp; &nbsp; weak blocks is so miners can ensure the transactions they&#39;re mining<br>
&nbsp; &nbsp; are in mempools, and thus that their blocks will relay quickly, the<br>
&nbsp; &nbsp; incentives seem properly aligned. (I think you&#39;d want to distinguish<br>
&nbsp; &nbsp; txns only relayed because they have a weak block, just to be nice to<br>
&nbsp; &nbsp; SPV clients -- weak block txns might only be mined by one miner, while<br>
&nbsp; &nbsp; standard, fee paying transactions are being mined by all/most miners)<br>
<br>
&nbsp;c) it seems like it would be possible to adapt the relay protocol into<br>
&nbsp; &nbsp; a p2p environment to me? I&#39;m thinking that you provide a bidirectional<br>
&nbsp; &nbsp; mapping for (a subset of) your mempool for each connection you<br>
&nbsp; &nbsp; have, so that you can quickly go to/from a 2-byte index to a<br>
&nbsp; &nbsp; transaction. If you make it so that whoever was listening gets to<br>
&nbsp; &nbsp; decide what transactions are okay, then you&#39;d just need 9 of these<br>
&nbsp; &nbsp; maps -- 1 for each of your outgoing connections (ie, 8 total), plus<br>
&nbsp; &nbsp; another 1 that covers all your incoming connections, and each map<br>
&nbsp; &nbsp; should only really need to use up to about a meg of memory, which<br>
&nbsp; &nbsp; seems pretty feasible. Maybe it means up to 8x5MB of your mempool<br>
&nbsp; &nbsp; is controlled by other people&#39;s policies rather than your own,<br>
&nbsp; &nbsp; but that doesn&#39;t seem too bad either.<br>
<br>
&nbsp;d) I&#39;m a bit confused how it compares to IBLT; it seems like IBLT has<br>
&nbsp; &nbsp; really strong ordering requirements to work correctly, but if you<br>
&nbsp; &nbsp; had that you could compress the fast relay protocol really well,<br>
&nbsp; &nbsp; since you could apply the same ordering to your shared mempool, and<br>
&nbsp; &nbsp; then just send &quot;next tx, next tx, skip 1 tx, next tx, next tx, skip<br>
&nbsp; &nbsp; 3 tx, next tx, here&#39;s one you missed, ...&quot;, which with compression<br>
&nbsp; &nbsp; would probably get you to just a few /bits/ per (previously seen)<br>
&nbsp; &nbsp; transaction...&nbsp; [0] [1]<br>
<br>
&nbsp;e) for p2p relay, maybe it would make sense to have the protocol only<br>
&nbsp; &nbsp; allow sending blocks where all the transactions are &quot;previously<br>
&nbsp; &nbsp; seen&quot;. that way if you get a block where some txes haven&#39;t been<br>
&nbsp; &nbsp; seen before, you stall that block, and start sending transactions<br>
&nbsp; &nbsp; through. if another block comes in in the meantime, that doesn&#39;t<br>
&nbsp; &nbsp; have any new transactions, you send that block through straight away.<br>
&nbsp; &nbsp; that encourages sending weak blocks through first, to ensure your<br>
&nbsp; &nbsp; transactions are already in mempools and no one else can sneak<br>
&nbsp; &nbsp; in first.<br>
<br>
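The &quot;next tx / skip n&quot; scheme from (d) can be sketched as a simple skip-count delta over a shared ordering (illustrative only; transactions missing from the shared ordering would need separate handling):<br>

```python
def encode_skips(shared_order, block_txids):
    """Encode a block as skip counts over a shared mempool ordering:
    skip[i] transactions are passed over before taking the next one.
    Assumes every block txid appears in shared_order."""
    want = set(block_txids)
    skips, run = [], 0
    for tx in shared_order:
        if tx in want:
            skips.append(run)
            run = 0
        else:
            run += 1
    return skips

def decode_skips(shared_order, skips):
    """Recover the block's transactions from the skip counts."""
    out, pos = [], 0
    for skip in skips:
        pos += skip
        out.append(shared_order[pos])
        pos += 1
    return out
```

With well-synchronised mempools most skips are 0, which is why entropy coding could plausibly get this down to a few bits per previously seen transaction, as suggested in (d).<br>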
Hmm... So that all seems kind of plausible to me; in how many ways am I<br>
mistaken? :)<br>
<br>
Cheers,<br>
aj<br>
<br>
[0] A hard-fork change to have the block merkle tree be ordered by txid,<br>
&nbsp; &nbsp; and have the transactions topologically sorted before being validated<br>
&nbsp; &nbsp; would be kind-of interesting here -- apart from making sorting<br>
&nbsp; &nbsp; obvious, it&#39;d make it easy to prove that a block doesn&#39;t contain a<br>
&nbsp; &nbsp; transaction. Bit altcoin-y though...<br>
<br>
[1] Maybe having the shared mempool indexes be sorted rather than FIFO<br>
&nbsp; &nbsp; would make the data structures hard; I don&#39;t think so, but not sure.<br>
<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Gregory Maxwell</b> <span dir="ltr">&lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;</span><br>Date: Sun, Mar 13, 2016 at 5:06 AM<br>To: Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;<br></font><br></span><br><span class="">On Sun, Mar 13, 2016 at 3:58 AM, Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt; wrote:<br>
&gt; On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:<br>
&gt;&gt; &gt; &gt;&nbsp; &nbsp; &nbsp; &nbsp; - 7168 byte bloom filter<br>
&gt;&gt; &gt; FWIW, that&#39;s significantly larger than the amount of data typically<br>
&gt;&gt; &gt; needed to send the whole block using the fast block relay protocol.<br>
&gt;&gt; Really? Hmm, if you have 2-byte indexes into the most likely to be mined<br>
&gt;&gt; 60k transactions, by 2000 transactions per block is about 4000 bytes. So<br>
&gt;&gt; I guess that makes sense. And weak blocks would make that generalisable<br>
&gt;&gt; and only add maybe a 32B index to include on the wire, presumably.<br>
&gt;&gt; It&#39;d only take a dozen missed transactions to be longer though.<br>
&gt;<br>
&gt; So I think there&#39;s two levels of withholding adversarial miners could<br>
&gt; do:<br>
&gt;<br>
&gt;&nbsp; - block withholding, so they have more time to build on top of their<br>
&gt;&nbsp; &nbsp; own block, maybe increasing their effective hashrate if they have<br>
&gt;&nbsp; &nbsp; above average connectivity<br>
<br>
</span>Also called &quot;selfish mining&quot;.<br>
<span class=""><br>
&gt;&nbsp; - transaction withholding, so an entire block can be invalidated<br>
&gt;&nbsp; &nbsp; after the fact, hitting SPV nodes. if there are SPV miners, this can<br>
&gt;&nbsp; &nbsp; invalidate their work (potentially profitably, if you&#39;ve accidentally<br>
&gt;&nbsp; &nbsp; orphaned yourself)<br>
&gt; You could solve transaction withholding for miners just by saying<br>
&gt; &quot;a PoW isn&#39;t valid unless the merkle tree is valid&quot;, that way you<br>
&gt; can&#39;t retroactively invalidate a block, but then you need fast relay<br>
&gt; before starting to mine, not just the header and some hint as to what<br>
&gt; transactions might be included, and therefore the bloom filter idea<br>
&gt; is pointless...<br>
<br>
</span>Right, this is how Bitcoin Core works (won&#39;t extend a chain it hasn&#39;t<br>
validated) -- but some miners have shortcutted it to reduce latency.<br>
(And not just bypassing validation, but the whole process, e.g.<br>
transaction selection; which historically has taken more time than<br>
propagation).<br>
<span class=""><br>
&gt; Having actually tried the relay network now, it seems like:<br>
&gt;<br>
&gt;&nbsp; a) it gets less coding gain than it theoretically could; the day or<br>
&gt;&nbsp; &nbsp; &nbsp;so&#39;s worth of blocks from Lightsword only seemed to be ~8x less data,<br>
&gt;&nbsp; &nbsp; &nbsp;rather than ~125x-250x, and what I&#39;m seeing seems similar. So still<br>
&gt;&nbsp; &nbsp; &nbsp;room for improvement?<br>
<br>
</span>It&#39;s pretty variable. It depends a lot on consistency between the<br>
transactions the server side selects and the client. When spam attacks<br>
go on, or miners change their policy, compression falls off until the<br>
far end twiddles.<br>
<br>
Go look at the distribution of the results.<br>
<span class=""><br>
&gt;&nbsp; c) it seems like it would be possible to adapt the relay protocol into<br>
&gt;&nbsp; &nbsp; &nbsp;a p2p environment to me? I&#39;m thinking that you provide a bidirectional<br>
&gt;&nbsp; &nbsp; &nbsp;mapping for (a subset of) your mempool for each connection you<br>
&gt;&nbsp; &nbsp; &nbsp;have, so that you can quickly go to/from a 2-byte index to a<br>
&gt;&nbsp; &nbsp; &nbsp;transaction. If you make it so that whoever was listening gets to<br>
&gt;&nbsp; &nbsp; &nbsp;decide what transactions are okay, then you&#39;d just need 9 of these<br>
&gt;&nbsp; &nbsp; &nbsp;maps -- 1 for each of your outgoing connections (ie, 8 total), plus<br>
&gt;&nbsp; &nbsp; &nbsp;another 1 that covers all your incoming connections, and each map<br>
&gt;&nbsp; &nbsp; &nbsp;should only really need to use up to about a meg of memory, which<br>
&gt;&nbsp; &nbsp; &nbsp;seems pretty feasible.&nbsp; Maybe it means up to 8x5MB of your mempool<br>
&gt;&nbsp; &nbsp; &nbsp;is controlled by other people&#39;s policies rather than your own,<br>
&gt;&nbsp; &nbsp; &nbsp;but that doesn&#39;t seem too bad either.<br>
<br>
</span>That is a bit kludgy, but yes -- it would work.<br>
<br>
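The per-connection bidirectional mapping described in (c) might look something like this sketch (names are invented; eviction of old entries and negotiating which transactions qualify are glossed over):<br>

```python
class IndexMap:
    """Bidirectional txid <-> 2-byte index map for one peer connection.

    The listening side assigns indexes in arrival order, so both ends
    can agree on the mapping without negotiation. Capacity is bounded
    by the 2-byte index space (65536 entries)."""
    MAX = 1 << 16

    def __init__(self):
        self.by_index = {}   # index -> txid
        self.by_txid = {}    # txid -> index
        self.next_index = 0

    def add(self, txid):
        if txid in self.by_txid:
            return self.by_txid[txid]
        if len(self.by_txid) >= self.MAX:
            raise OverflowError("index space exhausted; needs an eviction policy")
        idx = self.next_index
        self.next_index += 1
        self.by_index[idx] = txid
        self.by_txid[txid] = idx
        return idx

    def encode_block(self, txids):
        # 2 bytes per previously seen transaction, as in the estimate above.
        return b"".join(self.by_txid[t].to_bytes(2, "big") for t in txids)

    def decode_block(self, data):
        return [self.by_index[int.from_bytes(data[i:i + 2], "big")]
                for i in range(0, len(data), 2)]
```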
But the key thing about latency minimization is that you _must_ send a<br>
block with no request; because otherwise the RTT for just the request<br>
alone will totally dominate the transfer in most cases. And having N<br>
peers send you the whole block redundantly ends up hurting your<br>
performance (esp because packet losses mean more round trips) even if<br>
the compression is very high.<br>
<br>
All these problems can be avoided; at least in theory. Optimal latency<br>
mitigation would be achieved by something like block network coding<br>
techniques:<br>
<br>
<a href="https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding" rel="noreferrer" target="_blank">https://en.bitcoin.it/wiki/<wbr>User:Gmaxwell/block_network_<wbr>coding</a><br>
<br>
With these techniques peers could blindly send you data without you<br>
requesting it, while every byte they send would usefully contribute to<br>
your reconstruction. With extra effort and opportunistic forwarding<br>
the entire network could, in theory, receive a block in the time it<br>
took the original host to send only one block, while making use of a<br>
significant fraction of the network&#39;s whole bisection bandwidth.<br>
<span class=""><br>
&gt;&nbsp; d) I&#39;m a bit confused how it compares to IBLT; it seems like IBLT has<br>
&gt;&nbsp; &nbsp; &nbsp;really strong ordering requirements to work correctly, but if you<br>
&gt;&nbsp; &nbsp; &nbsp;had that you could compress the fast relay protocol really well,<br>
&gt;&nbsp; &nbsp; &nbsp;since you could apply the same ordering to your shared mempool, and<br>
&gt;&nbsp; &nbsp; &nbsp;then just send &quot;next tx, next tx, skip 1 tx, next tx, next tx, skip<br>
&gt;&nbsp; &nbsp; &nbsp;3 tx, next tx, here&#39;s one you missed, ...&quot;, which with compression<br>
&gt;&nbsp; &nbsp; &nbsp;would probably get you to just a few /bits/ per (previously seen)<br>
&gt;&nbsp; &nbsp; &nbsp;transaction...&nbsp; [0] [1]<br>
<br>
</span>Latency of block relay easily ends up CPU bound; even when not doing<br>
anything too smart (this is why Matt&#39;s relay protocol stuff has AVX<br>
sha2 code in it). Prior IBLT implementation attempts have performance<br>
so low that their decode time ends up dwarfing transmission time, and<br>
plain uncoded blocks are faster for common host/bandwidth<br>
configurations.<br>
<br>
The ordering requirements stuff is not that relevant in my view; you<br>
likely believe this because Gavin rat-holed himself on it trying to<br>
spec out ordering requirements for miners... The reality of it is<br>
that a uniform permutation of, say, 4000 transactions can be stored in<br>
log2(4000!)/8 bytes, or about 5.2kbytes (and this is easily achieved<br>
just by using range coding to optimally pack integers in the range<br>
[0..n_unpicked_txn) to pick transactions out of a lexicographically<br>
sorted list) ... and this is without any prediction at all -- randomly<br>
ordered txn in the block would work just as well.<br>
<br>
[E.g. using the uint coder from the daala video codec project can code<br>
these values with about 1% overhead, and runs at about 12MB/sec doing<br>
so on my slow laptop]<br>
<br>
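Both the log2(4000!)/8 figure and the [0..n_unpicked_txn) integers are easy to reproduce (a sketch; `permutation_indices` is an illustrative name, not anything from an actual codec):<br>

```python
# Reproducing the numbers above: storing a uniform permutation of n
# transactions takes log2(n!) bits, and the values one would range
# code are each in [0..n_unpicked_txn).
import math

n = 4000
bits = math.lgamma(n + 1) / math.log(2)  # log2(4000!) via ln-gamma
kbytes = bits / 8 / 1000                 # ~5.26, i.e. "about 5.2kbytes"

def permutation_indices(block_order, txids):
    """For each slot, the index of the chosen txid among the
    not-yet-picked txids, drawn from a lexicographically sorted list."""
    remaining = sorted(txids)
    out = []
    for txid in block_order:
        i = remaining.index(txid)
        out.append(i)
        remaining.pop(i)
    return out
```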
Recently some folks have been working privately on a block network<br>
coding implementation... earlier attempts (even before IBLT became<br>
trendy) were thwarted by the same thing that thwarts IBLT: the<br>
decoding was so slow it dominated the latency. We&#39;ve found some faster<=
br>
coding schemes though... so it looks like it might be practical now. I<br>
could send you more info if you read the block network coding page and<br>
are interested in helping.<br>
<br>
Both IBLT and BNC would be more useful in the weakblocks model<br>
because there the decode speed isn&#39;t latency critical-- so if it needs<br>
100ms of cpu time to decode an efficiently encoded block, that is no<br>
big deal.<br>
<span class=""><br>
&gt;&nbsp; e) for p2p relay, maybe it would make sense to have the protocol only<br>
&gt;&nbsp; &nbsp; &nbsp;allow sending blocks where all the transactions are &quot;previously<br>
&gt;&nbsp; &nbsp; &nbsp;seen&quot;. that way if you get a block where some txes haven&#39;t been<br>
&gt;&nbsp; &nbsp; &nbsp;seen before, you stall that block, and start sending transactions<br>
&gt;&nbsp; &nbsp; &nbsp;through. if another block comes in in the meantime, that doesn&#39;t<br>
&gt;&nbsp; &nbsp; &nbsp;have any new transactions, you send that block through straight away.<br>
&gt;&nbsp; &nbsp; &nbsp;that encourages sending weak blocks through first, to ensure your<br>
&gt;&nbsp; &nbsp; &nbsp;transactions are already in mempools and no one else can sneak<br>
&gt;&nbsp; &nbsp; &nbsp;in first.<br>
<br>
</span>Yes, it&#39;s perfectly reasonable to do that for bandwidth minimization--<br>
though it doesn&#39;t minimize latency. &quot;Seen&quot; is complex, you have no<br>
guarantee a peer will accept any transaction you&#39;ve sent it, or even<br>
that it will retain any it sent you. So multiple round trips are<br>
required to resolve missing transactions.<br>
<br>
We haven&#39;t bothered implementing this historically because the<br>
bandwidth reduction is small overall, and it&#39;s not the right strategy<br>
for reducing latency-- the vast majority of bandwidth is eaten by<br>
relay. Right now maybe 15% is used by blocks... so at most you&#39;d get a<br>
15% improvement here.<br>
<br>
I did some fairly informal measurements and posted about it:<br>
<a href="https://bitcointalk.org/index.php?topic=1377345.0" rel="noreferrer" target="_blank">https://bitcointalk.org/index.<wbr>php?topic=1377345.0</a><br>
<br>
I also point out there that the existing blocksonly mode achieves<br>
bandwidth optimal transport already (ignoring things like transaction<br>
format compression)... just so long as you don&#39;t care about learning<br>
about unconfirmed transactions. :)<br>
<span class=""><br>
&gt; [0] A hard-fork change to have the block merkle tree be ordered by txid,<br>
&gt;&nbsp; &nbsp; &nbsp;and have the transactions topologically sorted before being validated<br>
&gt;&nbsp; &nbsp; &nbsp;would be kind-of interesting here -- apart from making sorting<br>
&gt;&nbsp; &nbsp; &nbsp;obvious, it&#39;d make it easy to prove that a block doesn&#39;t contain a<br>
&gt;&nbsp; &nbsp; &nbsp;transaction. Bit altcoin-y though...<br>
<br>
</span>If you sort by data (or ID) without requiring the verifier to<br>
topologically sort then an efficient permutation coder would only<br>
spend bits on places where dependencies push things out of the<br>
expected order... which is fairly rare.<br>
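To make that concrete: if the verifier expects txid-sorted order, a permutation coder only pays for positions where the pick is not the smallest remaining id. A toy sketch (hypothetical txids; the function name is illustrative) of measuring those deviations:

```python
def deviation_indices(block_order):
    """For each tx in block order, the index it would be coded as when
    picking from the sorted list of unpicked txids; 0 = nearly free."""
    remaining = sorted(block_order)
    out = []
    for tx in block_order:
        i = remaining.index(tx)
        out.append(i)
        remaining.remove(tx)
    return out

# hypothetical txids where 'b' spends 'd', forcing 'd' earlier:
print(deviation_indices(['a', 'c', 'd', 'b', 'e']))  # [0, 1, 1, 0, 0]
```

An adaptive coder spends almost nothing on the zeros, so only the dependency-forced nonzero picks cost real bits.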
<br>
Seems like a reasonable cost for avoiding the hardfork, no? The<br>
receiver topo sort requirement would also require more memory in a<br>
block verifier; and would be more complex to fraud proof, I think.<br>
<br>
Engineering wise it&#39;s not quite so simple. It&#39;s helpful for miners to<br>
have blocks sorted by feerate so that later stages of the mining<br>
process can drop the least profitable transactions simply by<br>
truncating the block.<br>
<span class=""><br>
&gt; [1] Maybe having the shared mempool indexes be sorted rather than FIFO<br>
&gt;&nbsp; &nbsp; &nbsp;would make the data structures hard; I don&#39;t think so, but not sure.<br>
<br>
</span>I tried to get Matt to do that for his stuff previously; pointing out<br>
the sorted indexes would be easier to efficiently code. His<br>
counterargument was that for 2000 txn, the two-byte indexes take 4kb,<br>
which is pretty insignificant... and that his time would be better<br>
spent trying to get the hit-rate up. I found that hard to argue with.<br>
:)<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Anthony Towns</b> <span dir="ltr">&lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;</span><br>Date: Mon, Mar 14, 2016 at 3:08 AM<br>To: Gregory Maxwell &lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;<br></font><br></span><br><span class="">On Sun, Mar 13, 2016 at 05:06:25AM +0000, Gregory Maxwell wrote:<br>
&gt; &gt;&nbsp; - block withholding, so they have more time to build on top of their<br>
&gt; &gt;&nbsp; &nbsp; own block, maybe increasing their effective hashrate if they have<br>
&gt; &gt;&nbsp; &nbsp; above average connectivity<br>
&gt; Also called &quot;selfish mining&quot;.<br>
<br>
</span>Yup.<br>
<span class=""><br>
&gt; &gt;&nbsp; c) it seems like it would be possible to adapt the relay protocol into<br>
</span>&gt; &gt;&nbsp; &nbsp; &nbsp;a p2p environment to me? [...]<br>
<span class="">&gt; That is a bit kludgy, but yes-- it would work.<br>
&gt; But the key thing about latency minimization is that you _must_ send a<br>
&gt; block with no request; because otherwise the RTT for just the request<br>
&gt; alone will totally dominate the transfer in most cases. And having N<br>
&gt; peers send you the whole block redundantly ends up hurting your<br>
&gt; performance (esp because packet losses mean more round trips) even if<br>
&gt; the compression is very high.<br>
<br>
</span>If the block can be encoded fully, then it&#39;s up to maybe 10kB per block<br>
max (at 1MB blocksize); I don&#39;t think multiple transmissions matter much<br>
in that case? Hmm, maybe it does...<br>
<span class=""><br>
&gt; All these problems can be avoided; at least in theory. Optimal latency<br>
&gt; mitigation would be achieved by something like block network coding<br>
&gt; techniques:<br>
&gt; <a href="https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding" rel="noreferrer" target="_blank">https://en.bitcoin.it/wiki/<wbr>User:Gmaxwell/block_network_<wbr>coding</a><br>
<br>
</span>Ugh, patents. Interesting that the patents on turbo codes have expired,<br>
last time I looked they hadn&#39;t.<br>
<span class=""><br>
&gt; With these techniques peers could blindly send you data without you<br>
&gt; requesting it, while every byte they send would usefully contribute to<br>
&gt; your reconstruction.<br>
<br>
</span>Yeah, that makes sense I think. Pretty complicated though. The &quot;someone<br>
sent corrupt data&quot; seems a little bit problematic to deal with too,<br>
especially in the &quot;optimistically forward stuff before you can validate<br>
it&quot; phase. At least if you&#39;re using error correcting codes anyway,<br>
that&#39;s probably a self-solving problem.<br>
<br>
What&#39;s with the switch from 32 bit faux ids in the original section<br>
to 63 bits in the reimagination? I guess you use most of that for the<br>
additional encoded length though...<br>
<br>
Keying with the previous block&#39;s hash seems kind-of painful, doesn&#39;t it?<br>
Once you receive the ids, you want to look up the actual transactions<br>
from your mempool, but since you can&#39;t decrypt anything useful with<br>
only the first 50/60 bits of cyphertext, the only way to do that is<br>
to have already cycled through all the transactions in your mempool<br>
and pre-calculated what their network-coded id for that block is, and<br>
you have to do that every time you receive a block (including orphans,<br>
I guess). It&#39;d make reorgs more expensive too, because you&#39;d have to<br>
reindex all the mempool then as well?<br>
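That per-block precomputation pass might look something like this sketch. Assumptions are loud here: the thread doesn't fix a concrete keyed-id function, so keyed BLAKE2b truncated to 8 bytes is improvised as a stand-in, and `index_mempool` is a made-up name:

```python
import hashlib

def short_id(key, txid):
    # hypothetical keyed ~64-bit id; key derived from the previous
    # block's hash (stand-in construction, not the wiki page's scheme)
    return hashlib.blake2b(txid, key=key[:32], digest_size=8).digest()

def index_mempool(prev_block_hash, mempool_txids):
    """The pass AJ describes: cycle through the whole mempool once per
    block, building short-id -> txid so received ids can be resolved."""
    return {short_id(prev_block_hash, t): t for t in mempool_txids}
```

On a reorg the key changes, so the whole table has to be rebuilt, which is exactly the cost being complained about.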
<br>
Maybe if you&#39;re only doing that predictively it&#39;s not so bad? The 5MB-20MB<br>
of txes with highest fees get coded up, and you just download any other<br>
transactions in full? If you&#39;re downloading large coinbase txes regularly<br>
anyway, that&#39;s probably no big deal.<br>
<span class=""><br>
&gt; Latency of block relay easily ends up CPU bound; even when not doing<br>
&gt; anything too smart (this is why Matt&#39;s relay protocol stuff has AVX<br>
&gt; sha2 code in it).<br>
<br>
</span>Yeah, that seemed a little odd to me; there shouldn&#39;t be that much<br>
hashing to validate a block (1MB of transactions, then maybe 128kB to<br>
get to sha256d, then another 2*128kB for the rest of the merkle tree?).<br>
Matt&#39;s code seems like it&#39;s doing a linear search through the tx index<br>
to find each tx though, which probably doesn&#39;t help.<br>
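That back-of-envelope can be counted in SHA-256 compression-function calls; here is a sketch under assumed block parameters (4000 txs averaging 250 bytes; the numbers are illustrative, not measured):

```python
def compressions(msg_len):
    """64-byte SHA-256 blocks for a message (padding adds >= 9 bytes)."""
    return (msg_len + 9 + 63) // 64

def block_validation_hashing(n_tx=4000, avg_tx=250):
    # txids: sha256d over each tx (hash the tx, then the 32-byte digest)
    txid_work = n_tx * (compressions(avg_tx) + compressions(32))
    # merkle tree: ~n_tx - 1 internal nodes, each sha256d over 64 bytes
    merkle_work = (n_tx - 1) * (compressions(64) + compressions(32))
    return txid_work, merkle_work

print(block_validation_hashing())
```

A few tens of thousands of compression calls is low single-digit milliseconds on ordinary hardware, which supports the "shouldn't be that much hashing" point.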
<span class=""><br>
&gt; Prior IBLT implementation attempts have performance<br>
&gt; so low that their decode time ends up dwarfing transmission time, and<br>
&gt; plain uncoded blocks are faster for common host/bandwidth<br>
&gt; configurations.<br>
<br>
</span>Heh.<br>
<span class=""><br>
&gt; The ordering requirements stuff is not that relevant in my view; you<br>
&gt; likely believe this because Gavin rat-holed himself on it trying to<br>
&gt; spec out ordering requirements for miners... The reality of it is<br>
&gt; that a uniform permutation of, say, 4000 transactions can be stored in<br>
&gt; log2(4000!)/8 bytes, or about 5.2kbytes<br>
<br>
</span>Right, but 5.2 kB is a lot of overhead; at least compared to the cases<br>
where Matt&#39;s stuff already works well :)<br>
<span class=""><br>
&gt; Recently some folks have been working privately on a block network<br>
&gt; coding implementation... earlier attempts (even before IBLT became<br>
&gt; trendy) were thwarted by the same thing that thwarts IBLT: the<br>
&gt; decoding was so slow it dominated the latency. We&#39;ve found some faster<br>
&gt; coding schemes though... so it looks like it might be practical now. I<br>
&gt; could send you more info if you read the block network coding page and<br>
&gt; are interested in helping.<br>
<br>
</span>Sure. (Though, fair warning, I&#39;ve already failed a few times at doing<br>
anything useful with erasure coding...)<br>
<span class=""><br>
&gt; &gt;&nbsp; e) for p2p relay, maybe it would make sense to have the protocol only<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;allow sending blocks where all the transactions are &quot;previously<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;seen&quot;. that way if you get a block where some txes haven&#39;t been<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;seen before, you stall that block, and start sending transactions<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;through. if another block comes in in the meantime, that doesn&#39;t<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;have any new transactions, you send that block through straight away.<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;that encourages sending weak blocks through first, to ensure your<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;transactions are already in mempools and no one else can sneak<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;in first.<br>
&gt; Yes, it&#39;s perfectly reasonable to do that for bandwidth minimization--<br>
&gt; though it doesn&#39;t minimize latency. &quot;Seen&quot; is complex, you have no<br>
&gt; guarantee a peer will accept any transaction you&#39;ve sent it, or even<br>
&gt; that it will retain any it sent you. So multiple round trips are<br>
&gt; required to resolve missing transactions.<br>
<br>
</span>The &quot;p2p relay&quot; in my head has &quot;seen&quot; meaning &quot;the 5MB of transactions<br>
the listening peer thinks is most likely to be mined&quot;; odds are both<br>
peers have actually seen something like 145MB of additional transactions<br>
too. You don&#39;t do round trips; you just start sending the &quot;unseen&quot;<br>
transactions automatically (by id or in full?), then you send the<br>
compressed block. The only round trip is if you sent the id, but they<br>
actually needed the full tx.<br>
<br>
In my head, you get good latency if you do weak blocks beforehand,<br>
and somewhat poorer latency if you don&#39;t. Even in my head, I&#39;m not sure<br>
that&#39;s actually feasible, though: I&#39;m not sure weak blocks for coinbase<br>
transactions really work, and comparatively high latency on 5% of blocks<br>
that didn&#39;t get any weak blocks beforehand isn&#39;t very attractive...<br>
<span class=""><br>
&gt; We haven&#39;t bothered implementing this historically because the<br>
&gt; bandwidth reduction is small overall, and it&#39;s not the right strategy<br>
&gt; for reducing latency-- the vast majority of bandwidth is eaten by<br>
&gt; relay. Right now maybe 15% is used by blocks... so at most you&#39;d get a<br>
&gt; 15% improvement here.<br>
<br>
</span>Yeah, I&#39;m assuming a non-trivial increase in bandwidth usage compared<br>
to current relay. Compared to relaying spam transactions (that don&#39;t<br>
get mined prior to expiry), not sure it&#39;s significant though.<br>
<span class=""><br>
&gt; &gt; [0] A hard-fork change to have the block merkle tree be ordered by txid,<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;and have the transactions topologically sorted before being validated<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;would be kind-of interesting here -- apart from making sorting<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;obvious, it&#39;d make it easy to prove that a block doesn&#39;t contain a<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;transaction. Bit altcoin-y though...<br>
&gt; If you sort by data (or ID) without requiring the verifier to<br>
&gt; topologically sort then an efficient permutation coder would only<br>
&gt; spend bits on places where dependencies push things out of the<br>
&gt; expected order... which is fairly rare.<br>
<br>
</span>Really? I was seeing a lot of transaction chains in the couple of blocks I<br>
looked at. Also, you wouldn&#39;t get short proofs that a transaction isn&#39;t<br>
present in a block that way either afaics.<br>
<span class=""><br>
&gt; Seems like a reasonable cost for avoiding the hardfork, no? The<br>
&gt; receiver topo sort requirement would also require more memory in a<br>
&gt; block verifier; and would be more complex to fraud proof, I think.<br>
<br>
</span>Hmm, I think it&#39;d be easy to fraud proof -- just show adjacent merkle<br>
paths where the results are in the wrong order. Maybe the same&#39;s true<br>
with the id-order-but-toposorted too -- just show adjacent merkle paths<br>
where the results are in the wrong order, and the latter doesn&#39;t depend<br>
on the former. I&#39;m not sure that gives a unique sort though (but maybe<br>
that doesn&#39;t actually matter).<br>
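A sketch of that adjacent-merkle-path fraud proof (toy sha256d merkle tree, power-of-two leaf count assumed for simplicity; all names are illustrative):

```python
import hashlib

def h(x):
    # sha256d, as used for txids and merkle nodes
    return hashlib.sha256(hashlib.sha256(x).digest()).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    """Sibling hashes from leaf idx up to the root."""
    level, path = [h(l) for l in leaves], []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_path(leaf, idx, path, root):
    node = h(leaf)
    for sib in path:
        node = h(sib + node) if idx & 1 else h(node + sib)
        idx //= 2
    return node == root

def adjacent_order_fraud(root, i, leaf_a, path_a, leaf_b, path_b):
    """Fraud proof sketch: leaves i and i+1 both prove into root,
    yet violate the required sorted order."""
    return (verify_path(leaf_a, i, path_a, root)
            and verify_path(leaf_b, i + 1, path_b, root)
            and leaf_a >= leaf_b)
```

The proof is two merkle paths, so O(log n) hashes; the id-order-but-toposorted variant would additionally have to show the latter leaf doesn't spend the former, which is what makes it messier.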
<span class=""><br>
&gt; Engineering wise it&#39;s not quite so simple. It&#39;s helpful for miners to<br>
&gt; have blocks sorted by feerate so that later stages of the mining<br>
&gt; process can drop the least profitable transactions simply by<br>
&gt; truncating the block.<br>
<br>
</span>Yeah; not having ordering requirements seems far more practical.<br>
<span class=""><br>
&gt; &gt; [1] Maybe having the shared mempool indexes be sorted rather than FIFO<br>
&gt; &gt;&nbsp; &nbsp; &nbsp;would make the data structures hard; I don&#39;t think so, but not sure.<br>
&gt; I tried to get Matt to do that for his stuff previously; pointing out<br>
&gt; the sorted indexes would be easier to efficiently code. His<br>
&gt; counterargument was that for 2000 txn, the two-byte indexes take 4kb,<br>
&gt; which is pretty insignificant... and that his time would be better<br>
&gt; spent trying to get the hit-rate up. I found that hard to argue with.<br>
&gt; :)<br>
<br>
</span>Yeah. Having the bitcoin mempool and fee info (and heck, priority info)<br>
more readily available when seeing new transactions and choosing what to<br>
include seems like it&#39;d be helpful here. Seems relatively painful to do<br>
that outside of bitcoin though.<br>
<br>
Cheers,<br>
aj<br>
<br>
<br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Gregory Maxwell</b> <span dir="ltr">&lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;</span><br>Date: Mon, May 15, 2017 at 8:03 PM<br>To: Anthony Towns &lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;<br></font><br></span><br>I ran into someone proposing the same thing as you. Can I share this<br>
discussion with them? (with the public?)<br>
<div class="HOEnZb"></div><br>----------<br><span class="gmail_quote"><font color="#888">From: <b class="gmail_sendername">Anthony Towns</b> <span dir="ltr">&lt;<a href="mailto:aj@erisian.com.au">aj@erisian.com.au</a>&gt;</span><br>Date: Mon, May 15, 2017 at 11:00 PM<br>To: Gregory Maxwell &lt;<a href="mailto:gmaxwell@gmail.com">gmaxwell@gmail.com</a>&gt;<br></font><br></span><br><div class="HOEnZb"></div>Yes, go ahead on both counts.<br>
<span class="HOEnZb"><font color="#888888">--<br>
Sent from my phone.<br>
</font></span><br></div><br></div>
