From: Mark Friedenbach <mark@friedenbach.org>
Date: Wed, 4 Nov 2015 14:47:35 -0800
Message-ID: <CAOG=w-tR89zm_VDCR-MR_+F9bRNvm4TCZSQcTmYGKRW1JQhkUg@mail.gmail.com>
To: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: [bitcoin-dev] A validation-cost metric for aggregate limits and fee determination

At the first Scaling Bitcoin workshop in Montreal I presented on the topic
of "bad blocks" that take an excessive amount of time to validate. You can
read a transcript of this talk here:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/alternatives-to-block-size-as-aggregate-resource-limits/

The core message was that the assumption underlying the system's design
parameters, namely that validation costs scale linearly with transaction or
block size, is wrong. In particular, certain kinds of transactions have
validation costs which scale quadratically with size. For example, the
construction of SIGHASH_ALL results in each input signing a different
message digest, meaning that the entire transaction (minus the scriptSigs)
is rehashed for each input. As another example, the number of signature
operations performed during block validation is effectively unlimited if
the signature checks are contained within the scriptPubKey (this scales
linearly, but with a very large constant factor). The severity of these
issues increases as the aggregate limits on maximum transaction and block
size are raised.
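
To make the quadratic growth concrete, here is a back-of-the-envelope
model (the per-input and per-output sizes are rough P2PKH figures I am
using for illustration, not consensus values):

    # Rough model of total bytes hashed by SIGHASH_ALL signing.
    # 180 bytes/input and 34 bytes/output approximate a P2PKH tx.
    def bytes_hashed(n_inputs, n_outputs=2, overhead=10):
        tx_size = overhead + 180 * n_inputs + 34 * n_outputs
        # Each input signs a digest of (essentially) the whole
        # transaction, so the full tx is rehashed once per input.
        return n_inputs * tx_size

    for n in (10, 100, 1000):
        print(n, bytes_hashed(n))
    # 10x the inputs means roughly 100x the bytes hashed.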

There have been various solutions suggested, and I would like to start a
public discussion to see whether consensus can be reached on a viable
approach.

Gavin, for example, has written code that tracks the number of bytes hashed
and enforces a separate per-block limit on that aggregate value. Other
costs could be constrained in the same whack-a-mole fashion (see the sketch
after the list below). I have two concerns with this approach:

1. There would still exist a gap between the average-case validation cost
of a full block and the worst-case validation cost of a block specifically
constructed to hit every limit.

2. Transaction selection, and by extension fee determination, would become
a much more complicated multi-dimensional optimization problem. Since fee
management in particular is logic replicated across a lot of
infrastructure, I would be very concerned about making optimal behavior
significantly harder to achieve.
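
In outline, the independent-limits approach looks something like the
following. This is an illustrative sketch, not Gavin's actual code, and
the cap values are placeholders:

    # Independent per-resource caps, each checked separately.
    # All values here are illustrative placeholders.
    MAX_BLOCK_BYTES = 1000000
    MAX_BLOCK_SIGOPS = 20000
    MAX_BLOCK_BYTES_HASHED = 1300000000

    def block_within_limits(block):
        return (block.size <= MAX_BLOCK_BYTES and
                block.sigops <= MAX_BLOCK_SIGOPS and
                block.bytes_hashed <= MAX_BLOCK_BYTES_HASHED)

A block must satisfy every cap at once, which is what makes transaction
selection a multi-dimensional packing problem.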

My own suggestion, which I submit for consideration, is to use a linear
function of the various costs involved (signatures verified, bytes hashed,
inputs consumed, script opcodes executed, etc.). The various algorithms
used for transaction selection and fee determination can then be reused,
using the output of this new linear function as the "size" of the
transaction.
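
As a concrete sketch, something like the following, where the weights are
placeholders I have invented purely for illustration (calibrating them
against real validation costs is the actual design problem):

    # Linear validation-cost metric: a weighted sum of the resources
    # a transaction consumes. Weights are illustrative only.
    WEIGHTS = {
        "bytes":        1,   # serialized size in bytes
        "sigops":       50,  # signatures verified
        "bytes_hashed": 1,   # bytes hashed computing sighashes
        "inputs":       20,  # inputs consumed
        "opcodes":      1,   # script opcodes executed
    }

    def validation_cost(resources):
        # resources: dict mapping resource name to measured count
        return sum(WEIGHTS[k] * resources.get(k, 0) for k in WEIGHTS)

Fee logic that currently divides fee by size would instead divide by
validation_cost, leaving the rest of the selection algorithm unchanged.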

Separately, many others including Greg Maxwell have advocated for a
"net-UTXO" metric instead of, or in combination with, a validation-cost
metric. In its pure form the block size limit would be replaced with a
maximum allowed increase in the size of the UTXO set, thereby imposing an
extra fee cost on creating unspent outputs. This has the distinct advantage
of making dust outputs considerably more expensive than regular spends.
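
In rough pseudocode the pure form is simply this (field names follow the
usual transaction structure; this is illustrative, not Bitcoin Core code):

    # Pure net-UTXO metric: limit blocks by how much they grow the
    # UTXO set, rather than by their byte size.
    def net_utxo_delta(txs):
        created = sum(len(tx.vout) for tx in txs)
        spent = sum(len(tx.vin) for tx in txs if not tx.is_coinbase)
        return created - spent

A transaction that consolidates many small outputs has a negative delta,
effectively paying for room that other transactions can use.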

For myself, I remain open to the possibility of adding a UTXO set size
corrective factor to a chiefly validation-cost metric. It would be nice to
reward users for cleaning up scattered small outputs, reward miners for
including dust-be-gone transactions, and make spam attacks more costly. But
doing so requires setting aside some unused validation resources to reward
the miners who clean up the UTXO set, which widens the gap between average
and worst-case block validation times. Also, worry over the size of the
UTXO database is only a concern for how Bitcoin Core is currently
structured -- with e.g. UTXO or STXO commitments it could be that future
full nodes do not store the UTXO set at all, and instead transactions carry
proofs of their inputs as prunable witness data. If we choose a net-UTXO
metric, however, we will be stuck with it for some time.

I will be submitting a talk proposal for Scaling Bitcoin on this topic, but
I would like to get some feedback from the developer community first.
Anyone have any thoughts to add?
