From: Olaoluwa Osuntokun <laolu32@gmail.com>
Date: Tue, 12 Jun 2018 16:51:29 -0700
Message-ID: <CAO3Pvs_GYnFAS-pM=+OYCbJaEw8TOo-opnv5GVCBiDEurLvjYg@mail.gmail.com>
To: "David A. Harding" <dave@dtrt.org>
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

> Doesn't the current BIP157 protocol have each filter commit to the filter
> for the previous block?

Yep!

> If that's the case, shouldn't validating the commitment at the tip of the
> chain (or buried back whatever number of blocks that the SPV client trusts)
> obviate the need to validate the commitments for any preceding blocks in
> the SPV trust model?

Yeah, just that there'll be a gap between the p2p version, and when it's
ultimately committed.
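
For concreteness, the chaining being referred to is the BIP 157 filter
header construction, which looks roughly like the following Python sketch
(dsha256 is ordinary Bitcoin double-SHA256; names are illustrative):

    import hashlib

    def dsha256(data: bytes) -> bytes:
        # Bitcoin-style double SHA-256.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def filter_header(serialized_filter: bytes, prev_filter_header: bytes) -> bytes:
        # BIP 157: the filter header commits to this block's filter *and* to
        # the previous filter header, so checking the header at (or near) the
        # tip transitively commits to every earlier filter in the chain.
        filter_hash = dsha256(serialized_filter)
        return dsha256(filter_hash + prev_filter_header)

    # The chain is bootstrapped with 32 zero bytes as the "previous" header
    # for the first block's filter.
    GENESIS_PREV_FILTER_HEADER = bytes(32)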

> It seems like you're claiming better security here without providing any
> evidence for it.

What I mean is that one allows you to fully verify the filter, while the
other allows you to only validate a portion of the filter and requires
additional heuristics.
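
To make the distinction concrete, here's a rough sketch of the full check
that a filter built only from in-block data permits (the block/tx accessors
and build_gcs_filter are hypothetical placeholders, not a real API; only the
output-script elements are shown, and BIP 158 keys the filter with the first
16 bytes of the block hash):

    def fully_verify_output_filter(block, advertised_filter: bytes) -> bool:
        # Every element of this filter type is derivable from the block
        # itself, so the client can rebuild the filter and compare exactly.
        elements = set()
        for tx in block.transactions:
            for out in tx.outputs:
                # Skip empty scripts and OP_RETURN (0x6a) outputs.
                if out.script_pubkey and out.script_pubkey[:1] != b'\x6a':
                    elements.add(out.script_pubkey)
        rebuilt = build_gcs_filter(elements, key=block.hash[:16])
        return rebuilt == advertised_filter

With prevout-based filters, the prev output scripts live in other blocks, so
absent extra data the client can only run the weaker, partial check below.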

> In the case of prevout+output filters, when a client receives
> advertisements for different filters from different peers, it:

Alternatively, they can decompress the filter and at least verify that
proper _output scripts_ have been included. Maybe this is "good enough"
until it's committed. If a command is added to fetch all the prev outs along
w/ a block (which would let you do other things like verify fees), then
they'd be able to fully validate the filter as well.
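
For reference, with the prev outs in hand the fee check is just input value
minus output value per transaction; a minimal sketch with made-up field
names:

    def block_fees(block, prevouts) -> int:
        # prevouts: hypothetical map of (txid, output_index) -> value in
        # satoshis, fetched alongside the block. A transaction's fee is the
        # sum of its input values minus the sum of its output values.
        total = 0
        for tx in block.transactions[1:]:  # skip the coinbase
            in_value = sum(prevouts[(i.prev_txid, i.prev_index)] for i in tx.inputs)
            out_value = sum(o.value for o in tx.outputs)
            total += in_value - out_value
        return total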

-- Laolu


On Sat, Jun 9, 2018 at 3:35 AM David A. Harding <dave@dtrt.org> wrote:

> On Fri, Jun 08, 2018 at 04:35:29PM -0700, Olaoluwa Osuntokun via
> bitcoin-dev wrote:
> >   2. Since the coinbase transaction is the first in a block, it has the
> >      longest merkle proof path. As a result, it may be several hundred
> >      bytes (and grows with future capacity increases) to present a proof
> >      to the client.
>
> I'm not sure why commitment proof size is a significant issue.  Doesn't
> the current BIP157 protocol have each filter commit to the filter for
> the previous block?  If that's the case, shouldn't validating the
> commitment at the tip of the chain (or buried back whatever number of
> blocks that the SPV client trusts) obviate the need to validate the
> commitments for any preceding blocks in the SPV trust model?
>
> > Depending on the composition of blocks, this may outweigh the gains
> > had from taking advantage of the additional compression the prev outs
> > allow.
>
> I think those are unrelated points.  The gain from using a more
> efficient filter is saved bytes.  The gain from using block commitments
> is SPV-level security---that attacks have a definite cost in terms of
> generating proof of work instead of the variable cost of network
> compromise (which is effectively free in many situations).
>
> Comparing the extra bytes used by block commitments to the reduced bytes
> saved by prevout+output filters is like comparing the extra bytes used
> to download all blocks for full validation to the reduced bytes saved by
> only checking headers and merkle inclusion proofs in simplified
> validation.  Yes, one uses more bytes than the other, but they're
> completely different security models and so there's no normative way for
> one to "outweigh the gains" from the other.
>
> > So should we optimize for the ability to validate in a particular
> > model (better security), or lower bandwidth in this case?
>
> It seems like you're claiming better security here without providing any
> evidence for it.  The security model is "at least one of my peers is
> honest."  In the case of outpoint+output filters, when a client receives
> advertisements for different filters from different peers, it:
>
>     1. Downloads the corresponding block
>     2. Locally generates the filter for that block
>     3. Kicks any peers that advertised a different filter than what it
>        generated locally
>
> This ensures that as long as the client has at least one honest peer, it
> will see every transaction affecting its wallet.  In the case of
> prevout+output filters, when a client receives advertisements for
> different filters from different peers, it:
>
>     1. Downloads the corresponding block and checks it for wallet
>        transactions as if there had been a filter match
>
> This also ensures that as long as the client has at least one honest
> peer, it will see every transaction affecting its wallet.  This is
> equivalent security.
>
> In the second case, it's possible for the client to eventually
> probabilistically determine which peer(s) are dishonest and kick them.
> The most space efficient of these protocols may disclose some bits of
> evidence for what output scripts the client is looking for, but a
> slightly less space-efficient protocol simply uses randomly-selected
> outputs saved from previous blocks to make the probabilistic
> determination (rather than the client's own outputs) and so I think
> should be quite private.  Neither protocol seems significantly more
> complicated than keeping an associative array recording the number of
> false positive matches for each peer's filters.
>
> -Dave
>
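
(For completeness, the per-peer false-positive bookkeeping Dave describes
might look roughly like this; purely an illustrative sketch, all names made
up:)

    from collections import defaultdict

    class PeerFilterStats:
        # Counts, per peer, how often that peer's filter matched our scripts
        # but the downloaded block turned out to contain nothing of interest.
        def __init__(self):
            self.false_positives = defaultdict(int)
            self.filters_served = defaultdict(int)

        def record(self, peer_id, matched: bool, block_was_relevant: bool) -> None:
            self.filters_served[peer_id] += 1
            if matched and not block_was_relevant:
                self.false_positives[peer_id] += 1

        def suspicious_peers(self, max_fp_rate: float, min_samples: int = 100):
            # Flag peers whose observed false-positive rate is well above the
            # rate expected from the agreed filter parameters.
            return [p for p, fp in self.false_positives.items()
                    if self.filters_served[p] >= min_samples
                    and fp / self.filters_served[p] > max_fp_rate]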
