In-Reply-To: <CAAS2fgRmvqJrtk5n7e9xc-zPpDLCKa2Te_dGCk9xb9OH_AG0nw@mail.gmail.com>
From: Olaoluwa Osuntokun <laolu32@gmail.com>
Date: Fri, 8 Jun 2018 16:35:29 -0700
Message-ID: <CAO3Pvs89_196socS-mxZpciYNO172Fiif=ncSQF0DA9n1g0+fQ@mail.gmail.com>
To: Gregory Maxwell <greg@xiph.org>
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

> That is an argument against adopting the inferior version, as that will
> contribute more momentum to doing it in a way that doesn't make sense long
> term.

That was more an attempt at disclosure than my argument. But as also noted
further up in the thread, both approaches have a trade-off: one is better
for light clients in a p2p "one honest peer" mode, while the other is more
compact but less verifiable for the light clients. They're "inferior" in
different ways.

My argument goes more like: moving to prev scripts means clients cannot
verify the filter in full unless a block message is added to include the
prev outs. This is a downgrade assuming a "one honest peer" model for the
p2p interactions. A commitment removes this drawback, but of course requires
a soft fork, and soft forks take a "long" time to deploy. So what's the cost
of using the current filter in the short term (we don't yet have a proposal
for committing the filters)? It lets the client verify the filter if they
want to, or in an attempted "bamboozlement" scenario, and it would allow us
to experiment more with the technique on mainnet before making the step up
to committing the filter. Also, depending on the way the commitment is done,
the filters themselves would need to be modified.
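
To make the verifiability point concrete, here's a rough sketch in Go
(hypothetical, minimal types of my own; not lnd's or btcd's actual API) of
why the two flavors differ for a light client: the elements of the current
outpoint-based filter can be recomputed from the block alone, while the
prev-script elements need data that lives in earlier blocks:

    package main

    import "fmt"

    type OutPoint struct {
        TxID  [32]byte
        Index uint32
    }

    // The scriptPubKey being spent is NOT part of the spending block; it
    // lives in the transaction that created PrevOutPoint.
    type TxIn struct {
        PrevOutPoint OutPoint
    }

    type TxOut struct {
        PkScript []byte
    }

    type Tx struct {
        Inputs  []TxIn
        Outputs []TxOut
    }

    type Block struct {
        Txs []Tx
    }

    // outpointFilterElements can be recomputed by anyone holding the raw
    // block, e.g. a light client auditing a filter served by a peer.
    func outpointFilterElements(b *Block) [][]byte {
        var elems [][]byte
        for _, tx := range b.Txs {
            for _, in := range tx.Inputs {
                op := in.PrevOutPoint
                // Simplified serialization of the outpoint for brevity.
                elems = append(elems, append(op.TxID[:], byte(op.Index)))
            }
            for _, out := range tx.Outputs {
                elems = append(elems, out.PkScript)
            }
        }
        return elems
    }

    // prevScriptFilterElements needs the scriptPubKeys of the spent outputs,
    // which are not in this block, so the filter can only be fully verified
    // if peers also serve those prev outs (or the filter is committed).
    func prevScriptFilterElements(b *Block, prevScripts map[OutPoint][]byte) ([][]byte, error) {
        var elems [][]byte
        for _, tx := range b.Txs {
            for _, in := range tx.Inputs {
                script, ok := prevScripts[in.PrevOutPoint]
                if !ok {
                    return nil, fmt.Errorf("missing prev script for %x:%d",
                        in.PrevOutPoint.TxID, in.PrevOutPoint.Index)
                }
                elems = append(elems, script)
            }
            for _, out := range tx.Outputs {
                elems = append(elems, out.PkScript)
            }
        }
        return elems, nil
    }

    func main() {
        spend := Tx{Inputs: []TxIn{{PrevOutPoint: OutPoint{Index: 1}}}}
        b := &Block{Txs: []Tx{spend}}

        fmt.Println("outpoint elements:", len(outpointFilterElements(b)))

        // A client holding only the raw block has no prev scripts to hand:
        if _, err := prevScriptFilterElements(b, nil); err != nil {
            fmt.Println("prev-script filter:", err)
        }
    }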

> I don't agree at all, and I can't see why you say so.

Sure, it doesn't _have_ to, but from my PoV, since "adding more commitments"
is at the top of every developer's wish list for additions to Bitcoin, it
would make sense to coordinate on an "ultimate" extensible commitment once,
rather than special-casing a bunch of distinct commitments. I can see
arguments for either, really.

> This is inherent in how e.g. the segwit commitment is encoded; the initial
> bytes are an identifying cookie. Different commitments would have
> different cookies.

Indeed, if the filter were to be committed, using an output on the coinbase
would be a likely candidate. However, I see two issues with this:

  1. The current filter format (even moving to prevouts) cannot be committed
     in this fashion as it indexes each of the coinbase output scripts. This
     creates a circular dependency: the commitment is modified by the
     filter, which is modified by the commitment (the filter atm indexes the
     commitment). So we'd need to add a special case to skip outputs with a
     particular witness magic. However, we don't know what that witness
     magic looks like (as there's no proposal). As a result, the type of
     filters that can be served over the p2p network may be distinct from
     the type of filters that are to be committed, as the commitment may
     have an impact on the filter itself.

  2. Since the coinbase transaction is the first in a block, it has the
     longest merkle proof path. As a result, the proof presented to the
     client may be several hundred bytes (and will grow with future capacity
     increases). Depending on the composition of blocks, this may outweigh
     the gains from the additional compression the prev outs allow (see the
     back-of-envelope sketch after this list).
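
For a rough sense of the sizes in the second item, here's a back-of-envelope
sketch (my own estimate, not taken from any BIP) of the merkle branch alone,
ignoring the coinbase transaction itself:

    package main

    import (
        "fmt"
        "math"
    )

    // The coinbase's merkle branch is one 32-byte hash per tree level,
    // i.e. ceil(log2(numTxs)) hashes. The coinbase transaction itself would
    // also need to be sent if the commitment lives in one of its outputs.
    func coinbaseProofBytes(numTxs int) int {
        if numTxs <= 1 {
            return 0
        }
        depth := int(math.Ceil(math.Log2(float64(numTxs))))
        return depth * 32
    }

    func main() {
        for _, n := range []int{250, 1000, 2500, 5000} {
            fmt.Printf("%5d txs -> ~%d bytes of merkle branch\n", n,
                coinbaseProofBytes(n))
        }
    }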

Regarding the second item above, what do you think of the old Tier Nolan
proposal [1] to create a "constant" sized proof for future commitments by
constraining the size of the block and placing the commitments within the
last few transactions in the block?

> but with an added advantage of permitting experimentation ahead of the
> commitment.

Indeed! To my knowledge, lnd is the only software deployed that even has
code to experiment with the filtering proposal in general. Also, as I
pointed out above, we may require an additional modification in order to be
able to commit the filter. The nature of that modification may depend on how
the filter is to be committed. As a result, why hinder experimentation today
(since it might need to be changed anyway, and as you point out the filter
being committed can even be swapped) by delaying until we know what the
commitment will look like?

> You can still scan blocks directly when peers disagree on the filter
> content, regardless of how the filter is constructed

But the difference is that one option lets you fully construct the filter
from a block, while the other requires additional data.

> but it makes the attack ineffective and using outpoints considerably
> increases bandwidth for everyone without an attack

So should we optimize for the ability to validate in a particular model
(better security), or for lower bandwidth in this case? It may also be the
case that the overhead of receiving proofs of the commitment outweighs the
savings, depending on block composition (of course, an entire block that
re-uses the same address compresses to something super small).
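
As a toy illustration of that trade-off (all numbers below are made up and
only there to show the comparison; nothing is measured from mainnet):

    package main

    import "fmt"

    // bandwidthPerBlock compares the bytes a light client downloads per
    // block under each scheme. The caller supplies all values; nothing here
    // is real data.
    //   outpointFilter   - size of the current outpoint-based filter
    //   prevScriptFilter - size of the more compact prev-script filter
    //   commitProof      - coinbase + merkle branch bytes needed to check a
    //                      committed filter against the block header
    func bandwidthPerBlock(outpointFilter, prevScriptFilter, commitProof int) (current, committed int) {
        return outpointFilter, prevScriptFilter + commitProof
    }

    func main() {
        // Illustrative values only: heavy address reuse compresses far
        // better under prev scripts than a block full of unique scripts.
        cur, com := bandwidthPerBlock(2200, 1400, 450)
        fmt.Println("reuse-heavy block:  ", cur, "vs", com) // savings win
        cur, com = bandwidthPerBlock(2200, 2100, 450)
        fmt.Println("unique-script block:", cur, "vs", com) // proof overhead wins
    }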

> It seems to me this point is being overplayed, especially considering the
> current state of non-existing validation in SPV software (if SPV software
> doesn't validate anything else they could be validating, why would they
> implement a considerable amount of logic for this?).

I don't think it's fair to compare those that wish to implement this
proposal (and actually do the validation) to the legacy SPV software that,
to my knowledge, is all but abandoned. The project I work on that seeks to
deploy this proposal (it already has, but mainnet support is behind a flag
as I anticipated further modifications) has indeed implemented the
"considerable" amount of logic to check for discrepancies and ban peers
trying to bamboozle the light clients. I'm confident that the other projects
seeking to implement this (rust-bitcoin-spv, NBitcoin, bcoin, and maybe a
few I'm missing) won't find it too difficult to implement "full" validation,
as they're bitcoin developers with quite a bit of experience.
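
For what it's worth, the shape of that discrepancy check is simple. A
stripped-down sketch (a hypothetical Peer interface of my own, not
neutrino's actual API) of the kind of logic I mean:

    package main

    import (
        "bytes"
        "errors"
        "fmt"
    )

    // Peer is a stand-in for whatever the p2p layer exposes.
    type Peer interface {
        GetCFilter(blockHash [32]byte) ([]byte, error)
        GetBlock(blockHash [32]byte) ([]byte, error)
        Ban(reason string)
    }

    // buildFilterFromBlock stands in for the BIP 158 construction; with the
    // current outpoint-based filter it needs nothing beyond the block.
    func buildFilterFromBlock(rawBlock []byte) []byte {
        // ... deserialize, collect outpoints + output scripts, GCS-encode ...
        return nil // stub
    }

    // reconcileFilter fetches the filter from every peer; on disagreement it
    // downloads the block, rebuilds the filter locally, and bans whoever
    // served a bad one.
    func reconcileFilter(blockHash [32]byte, peers []Peer) ([]byte, error) {
        if len(peers) == 0 {
            return nil, errors.New("no peers")
        }
        filters := make([][]byte, len(peers))
        disagree := false
        for i, p := range peers {
            f, err := p.GetCFilter(blockHash)
            if err != nil {
                return nil, err
            }
            filters[i] = f
            if i > 0 && !bytes.Equal(f, filters[0]) {
                disagree = true
            }
        }
        if !disagree {
            return filters[0], nil
        }

        raw, err := peers[0].GetBlock(blockHash)
        if err != nil {
            return nil, err
        }
        good := buildFilterFromBlock(raw)
        for i, p := range peers {
            if !bytes.Equal(filters[i], good) {
                p.Ban("served an invalid compact filter")
            }
        }
        return good, nil
    }

    func main() {
        fmt.Println("sketch only; wire up real peers to use reconcileFilter")
    }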

I think we've all learned from the defects of past light clients, and don't
seek to repeat history by purposefully implementing as little validation as
possible. With these new projects by new authors, I think we have an
opportunity to implement light clients "correctly" this time around.

[1]:
https://github.com/TierNolan/bips/blob/00a8d3e1ac066ce3728658c6c40240e1c2ab859e/bip-aux-header.mediawiki

-- Laolu


On Fri, Jun 8, 2018 at 9:14 AM Gregory Maxwell <greg@xiph.org> wrote:

> On Fri, Jun 8, 2018 at 5:03 AM, Olaoluwa Osuntokun via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > As someone who's written and reviewed code integrating the proposal all
> > the way up the stack (from node to wallet, to application), IMO, there's
> > no immediate cost to deferring the inclusion/creation of a filter that
> > includes prev scripts (b) instead of the outpoint as the "regular" filter
> > does now. Switching to prev script in the _short term_ would be costly
> > for the set of applications already deployed (or deployed in a minimal or
> > flag-flip-gated fashion) as the move from prev script to outpoint is a
> > cascading one that impacts wallet operation, rescans, HD seed imports,
> > etc.
>
> It seems to me that you're making the argument against your own case
> here: I'm reading this as an "it's hard to switch so it should be done
> the inferior way".  That is an argument against adopting the inferior
> version, as that will contribute more momentum to doing it in a way
> that doesn't make sense long term.
>
> > Such a proposal would need to be generalized enough to allow several
> > components to be committed,
>
> I don't agree at all, and I can't see why you say so.
>
> > likely have versioning,
>
> This is inherent in how e.g. the segwit commitment is encoded; the
> initial bytes are an identifying cookie. Different commitments would
> have different cookies.
>
> > and also provide the necessary extensibility to allow additional items
> > to be committed in the future
>
> What was previously proposed is that the commitment be required to be
> consistent if present but not be required to be present.  This would
> allow changing what's used by simply abandoning the old one.  Sparsity
> in an optional commitment can be addressed when there is less than
> 100% participation by having each block that includes a commitment
> commit to the missing filters from their immediate ancestors.
>
> Additional optionality can be provided by the other well-known
> mechanisms, e.g. have the soft fork expire at a block 5 years out
> past deployment, and continue to soft-fork it in for a longer term so
> long as it's in use (or eventually without expiration if it's clear that
> it's not going away).
>
> > wallets which wish to primarily use the filters for rescan purposes can't
> > just construct them locally for this particular use case independent of
> > what's currently deployed on the p2p network.
>
> Absolutely, but given the failure of BIP37 on the network-- and the
> apparent strong preference of end users for alternatives that don't
> scan (e.g. electrum and web wallets)-- supporting making this
> available via P2P was already only interesting to many as a nearly
> free side effect of having filters for local scanning.  If it's a
> different filter, it's no longer attractive.
>
> It seems to me that some people have forgotten that this whole idea
> was originally proposed to be committed data-- but with an added
> advantage of permitting experimentation ahead of the commitment.
>
> > Maintaining the outpoint also allows us to rely on a "single honest
> > peer" security model in the short term.
>
> You can still scan blocks directly when peers disagree on the filter
> content, regardless of how the filter is constructed-- yes, it uses
> more bandwidth if you're attacked, but it makes the attack ineffective
> and using outpoints considerably increases bandwidth for everyone
> without an attack.  These ineffective (except for increasing
> bandwidth) attacks would have to be common to offset the savings. It
> seems to me this point is being overplayed, especially considering the
> current state of non-existing validation in SPV software (if SPV
> software doesn't validate anything else they could be validating, why
> would they implement a considerable amount of logic for this?).
>
