In-Reply-To: <881def14-696c-3207-cf6c-49f337ccf0d1@satoshilabs.com>
From: Pieter Wuille <pieter.wuille@gmail.com>
Date: Wed, 27 Jun 2018 08:06:39 -0700
Message-ID: <CAPg+sBg4MCOoMDBVQ2eZ=p3iS3dq506Jh4vUNBmmM20a6uCwYw@mail.gmail.com>
To: matejcik <jan.matejek@satoshilabs.com>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP 174 thoughts

On Wed, Jun 27, 2018, 07:04 matejcik <jan.matejek@satoshilabs.com> wrote:

> hello,
>
> On 26.6.2018 22:30, Pieter Wuille wrote:
> >> (Moreover, as I wrote previously, the Combiner seems like a weirdly
> >> placed role. I still don't see its significance and why is it important
> >> to correctly combine PSBTs by agents that don't understand them. If you
> >> have a usecase in mind, please explain.
> >
> > Forward compatibility with new script types. A transaction may spend
> > inputs from different outputs, with different script types. Perhaps
> > some of these are highly specialized things only implemented by some
> > software (say HTLCs of a particular structure), in non-overlapping
> > ways where no piece of software can handle all scripts involved in a
> > single transaction. If Combiners cannot deal with unknown fields, they
> > won't be able to deal with unknown scripts.
>
> Record-based Combiners *can* deal with unknown fields. Either by
> including both versions, or by including one selected at random. This is
> the same in k-v model.
>

Yes, I wasn't claiming otherwise. This was just a response to your question
why it is important that Combiners can process unknown fields. It is not an
argument in favor of one model or the other.

> > combining must be done independently by Combiner implementations for
> > each script type involved. As this is easily avoided by adding a
> > slight bit of structure (parts of the fields that need to be unique -
> > "keys"), this seems the preferable option.
>
> IIUC, you're proposing a "semi-smart Combiner" that understands and
> processes some fields but not others? That doesn't seem to change
> things. Either the "dumb" combiner throws data away before the "smart"
> one sees it, or it needs to include all of it anyway.
>

No, I'm exactly arguing against smartness in the Combiner. It should always
be possible to implement a Combiner without any script specific logic.

> > No, a Combiner can pick any of the values in case different PSBTs have
> > different values for the same key. That's the point: by having a
> > key-value structure the choice of fields can be made such that
> > Combiners don't need to care about the contents. Finalizers do need to
> > understand the contents, but they only operate once at the end.
> > Combiners may be involved in any PSBT passing from one entity to
> > another.
>
> Yes. Combiners don't need to care about the contents.
> So why is it important that a Combiner properly de-duplicates the case
> where keys are the same but values are different? This is a job that,
> AFAICT so far, can be safely left to someone along the chain who
> understands that particular record.
>

That's because PSBTs can be copied, signed, and combined back together. A
Combiner which does not deduplicate (at all) would end up having every
original record present N times, one for each copy, a possibly large blowup.

For all fields I can think of right now, that type of deduplication can be
done through whole-record uniqueness.

The question of whether you need whole-record uniqueness or specified-length
uniqueness (which is what a key-value model offers) is a philosophical one
(as I mentioned before). I have a preference for stronger invariants on the
file format, so that it becomes illegal for a PSBT to contain, for example,
multiple signatures for the same key, and implementations do not need to
deal with the case where multiple are present.

> It seems that you consider the latter PSBT "invalid". But it is well
> formed and doesn't contain duplicate records. A Finalizer, or a
> different Combiner that understands field F, can as well have the rule
> "throw away all but one" for this case.
>

It's not about considering. We're writing a specification. Either it is
made invalid, or not.

In a key-value model you can have dumb combiners that must pick one of the
keys in case of duplication, and remove the necessity of dealing with
duplication from all other implementations (which I consider to be a good
thing). In a record-based model you cannot guarantee deduplication of
records that permit repetition per type, because a dumb combiner cannot
understand what part is supposed to be unique. As a result, a record-based
model forces you to let all implementations deal with e.g. multiple partial
signatures for a single key. This is a minor issue, but in my view shows
how records are a less than perfect match for the problem at hand.
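
To make that concrete, here is a rough Python sketch of a dumb Combiner
under each model (illustrative only: the dicts and lists of bytes below are
stand-ins, not BIP 174's actual serialization):

# Key-value model: uniqueness is defined by the key alone, so no
# script-specific logic is needed to deduplicate.
def combine_kv(psbts):
    # each psbt here is a dict mapping key bytes to value bytes
    result = {}
    for psbt in psbts:
        for key, value in psbt.items():
            result.setdefault(key, value)  # on a duplicate key, keep one
    return result

# Record model without key prefixes: the same dumb code can only drop
# byte-identical records; two partial signatures for the same pubkey
# that differ anywhere else both survive, and every later agent must
# then cope with the duplication.
def combine_records(psbts):
    # each psbt here is a list of opaque serialized records
    seen, result = set(), []
    for psbt in psbts:
        for record in psbt:
            if record not in seen:
                seen.add(record)
                result.append(record)
    return result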

> To repeat and restate my central question:
> Why is it important, that an agent which doesn't understand a particular
> field structure, can nevertheless make decisions about its inclusion or
> omission from the result (based on a repeated prefix)?
>

Again, because otherwise you may need a separate Combiner for each type of
script involved. That would be unfortunate, and is very easily avoided.

> Actually, I can imagine the opposite: having fields with same "key"
> (identifying data), and wanting to combine their "values" intelligently
> without losing any of the data. Say, two Signers producing separate
> parts of a combined-signature under the same common public key?
>

That can always be avoided by using different identifying information as
key for these fields. In your example, assuming you're talking about some
form of threshold signature scheme, every party has their own "shard" of
the key, which still uniquely identifies the participant. If they have no
data that is unique to the participant, they are clones, and don't need to
interact regardless.

> > In case of BIP32 derivation, computing the pubkeys is possibly
> > expensive. A simple signer can choose to just sign with whatever keys
> > are present, but they're not the only way to implement a signer, and
> > even less the only software interacting with this format. Others may
> > want to use a matching approach to find keys that are relevant;
> > without pubkeys in the format, they're forced to perform derivations
> > for all keys present.
>
> I'm going to search for relevant keys by comparing master fingerprint; I
> would expect HWWs generally don't have index based on leaf pubkeys.
> OTOH, Signers with lots of keys probably aren't resource-constrained and
> can do the derivations in case of collisions.
>

Perhaps you want to avoid signing with keys that are already signed with?
If you need to derive all the keys before even knowing what was already
signed with, you've already performed 80% of the work.
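
To sketch what I mean (hypothetical field layout and helper names, not the
exact BIP 174 encoding): with the pubkey present as the key of the
derivation field, deciding what is still left to sign is a lookup, not a
derivation per path:

def keys_still_to_sign(our_fingerprint, derivation_fields, partial_sigs):
    # derivation_fields: {pubkey: (master_fingerprint, path)}
    # partial_sigs:      {pubkey: signature}
    todo = []
    for pubkey, (fingerprint, path) in derivation_fields.items():
        if fingerprint != our_fingerprint:
            continue  # cheap filter on the master fingerprint
        if pubkey in partial_sigs:
            continue  # already signed with this key; no derivation needed
        todo.append((pubkey, path))  # only these paths need deriving
    return todo

# Without pubkeys in the format, the "already signed with?" check would
# first require deriving every listed path (derive_pubkey is hypothetical):
#   {derive_pubkey(master, path)
#    for (fp, path) in fields if fp == our_fingerprint}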

> > If you take the records model, and then additionally drop the
> > whole-record uniqueness constraint, yes, though that seems pushing it
> > a bit by moving even more guarantees from the file format to
> > application level code.
>
> The "file format" makes no guarantees, because the parsing code and
> application code is the same anyway. You could say I'm proposing to
> separate these concerns ;)
>

Of course a file format can make guarantees. If certain combinations of
data in it do not satisfy the specification, the file is illegal, and
implementations do not need to deal with it. Stricter file formats are
easier to deal with, because there are fewer edge cases to consider.

To your point: proto v2 afaik has no way to declare "whole record
uniqueness", so either you drop that (which I think is unacceptable - see
the copy/sign/combine argument above), or you deal with it in your
application code.
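
Concretely, every consumer then ends up carrying a check like this itself
(a minimal sketch, assuming records arrive as opaque serialized bytes in a
repeated field):

def require_unique_records(records):
    seen = set()
    for record in records:
        if record in seen:
            raise ValueError("duplicate record violates the intended invariant")
        seen.add(record)
    return records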

Cheers,

-- 
Pieter
