References: <CAO3Pvs8ccTkgrecJG6KFbBW+9moHF-FTU+4qNfayeE3hM9uRrg@mail.gmail.com>
	<CALJw2w5gUgbdX7XnxPsK2FZ6PZ5cSTgmCEqiPu7-S4gwXBM-_Q@mail.gmail.com>
	<CAE0pnx+RRAP269VeWAcxKbrcS9qX4LS8_6nY_js8X5NtQ22t_A@mail.gmail.com>
	<CAE0pnxLKYnwHnktTqW949s1AA9uK=6WnVYWmRoau8B1SszzYEg@mail.gmail.com>
	<CAE0pnxJxHYQ4+2pt3tt=1WZ0-K0vDxGB4KBXY+R=WfktMmATwA@mail.gmail.com>
	<CAE0pnxK5r2XfVks=emkK=v66XRN5c-Sz-Lm_dKY+6nO=kPk6Vw@mail.gmail.com>
	<CALJw2w6Vzq8PO3x607=ERK4XKU2vrHApqKP2rWm-sw2r1ZOJMw@mail.gmail.com>
In-Reply-To: <CALJw2w6Vzq8PO3x607=ERK4XKU2vrHApqKP2rWm-sw2r1ZOJMw@mail.gmail.com>
From: Olaoluwa Osuntokun <laolu32@gmail.com>
Date: Fri, 09 Jun 2017 03:03:51 +0000
Message-ID: <CAO3Pvs-0h=E0ZQmOHcNE9Q+XgJJb7761jz9QxgginMb6+n4ogw@mail.gmail.com>
To: Karl Johan Alm <karljohan-alm@garage.co.jp>,
	Alex Akselrod <alex@akselrod.org>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for
 Light Clients


Karl wrote:

> I am also curious if you have considered digests containing multiple
> blocks. Retaining a permanent binsearchable record of the entire chain is
> obviously too space costly, but keeping the last X blocks as binsearchable
> could speed up syncing for clients tremendously, I feel.

Originally we hadn't considered such an idea. Grasping the concept a bit
better, I can see how that may result in considerable bandwidth savings
(for purely negative queries) for clients doing a historical sync, or
catching up to the chain after being inactive for months/weeks.

If we were to pursue tacking this approach onto the current BIP proposal,
we could do it in the following way:

   * The `getcfilter` message gains an additional "Level" field. Using
     this field, the range of blocks covered by the returned filter
     would be 2^Level. So a level of 0 is just a single block's filter,
     a level of 3 covers the 8 blocks past the block hash, etc.

   * Similarly, the `getcfheaders` message would gain the same field
     with identical semantics. In this case each "level" would have a
     distinct header chain for clients to verify.

> How fast are these to create? Would it make sense to provide digests on
> demand in some cases, rather than keeping them around indefinitely?

For larger blocks (like the one referenced at the end of this mail), full
construction of the regular filter takes ~10-20ms (most of that spent
extracting the data pushes). With smaller blocks, it quickly dips down into
the nanosecond-to-microsecond range.

Whether to keep _all_ the filters on disk, or to dynamically re-generate a
particular range (possibly most of the historical data), is an
implementation detail. Nodes that already do block pruning could discard
very old filters once the header chain is constructed, allowing them to
save additional space, as it's unlikely most clients will care about the
first 300k or so blocks.

> Ahh, so you actually make a separate digest chain with prev hashes and
> everything. Once/if committed digests are soft forked in, it seems a bit
> overkill but maybe it's worth it.

Yep, this is only a hold-over until when/if a commitment to the filter is
soft-forked in. In that case, there could be some extension message to
fetch the filter hash for a particular block, along with a merkle proof of
the coinbase transaction to the merkle root in the header.
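The filter header chain being discussed ("hashing together the previous
header and the generated filter") can be sketched like this. The use of
double SHA-256 and the filter-hash-first ordering are assumptions for
illustration; the BIP's normative serialization may differ:

```python
import hashlib


def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def next_filter_header(filter_bytes: bytes, prev_header: bytes) -> bytes:
    """Chain this block's filter onto the previous filter header.

    A light client that sees two peers disagree on a header can fetch
    the block, rebuild the filter, recompute this value, and ban the
    peer whose header doesn't match.
    """
    return dsha256(dsha256(filter_bytes) + prev_header)


# The chain is anchored at an all-zero previous header (an assumption).
h0 = next_filter_header(b"filter for block 0", bytes(32))
h1 = next_filter_header(b"filter for block 1", h0)
```

Because each header commits to its predecessor, a single 32-byte mismatch
pinpoints the first block at which two peers' filters diverge.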

> I created digests for all blocks up until block #469805 and actually ended
> up with 5.8 GB, which is 1.1 GB lower than what you have, but may be worse
> perf-wise on false positive rates and such.

Interesting, are you creating the equivalent of both our "regular" and
"extended" filters? Each of the filter types consumes about ~3.5GB in
isolation, with the extended filter type on average consuming more bytes
because it includes scriptSig/witness data as well.

It's worth noting that those numbers include the fixed 4-byte value for
"N" that's prepended to each filter once it's serialized (though that
doesn't add a considerable amount of overhead). Alex and I were
considering using Bitcoin's var-int encoding for that number instead.
This would result in a single byte for empty filters and most others
(fewer than 253 items), 3 bytes for filters with up to 2^16 - 1 items,
and 5 bytes for the remainder of the cases.
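For reference, Bitcoin's CompactSize var-int encoding, which is what drives
those byte counts, looks roughly like this (a minimal encode-only sketch):

```python
def compact_size(n: int) -> bytes:
    """Serialize n using Bitcoin's CompactSize ("var-int") encoding.

    Values below 0xfd fit in one byte; larger values get a one-byte
    tag followed by a 2-, 4-, or 8-byte little-endian integer.
    """
    if n < 0xfd:
        return n.to_bytes(1, "little")
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")
```

So an empty filter's "N" costs one byte instead of the current fixed four,
and only filters with 2^16 or more items would ever need five.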

> For comparison, creating the digests above (469805 of them) took
> roughly 30 mins on my end, but using the kstats format so probably
> higher on an actual node (should get around to profiling that...).

Does that include the time required to read the blocks from disk? Or just
the CPU computation of constructing the filters? I haven't yet kicked off
a full re-index of the filters, but for reference this block[1] on testnet
takes ~18ms for the _full_ indexing routine with our current code+spec.

[1]: 000000000000052184fbe86eff349e31703e4f109b52c7e6fa105cd1588ab6aa

-- Laolu


On Sun, Jun 4, 2017 at 7:18 PM Karl Johan Alm via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Jun 3, 2017 at 2:55 AM, Alex Akselrod via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Without a soft fork, this is the only way for light clients to verify
> that
> > peers aren't lying to them. Clients can request headers (just hashes of
> the
> > filters and the previous headers, creating a chain) and look for
> conflicts
> > between peers. If a conflict is found at a certain block, the client can
> > download the block, generate a filter, calculate the header by hashing
> > together the previous header and the generated filter, and banning any
> peers
> > that don't match. A full node could prune old filters if you wanted and
> > recalculate them as necessary if you just keep the filter header chain
> info
> > as really old filters are unlikely to be requested by correctly written
> > software but you can't guarantee every client will follow best practices
> > either.
>
> Ahh, so you actually make a separate digest chain with prev hashes and
> everything. Once/if committed digests are soft forked in, it seems a
> bit overkill but maybe it's worth it. (I was always assuming committed
> digests in coinbase would come after people started using this, and
> that people could just ask a couple of random peers for the digest
> hash and ensure everyone gave the same answer as the hash of the
> downloaded digest..).
>
> > The simulations are based on completely random data within given
> parameters.
>
> I noticed an increase in FP hits when using real data sampled from
> real scriptPubKeys and such. Address reuse and other weird stuff. See
> "lies.h" in github repo for experiments and chainsim.c initial part of
> main where wallets get random stuff from the chain.
>
> > I will definitely try to reproduce my experiments with Golomb-Coded
> > sets and see what I come up with. It seems like you've got a little
> > less than half the size of my digests for 1-block digests but I
> > haven't tried making digests for all blocks (and lots of early blocks
> > are empty).
> >
> >
> > Filters for empty blocks only take a few bytes and sometimes zero when
> the
> > coinbase output is a burn that doesn't push any data (example will be in
> the
> > test vectors that I'll have ready shortly).
>
> I created digests for all blocks up until block #469805 and actually
> ended up with 5.8 GB, which is 1.1 GB lower than what you have, but
> may be worse perf-wise on false positive rates and such.
>
> > How fast are these to create? Would it make sense to provide digests
> > on demand in some cases, rather than keeping them around indefinitely?
> >
> >
> > They're pretty fast and can be pruned if desired, as mentioned above, as
> > long as the header chain is kept.
>
> For comparison, creating the digests above (469805 of them) took
> roughly 30 mins on my end, but using the kstats format so probably
> higher on an actual node (should get around to profiling that...).
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
