From: Peter Tschipper <peter.tschipper@gmail.com>
To: Matt Corallo <lf-lists@mattcorallo.com>,
	Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Date: Wed, 2 Dec 2015 15:05:10 -0800
Message-ID: <565F7926.103@gmail.com>
In-Reply-To: <90EF4E6C-9A71-4A35-A938-EAFC1A24DD24@mattcorallo.com>
Subject: Re: [bitcoin-dev] [BIP Draft] Datastream compression of Blocks and
 Transactions

On 30/11/2015 9:28 PM, Matt Corallo wrote:
> I'm really not a fan of this at all. To start with, adding a compression library that is directly accessible to the network on financial software is a really, really scary idea. 
Why scary?  LZO has no currently known security issues, and compression
will be configurable by each node operator, so it can be turned off
completely if needed or desired.
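To sketch the kind of switch I mean (hypothetical Python, with zlib
standing in for LZO since it ships in the standard library; none of
these names come from the actual patch):

    import zlib

    COMPRESS_BLOCKS = True  # hypothetical per-node option, e.g. read from a config file

    def encode_block_payload(raw_block: bytes) -> bytes:
        # A one-byte flag tells the receiver whether the body is compressed.
        if COMPRESS_BLOCKS:
            return b"\x01" + zlib.compress(raw_block)
        return b"\x00" + raw_block

    def decode_block_payload(payload: bytes) -> bytes:
        flag, body = payload[0], payload[1:]
        return zlib.decompress(body) if flag == 1 else body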
> If there were a massive improvement, I'd find it acceptable, but the improvement you've shown really isn't all that much.
Why is an improvement of 15% at the low end and 27% at the high end not
good?  It sounds like a very good boost.
>  The numbers you recently posted show it improving the very beginning of IBD somewhat over high-latency connections, but if we're throughput-limited after the very beginning of IBD, we should fix that, not compress the blocks. 
I only tested compression up to block 200,000 in order to better isolate
the transmission of data from the post-processing of blocks, and to
determine whether compressing the data was adding too much to the total
transmission time.

I think it's clear from the data that as blocks and transactions
increase in size, (1) they compress better, and (2) compressing them has
a bigger, positive impact on performance.
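
You can see the size effect with a quick experiment (Python, zlib only
as a stand-in for the libraries tested; the fake transactions below are
just meant to mimic the mix of random ids and repetitive structure in
serialized block data):

    import random
    import zlib

    random.seed(1)

    def fake_tx() -> bytes:
        # 32 random bytes (a "txid") plus a repetitive, structured
        # remainder, loosely mimicking a serialized transaction.
        return bytes(random.randrange(256) for _ in range(32)) + bytes(range(64)) * 2

    for n_txs in (1, 10, 100, 1000):
        block = b"".join(fake_tx() for _ in range(n_txs))
        ratio = len(zlib.compress(block, 6)) / len(block)
        print(f"{len(block):>7} bytes -> compressed to {ratio:.0%} of original size")

The ratio keeps improving as the payload grows, which is the same trend
the real block data showed.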

> Additionally, I'd be very surprised if this had any significant effect on the speed at which new blocks traverse the network (do you have any simulations or other thoughts on this?).
From the table below, at 120,000 blocks the time to sync the chain was
roughly the same for compressed vs. uncompressed; after that point, as
block sizes start increasing, all compression libraries performed much
faster than no compression. The data from this testing clearly shows
that as block size increases, the performance improvement from
compressing data also increases.

TABLE 5:
Results shown in seconds with 60ms of induced latency
Num blks sync'd  Uncmp  Zlib-1  Zlib-6  LZO1x-1  LZO1x-999
---------------  -----  ------  ------  -------  ---------
120000           3226   3416    3397    3266     3302
130000           4010   3983    3773    3625     3703
140000           4914   4503    4292    4127     4287
150000           5806   4928    4719    4529     4821
160000           6674   5249    5164    4840     5314
170000           7563   5603    5669    5289     6002
180000           8477   6054    6268    5858     6638
190000           9843   7085    7278    6868     7679
200000           11338  8215    8433    8044     8795
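
Working the improvement out from a few rows of the table (Python,
numbers copied from the uncompressed and LZO1x-1 columns above):

    uncompressed = {120000: 3226, 160000: 6674, 200000: 11338}
    lzo1x_1 = {120000: 3266, 160000: 4840, 200000: 8044}

    for blocks in (120000, 160000, 200000):
        saving = 1 - lzo1x_1[blocks] / uncompressed[blocks]
        print(f"{blocks} blocks: {saving:+.1%} vs. uncompressed")

That is roughly break-even at 120,000 blocks, growing to a 27-29%
saving by 160,000-200,000 blocks.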


As for what happens after the block is received: obviously compression
isn't going to help with post-processing and validating the block, but
in the pure transmission of the object it certainly and logically does,
and in fairly direct proportion to the file size (a file that is 20%
smaller will be transmitted "at least" 20% faster; you can use any data
transfer time calculator
<http://www.calctool.org/CALC/prof/computing/transfer_time> for that).
The only things I could see that required testing were how much
compression there would be, and how much time compressing the data would
add to sending it.
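
The back-of-the-envelope model behind those calculators is just size
divided by bandwidth, plus latency (a quick Python sketch with made-up
link parameters; real TCP behaviour on a high-latency link only adds to
the advantage of sending fewer bytes):

    def transfer_time(size_bytes, bandwidth_bytes_per_s, rtt_s=0.06):
        # Idealized single-stream transfer: serialization delay plus one
        # round trip; ignores TCP slow start and retransmissions.
        return size_bytes / bandwidth_bytes_per_s + rtt_s

    full = transfer_time(1_000_000, 125_000)   # 1 MB block over ~1 Mbit/s
    smaller = transfer_time(800_000, 125_000)  # same block, 20% smaller
    print(f"{full:.2f}s vs {smaller:.2f}s -> {1 - smaller / full:.1%} faster")

With those numbers the smaller file arrives about 20% sooner, as
expected.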

 
