From: Thomas Guyot-Sionnest <dermoth@aei.ca>
To: Adam Tamir Shem-Tov <tshachaf@gmail.com>,
	Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Date: Sat, 26 Aug 2017 17:31:11 -0400
Subject: Re: [bitcoin-dev] Solving the Scalability Problem on Bitcoin
Message-ID: <a9015271-7f61-7bba-c550-afdd76319b21@aei.ca>
In-Reply-To: <CACQPdjphPmSC7bmicXGytuD3YAXYmsEGOECTTTuLfB_5iqDQGw@mail.gmail.com>

Pruning is already implemented in the nodes... Once enabled, only unspent
outputs and the most recent blocks are kept. IIRC there was also a
proposal to include the UTXO set in some blocks for SPV clients to use,
but that would be in addition to the blockchain data.
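For context, here is what the existing pruning looks like from an
operator's point of view — a minimal bitcoin.conf fragment (550 MiB is
Bitcoin Core's documented minimum prune target):

```ini
# bitcoin.conf — keep the full UTXO set but discard old raw blocks,
# retaining only roughly the most recent 550 MiB of block files
prune=550
```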

Implementing your solution is impossible because there is no way to
determine the authenticity of the blockchain midway. The proof that a
block hash leads back to the genesis block is also a proof of all the
work that has been spent on it (the years of hashing). At the very least
we'd have to keep all blocks up to a hard-coded checkpoint in the code,
which also means that as nodes upgrade and prune more blocks, older
nodes will have difficulty syncing the blockchain.
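To make the point concrete, here is a sketch of why the header chain
itself is the proof of work: verifying the chain means walking every
header from genesis to tip, checking each hash link and summing the
expected hashes each header represents. (Illustrative code, not Bitcoin
Core's implementation; it assumes the real 80-byte header layout, where
the previous-block hash sits at byte offset 4.)

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double-SHA256 block hash."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def chain_work(headers):
    """Sum the work proven by a header chain, genesis first, tip last.

    Each element is a (raw_header, target) pair. Every header must link
    to its predecessor's hash and hash below its target; drop any prefix
    of the list and the first link check has nothing to anchor to, which
    is exactly why you cannot authenticate a chain "midway".
    """
    total = 0
    prev_hash = None
    for raw, target in headers:
        if prev_hash is not None:
            # bytes 4..36 of a real header hold the previous block's hash
            assert raw[4:36] == prev_hash, "broken link: cannot reach genesis"
        h = sha256d(raw)
        assert int.from_bytes(h, "little") <= target, "insufficient PoW"
        # expected number of hashes needed to meet this target
        total += (1 << 256) // (target + 1)
        prev_hash = h
    return total
```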

Finally, it's not just the addresses and balances you need to save: each
unspent output's block number, transaction position, and script are also
required to validate the input that eventually spends it. That's a lot
of data you're suggesting to save every 1000 blocks (and why 1000?), and
as said earlier it doesn't even guarantee you can drop older blocks. I'm
not even going into the details of making it work (hard fork, large
block sync/verification issues, possible attack vectors opened by
this...).
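A minimal sketch of the per-output record implied above (the field set
is illustrative — these names and this layout are ours, not Bitcoin
Core's actual coins-database format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UtxoEntry:
    """Data a node must retain per unspent output in order to
    validate the input that eventually spends it."""
    txid: bytes           # 32-byte id of the creating transaction
    vout: int             # output index within that transaction
    height: int           # block number the output was created in
    script_pubkey: bytes  # locking script the spender must satisfy
    amount_sats: int      # value in satoshis

    def approx_size(self) -> int:
        # rough serialized footprint: 32-byte txid + 4-byte vout
        # + 4-byte height + 8-byte amount + the script itself
        return 32 + 4 + 4 + 8 + len(self.script_pubkey)
```

Even at a few dozen bytes per entry, tens of millions of unspent outputs
add up to gigabytes — that is the snapshot the proposal would have to
commit every 1000 blocks.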

What is wrong with the current implementation of node pruning that you
are trying to solve?

--
Thomas

On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote:
>
> Solving the Scalability issue for bitcoin
>
> I have this idea to solve the scalability problem I wish to make public.
>
> If I am wrong I hope to be corrected, and if I am right we will all
> gain by it.
>
> Currently each block is being hashed, and in its contents is the hash
> of the block preceding it; this goes back to the genesis block.
>
> What if we decide, for example, to combine and prune the blockchain in
> its entirety every 999 blocks into one block (genesis block not
> included in the count)?
>
> How would this work? Once block 1000 has been created, the network
> would be waiting for a special "pruned block", and until this block
> was created and verified, block 1001 would not be accepted by any nodes.
>
> This pruned block would prune everything from block 2 to block 1000,
> leaving only the genesis block. Blocks 2 through 1000 would be
> condensed into a summed-up transaction of all transactions that
> occurred in those 999 blocks.
>
> Its hash pointer would be the genesis block.
>
> This block would then be verified by the full nodes, which, if they
> accepted it, would then be willing to accept a new block (block 1001,
> not including the pruned block in the count).
>
> The new block 1001 would use the pruned block as its hash pointer
> reference, and the count would begin again toward the next 1000. The
> next pruned block would be created, its hash pointer referencing the
> genesis block, and so on.
>
> In this way the ledger will always be a maximum of 1000 blocks.
>

