From: Leandro Coutinho
Date: Sun, 27 Aug 2017 09:10:19 -0300
To: Adam Tamir Shem-Tov, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Solving the Scalability Problem on Bitcoin

>>> 5) The problem with node pruning is that it is not standardized, and for a new node to enter the network and verify the data, it needs to download all the data and prune it by itself. This will drastically lower the information needed by the full nodes by getting rid of the junk. Currently we are around 140 GB, and that number is growing exponentially with the number of users and transactions. It could reach a terabyte sooner than expected; we need to act now.

Having to download the entire blockchain only to prune it afterwards is a big drawback. So I thought about the concept of "trusted" nodes, where you could choose some nodes to connect to and specify from which block you want to download.
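One way to reduce the risk of trusting such nodes is to cross-check the chain tip reported by several independent sources (other peers, block explorers). A minimal sketch of that idea, with a hypothetical helper name and toy data rather than real Bitcoin RPC calls:

```python
from collections import Counter

def cross_check_tip(local_tip_hash, reported_tips, quorum=0.75):
    """Accept our chain tip only if a quorum of independent sources agree.

    `reported_tips` would be gathered from block explorers and from the
    other nodes we are connected to; here they are plain strings.
    """
    if not reported_tips:
        return False
    majority_hash, votes = Counter(reported_tips).most_common(1)[0]
    return majority_hash == local_tip_hash and votes / len(reported_tips) >= quorum

# Three of four sources agree with our tip -> accept.
ok = cross_check_tip("00ab...", ["00ab...", "00ab...", "00ab...", "ffff..."])
# The majority disagrees with our tip -> reject.
bad = cross_check_tip("00ab...", ["ffff...", "ffff...", "00ab..."])
```

The same pattern could be applied to a digest of each peer's UTXO snapshot, which is the second check suggested below.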
Of course they would do this at their own risk, but there are ways to minimize that risk, like:

  - checking whether the latest block hashes match what you find on some sites, like blockchain.info
  - downloading and comparing the UTXO set from all (or some of) the nodes you are connected to

Currently the UTXO set is around 2 GB, and we can't know how fast it will grow.

On 26/08/2017 19:39, "Adam Tamir Shem-Tov via bitcoin-dev" <bitcoin-dev@lists.linuxfoundation.org> wrote:

Thank you, Thomas, for your response.

1) "Implementing your solution is impossible..." -- I have given a solution in Part II: adding a Genesis Account, which will be the new sender.

2) Keeping older blocks: Yes, as I said, keeping the 10 most recent blocks should suffice. I am not locked on that number; if you think there is a reason to keep more than that, it is open to debate.

3) Why 1000? To be honest, that number came off the top of my head. These are minor details; the concept must first be accepted, then we can work on the minor details.

4) "Finally it's not just the addresses and balance you need to save..." -- I think the idea of the Genesis Account solves this issue.

5) The problem with node pruning is that it is not standardized, and for a new node to enter the network and verify the data, it needs to download all the data and prune it by itself. This will drastically lower the information needed by the full nodes by getting rid of the junk. Currently we are around 140 GB, and that number is growing exponentially with the number of users and transactions. It could reach a terabyte sooner than expected; we need to act now.

On your second email:

When I say "account", I mean a private/public key pair.

The way Bitcoin works, as I understand it, is that funds are verified by showing that they have an origin; this "origin" needs to provide a signature, otherwise the transaction won't be accepted. If I am proposing to remove all intermediate origins, then the funds become untraceable and hence unverifiable.
To fix that, a new transaction needs to replace the old ones. A simplified version: if there was a transaction chain A->B->C->D, and I wish to show only A->D, a transaction like that never actually occurred, so it would be impossible to claim that it did without A's private key to sign it. In order to create this transaction, I need A's private key. And if I wish this to be publicly implemented, I need this key to be public, so that any node creating this Exodus Block can sign with it. Hence the Genesis Account. And yes, it is not really an account.

On 27 August 2017 at 00:31, Thomas Guyot-Sionnest wrote:

> Pruning is already implemented in the nodes... Once enabled, only unspent
> outputs and the most recent blocks are kept. IIRC there was also a proposal to
> include the UTXO set in some blocks for SPV clients to use, but that would be
> additional to the blockchain data.
>
> Implementing your solution is impossible, because there is no way to
> determine the authenticity of the blockchain midway. The proof that a block
> hash leads to the genesis block is also a proof of all the work that has been
> spent on it (the years of hashing). At the very least we'd have to keep all
> blocks up to a hard-coded checkpoint in the code, which also means that as
> nodes upgrade and prune more blocks, older nodes will have difficulty
> syncing the blockchain.
>
> Finally, it's not just the addresses and balances you need to save, but also
> each unspent output's block number, tx position, and script, which are required
> for validation on input. That's a lot of data that you're suggesting to
> save every 1000 blocks (and why 1000?), and as said earlier, it doesn't even
> guarantee you can drop older blocks. I'm not even going into the details of
> making it work (hard fork, large block sync/verification issues, possible
> attack vectors opened by this...).
>
> What is wrong with the current implementation of node pruning that you are
> trying to solve?
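Adam's earlier point — that a synthetic A->D summary transaction cannot exist without A's signature — can be illustrated with a deliberately simplified toy scheme. Real Bitcoin uses ECDSA, where verification needs only the public key; this hash-based stand-in only shows that the key holder alone can produce a valid signature:

```python
import hashlib

def toy_sign(priv_key: bytes, message: bytes) -> bytes:
    # Toy stand-in for a real signature scheme: only whoever knows
    # priv_key can produce this digest for a given message.
    return hashlib.sha256(priv_key + message).digest()

# A's funds moved A->B->C->D; a pruned chain wants one summary tx A->D.
a_key = b"A-private-key"
summary_tx = b"A pays D 1.0 BTC (net effect of A->B->C->D)"

sig = toy_sign(a_key, summary_tx)

# With A's key the signature checks out; without it, it cannot be forged.
valid = sig == toy_sign(a_key, summary_tx)
forged = sig == toy_sign(b"attacker-guess", summary_tx)
```

This is why the proposal ends up requiring a public "Genesis Account" key: any node building the summary block must be able to sign for the collapsed origins.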
>
> --
> Thomas
>
> On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote:
>
> > Solving the Scalability issue for bitcoin
> > I have an idea to solve the scalability problem that I wish to make public.
> >
> > If I am wrong, I hope to be corrected, and if I am right, we will all gain by it.
> > Currently each block is hashed, and its contents include the hash of the block preceding it; this goes back to the genesis block.
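The hash-pointer chain described here can be sketched in a few lines — a toy model, not the real Bitcoin block-header format:

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    # Each block commits to its predecessor's hash, all the way to genesis.
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

genesis = block_hash("", "genesis")
chain = [genesis]
for height in range(1, 4):
    chain.append(block_hash(chain[-1], f"block {height} txs"))

# Changing any earlier block changes its hash, and therefore every
# hash after it -- this is what makes history tamper-evident.
tampered = block_hash(chain[0], "block 1 TAMPERED")
```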
> > What if we decide, for example, to combine and prune the blockchain in its entirety every 999 blocks into one block (genesis block not included in the count)?
> > How would this work? Once block 1000 has been created, the network would be waiting for a special "pruned block", and until this block was created and verified, block 1001 would not be accepted by any nodes.
> >
> > This pruned block would prune everything from block 2 to block 1000, leaving only the genesis block. Blocks 2 through 1000 would be summed up to create a single transaction representing all transactions which occurred in those 999 blocks.
> > And its hash pointer would be the genesis block.
> >
> > This block would then be verified by the full nodes, which, if they accepted it, would then be willing to accept a new block (block 1001, not counting the pruned block).
> > The new block 1001 would use the pruned block as its hash-pointer reference. And the count would begin again toward the next 1000. The next pruned block would be created, with its hash pointer referencing the genesis block. And so on.
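The pruning scheme described in the preceding paragraphs could be sketched as follows — a toy model under the proposal's own assumptions, where `summary` stands in for the "summed up transaction":

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

genesis = block_hash("", "genesis")

# The proposed pruned block: it summarizes blocks 2..1000 into one
# "summed up transaction" and points straight back at genesis.
summary = "net effect of all txs in blocks 2-1000"
pruned_block = block_hash(genesis, summary)

# Block 1001 then chains off the pruned block, so the full history of
# blocks 2..1000 is no longer needed to extend the chain.
block_1001 = block_hash(pruned_block, "block 1001 txs")
```

Note this sketch also makes Thomas's objection concrete: nothing in `pruned_block` proves the work that went into the discarded blocks, which is exactly the authenticity gap he raises.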
> > In this way the ledger will always be a maximum of 1000 blocks.

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev