Date: Mon, 29 Jul 2019 02:49:04 +0000
From: ZmnSCPxj
To: Mike Brooks
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>, pieter.wuille@gmail.com
Subject: Re: [bitcoin-dev] PubRef - Script OP Code For Public Data References

Good morning Mike,

> I think that this implication affects other applications built on the blockchain, not just the PubRef proposal:

I believe not?

Current applications use txids to refer to previous transactions, so even a short-ranged history rewrite will mostly not affect them --- they can just rebroadcast the transactions they are spending and get those reconfirmed again.
There is admittedly a risk of double-spending, but each individual application can simply spend only deeply-confirmed transactions, and tune what it considers "deeply-confirmed" depending on how large the value being spent is.

The point is that history rewrites are costly, but if the value being put in a `scriptPubKey` that uses `OP_PUBREF` is large enough, it may justify the cost of a history rewrite --- whereas if the value is small, the individual application (which refers to transactions by their txid anyway) can generally assume miners will not bother to history-rewrite.

Since `OP_PUBREF` would be a consensus rule, we need to select a "deeply-confirmed" point that is deep enough for *all* cases, unlike applications **on top of the blockchain**, which can tune their rule of "deeply-confirmed" based on value.
Thus my suggestion to use 100, the depth we already consider "deep enough" to risk allowing miners to spend (and sell) their coinbase outputs.

Lightning uses a "short channel ID", which is basically an index of block number + index of transaction + index of output, to refer to channels.
This is not a problem, however, even in the case of short-ranged history rewrites.
The short channel ID is only used for public routing.
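For concreteness, here is a minimal sketch of how such a block/transaction/output index can be packed into a single 64-bit value; it assumes the BOLT 7 `short_channel_id` layout (3 bytes block height, 3 bytes transaction index, 2 bytes output index), and the Python helper names are mine, not from any existing implementation:

    def encode_short_channel_id(block_height: int, tx_index: int, output_index: int) -> int:
        # Pack as: 3 bytes block height | 3 bytes tx index | 2 bytes output index.
        assert 0 <= block_height < 2**24
        assert 0 <= tx_index < 2**24
        assert 0 <= output_index < 2**16
        return (block_height << 40) | (tx_index << 16) | output_index

    def decode_short_channel_id(scid: int) -> tuple[int, int, int]:
        # Inverse of the packing above.
        return (scid >> 40) & 0xFFFFFF, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF

    # Example: output 1 of the transaction at index 2000 in block 600000.
    scid = encode_short_channel_id(600000, 2000, 1)
    assert decode_short_channel_id(scid) == (600000, 2000, 1)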
Between the channel counterparties, no security is based on the short channel ID being stable; an unstable short channel ID just loses you potential routing fees from the channel (and can be fixed by increasing your own "deeply-confirmed" level before you announce the channel for public routing).

> > There is a potential for a targeted attack where a large payout going to a `scriptPubKey` that uses `OP_PUBREF` on a recently-confirmed transaction finds that recently-confirmed transaction is replaced with one that pays to a different public key, via a history-rewrite attack.
> > Such an attack is doable by miners, and if we consider that we accept 100 blocks for miner coinbase maturity as "acceptably low risk" against miner shenanigans, then we might consider that 100 blocks might be acceptable for this also.
> > Whether 100 is too high or not largely depends on your risk appetite.
>
> I agree 100% this attack is unexpected and very interesting.

It is precisely because of this possibility that we tend to avoid making SCRIPT validity dependent on anything that is not in the transaction itself.
We would have to re-evaluate the SCRIPT every time there is a chain tip reorganization (increasing validation CPU load), unless we do something like "only allow `OP_PUBREF` to refer to data that is more than 100 blocks confirmed".

> However, I find the arbitrary '100' to be unsatisfying - I'll have to do some more digging. It would be interesting to trigger this on the testnet to see what happens. Do you know if anyone has pushed these limits? I am so taken by this attack I might attempt it.
>
> > Data derived from > 220Gb of perpetually-growing blockchain is hardly, to my mind, "only needs an array".
>
> There are other open source projects that have to deal with larger data sets and have accounted for the real-world limits on computability. Apache HTTPD's Bucket-Brigade comes to mind, which has been well tested and can account for limited RAM when accessing linear data structures. For a more general purpose utility leveldb (bsd-license) provides random access to arbitrary data collections.

Which is the point: we need to use *something*, the details need to be considered during implementation, implementation details may leak into the effective spec (e.g. DER-encoding), etc.

> Pruning can also be a real asset for PubRef. If all transactions for a wallet have been pruned, then there is no need to index this PubRef - a validator can safely skip over it.

What?
The problem with a transaction being pruned is that the data in it might now be used in a *future* `OP_PUBREF`.
Further, pruned nodes are still full validators --- transactions may be pruned, but the pruned node will ***still*** validate any `OP_PUBREF` it encounters, because it is still a full validator; it just does not archive old blocks in local storage.

Regards,
ZmnSCPxj
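To illustrate the "only allow `OP_PUBREF` to refer to data that is more than 100 blocks confirmed" idea mentioned above: `OP_PUBREF` is only a proposal, so the constant and helper below are hypothetical sketches, not an implementation of anything that exists.

    # Hypothetical sketch only: OP_PUBREF is a proposal, not an implemented opcode.
    PUBREF_MATURITY = 100  # minimum burial depth, mirroring coinbase maturity

    def pubref_target_is_usable(ref_block_height: int, chain_tip_height: int) -> bool:
        # Data referenced by a hypothetical OP_PUBREF would only be usable once it is
        # buried deeply enough that a short-ranged history rewrite cannot invalidate
        # the script, so the SCRIPT never needs re-evaluation on an ordinary reorg.
        confirmations = chain_tip_height - ref_block_height + 1
        return confirmations > PUBREF_MATURITY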