From: Olaoluwa Osuntokun
Date: Tue, 12 Jun 2018 16:58:50 -0700
To: Gregory Maxwell
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

> An example of that cost is you arguing against specifying and supporting the
> design that is closer to one that would be softforked, which increases the
> time until we can make these filters secure because it
> slows convergence on the design of what would get committed

Agreed: since the commitment is just flat out better, and also less code to
validate compared to the cross-p2p validation, the filter should be as close
as possible to the committed version. This way, wallets and other apps won't
need to modify their logic in X months when the commitment is rolled out.

> Great point, but it should probably exclude coinbase OP_RETURN output.
> This would exclude the current BIP141 style commitment and likely any
> other.

Definitely. I chatted offline with sipa recently, and he suggested this as well.
Upside is that the filters will get even smaller, and also the first filter
type becomes even more of a "barebones" wallet filter. If folks really want
to also search OP_RETURN in the filter (though no widely deployed
applications I know of really use it), then an additional filter type can be
added in the future. It would need to be special cased to filter out the
commitment itself.

Alright, color me convinced! I'll further edit my open BIP 158 PR to:

  * exclude all OP_RETURN
  * switch to prev scripts instead of outpoints
  * update the test vectors to include the prev scripts from blocks in
    addition to the block itself

-- Laolu


On Sat, Jun 9, 2018 at 8:45 AM Gregory Maxwell <greg@xiph.org> wrote:

> > So what's the cost in using
> > the current filter (as it lets the client verify the filter if they want
> > to,
>
> An example of that cost is you arguing against specifying and
> supporting the design that is closer to one that would be softforked,
> which increases the time until we can make these filters secure
> because it slows convergence on the design of what would get
> committed.
>
> >> I don't agree at all, and I can't see why you say so.
> >
> > Sure it doesn't _have_ to, but from my PoV as "adding more commitments" is
> > on the top of every developer's wish list for additions to Bitcoin, it would
> > make sense to coordinate on an "ultimate" extensible commitment once, rather
> > than special case a bunch of distinct commitments. I can see arguments for
> > either really.
>
> We have an extensible commitment style via BIP141 already. I don't see
> why this in particular demands a new one.
>
> >   1. The current filter format (even moving to prevouts) cannot be committed
> >      in this fashion as it indexes each of the coinbase output scripts. This
> >      creates a circular dependency: the commitment is modified by the
> >      filter,
>
> Great point, but it should probably exclude coinbase OP_RETURN output.
> This would exclude the current BIP141 style commitment and likely any
> other.
>
> Should I start a new thread on excluding all OP_RETURN outputs from
> BIP-158 filters for all transactions? -- they can't be spent, so
> including them just pollutes the filters.
>
> >   2. Since the coinbase transaction is the first in a block, it has the
> >      longest merkle proof path. As a result, it may be several hundred bytes
> >      (and grows with future capacity increases) to present a proof to the
>
> If 384 bytes is a concern, isn't 3840 bytes (the filter size
> difference is in this ballpark) _much_ more of a concern? The path to the
> coinbase transaction grows only logarithmically, so further capacity
> increases are unlikely to matter much, but the filter size increases
> linearly and so it should be much more of a concern.
>
> > In regards to the second item above, what do you think of the old Tier Nolan
> > proposal [1] to create a "constant" sized proof for future commitments by
> > constraining the size of the block and placing the commitments within the
> > last few transactions in the block?
>
> I think it's a fairly ugly hack, esp. since it requires that mining
> template code be able to stuff the block if they just don't know
> enough actual transactions -- which means having a pool of spendable
> outputs in order to mine, managing private keys, etc. It also
> requires downstream software not to tinker with the transaction count
> (which I wish it didn't, but as of today it does). A factor-of-two
> difference in capacity -- if you constrain to get the smallest possible
> proof -- is pretty stark; optimal txn selection with this cardinality
> constraint would be pretty weird, etc.
>
> If the community considers tree depth for proofs like that to be such
> a concern as to take on technical debt for that structure, we should
> probably be thinking about more drastic (incompatible) changes... but
> I don't think it's actually that interesting.
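[To make the logarithmic-vs-linear asymmetry above concrete, here's a quick
sketch. The 384-byte figure is from the quoted text; the transaction counts
and the 32-byte hash size are illustrative assumptions, not measurements.]

```python
import math

HASH_SIZE = 32  # bytes per sibling hash in a merkle branch


def coinbase_proof_size(num_txns: int) -> int:
    """Bytes needed for the merkle branch linking the coinbase txn to the
    merkle root: one 32-byte hash per tree level, so the proof grows only
    logarithmically with the number of transactions in the block."""
    return HASH_SIZE * math.ceil(math.log2(num_txns))


# Doubling block capacity adds just one more hash (32 bytes) to the proof,
# while the filter itself grows linearly with the number of indexed items.
print(coinbase_proof_size(2048))  # 352
print(coinbase_proof_size(4096))  # 384
```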
> > I don't think it's fair to compare those that wish to implement this proposal
> > (and actually do the validation) to the legacy SPV software that to my
> > knowledge is all but abandoned. The project I work on that seeks to deploy
>
> Yes, maybe it isn't. But then that just means we don't have good
> information.
>
> When a lot of people were choosing Electrum over SPV wallets, back when
> those SPV wallets weren't abandoned, sync time was frequently cited as
> an actual reason. BIP 158 makes that worse, not better. So while I'm
> hopeful, I'm also somewhat sceptical. Certainly, things that reduce
> the size of the 158 filters make them seem more likely to be a success
> to me.
>
> > too difficult to implement "full" validation, as they're bitcoin developers
> > with quite a bit of experience.
>
> ::shrugs:: Above you're also arguing against fetching down to the
> coinbase transaction to save a couple hundred bytes a block, which
> makes it impossible to validate a half dozen other things (including,
> as mentioned in the other threads, the depth fidelity of returned proofs).
> There are a lot of reasons why things don't get implemented other than
> experience! :)
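P.S. For anyone following along, the PR edits listed above amount to roughly
the following per-block item selection. This is only a minimal sketch, not
the BIP 158 reference code: `Tx` and `TxOut` are hypothetical stand-in types,
and a real filter hashes these items into a Golomb-coded set rather than
collecting them in a Python set.

```python
from dataclasses import dataclass

OP_RETURN = 0x6a  # scripts beginning with this opcode are provably unspendable


@dataclass
class TxOut:
    script: bytes


@dataclass
class Tx:
    prev_scripts: list  # scripts of the outputs being spent (empty for coinbase)
    outputs: list       # newly created TxOuts


def filter_items(txns):
    """Collect the distinct scripts a basic filter would index for one block:
    prev scripts (rather than spent outpoints) plus created output scripts,
    with every OP_RETURN output excluded -- they can never be spent, and
    skipping them also keeps a future coinbase commitment out of the filter."""
    items = set()
    for tx in txns:
        # Index the previous output scripts being spent, so light clients can
        # watch addresses without tracking individual outpoints.
        for script in tx.prev_scripts:
            items.add(script)
        # Index newly created output scripts, skipping OP_RETURN outputs.
        for out in tx.outputs:
            if out.script[:1] == bytes([OP_RETURN]):
                continue
            items.add(out.script)
    return items
```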