From: Jeremy Rubin
Date: Sun, 6 Mar 2022 13:21:57 +0000
To: Christian Decker
Cc: Bitcoin Protocol Discussion, Anthony Towns
Subject: Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

Hi Christian,

For that purpose I'd recommend having a checksig extra, <data> <n> <sig> <pk> checksigextra, that allows N extra data items on the stack in addition to the txn hash. This would allow signers to sign some additional arguments, but it would not be an annex, since the values would not have any consensus meaning (whereas the annex is designed to have one).

I've previously discussed this for eltoo, giving signatures an explicit extra seqnum, but it can be generalized as above.

W.r.t. pinning, if the annex is a pure function of the script execution, then there's no issue with letting it be mutable (e.g. for a validation cost hint). But permitting both validation cost commitments and stack readability is asking too much of the annex IMO.

On Sun, Mar 6, 2022, 1:13 PM Christian Decker wrote:

> One thing that we recently stumbled over was that we use CLTV in eltoo not for timelock but to have a comparison between two committed numbers coming from the spent and the spending transaction (ordering requirement of states). We couldn't use a number on the stack of the scriptSig as the signature doesn't commit to it, which is why we commandeered nLocktime values that are already in the past.
>
> With the annex we could have a way to get a committed-to number we can pull onto the stack, and free the nLocktime for other uses again. It'd also be less roundabout to explain in classes :-)
>
> An added benefit would be that update transactions, being singlesig, can be combined into larger transactions by third parties or watchtowers to amortize some of the fixed cost of getting them confirmed, allowing on-path aggregation basically (each node can group and aggregate transactions as they forward them). This is currently not possible since all the transactions that we'd like to batch would have to have the same nLocktime at the moment.
>
> So I think it makes sense to partition the annex into a global annex shared by the entire transaction, and one for each input. Not sure if one for outputs would also make sense, as it'd bloat the utxo set and could be emulated by using the input that is spending it.
>
> Cheers,
> Christian
>
> On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Fri, Mar 04, 2022 at 11:21:41PM +0000, Jeremy Rubin via bitcoin-dev wrote:
>> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>>
>> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>>
>> includes some discussion on that topic from the taproot review meetings.
>>
>> The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.
>>
>> The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.
>>
>> It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.
>>
>> > The BIP is tight lipped about its purpose
>>
>> BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.
>>
>> > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
>>
>> If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.
>>
>> The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426 witness bytes.
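
To make that tag/length/value idea concrete, here's a rough Python sketch of the "0x010201a4" example. The tag number, the "extra weight" rule, and the entry encoding are all made up for illustration -- BIP341 specifies none of this, and the annex flag/length prefix is left out:

    # Hypothetical TLV annex entries; nothing here is specified by BIP341.
    EXTRA_WEIGHT_TAG = 1  # "treat this tx as N weight units heavier"

    def encode_annex_entry(tag: int, value: int) -> bytes:
        """Encode one tag/length/value entry, value as big-endian bytes."""
        payload = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
        return bytes([tag, len(payload)]) + payload

    def decode_annex_entries(annex: bytes) -> dict:
        """Parse a concatenation of tag/length/value entries into {tag: value}."""
        entries, i = {}, 0
        while i < len(annex):
            tag, length = annex[i], annex[i + 1]
            entries[tag] = int.from_bytes(annex[i + 2:i + 2 + length], "big")
            i += 2 + length
        return entries

    def effective_weight(serialized_weight: int, annex: bytes) -> int:
        """Weight the transaction would be charged, including any bump."""
        bump = decode_annex_entries(annex).get(EXTRA_WEIGHT_TAG, 0)
        return serialized_weight + bump

    annex = encode_annex_entry(EXTRA_WEIGHT_TAG, 420)
    assert annex.hex() == "010201a4"        # matches the example above
    print(effective_weight(1000, annex))    # 1420: a 1000-weight tx charged as 1420
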
>> The example scenario is that if there was an opcode to verify a zero-knowledge proof, eg I think bulletproof range proofs are something like 10x longer than a signature, but require something like 400x the validation time. Since checksig has a validation weight of 50 units, a bulletproof verify might have a 400x greater validation weight, ie 20,000 units, while your witness data is only 650 bytes serialized. In that case, we'd need to artificially bump the weight of your transaction up by the missing 19,350 units, or else an attacker could fill a block with perhaps 6000 bulletproofs costing the equivalent of 120M signature operations, rather than the 80k sigops we currently expect as the maximum in a block. Seems better to just have "0x01024b96" stuck in the annex, than 19kB of zeroes.
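
Running the rough numbers from that paragraph (all inputs are the quoted estimates, not consensus constants; a back-of-the-envelope sketch only):

    MAX_BLOCK_WEIGHT    = 4_000_000  # weight units per block
    CHECKSIG_VALIDATION = 50         # validation-weight units per signature check
    MAX_SIGOPS = MAX_BLOCK_WEIGHT // CHECKSIG_VALIDATION      # 80,000 per block

    BP_WITNESS_BYTES = 650                        # serialized size of one proof
    BP_VALIDATION    = 400 * CHECKSIG_VALIDATION  # 20,000 units per verify

    # Weight the annex entry would need to add so the proof pays for its cost
    # (0x4b96 == 19,350, matching the "0x01024b96" entry above):
    missing_weight = BP_VALIDATION - BP_WITNESS_BYTES          # 19,350

    # Without that bump, a block stuffed with such proofs:
    proofs_per_block = MAX_BLOCK_WEIGHT // BP_WITNESS_BYTES    # ~6,150
    total_validation = proofs_per_block * BP_VALIDATION        # ~123M validation units
    checksig_equiv   = total_validation // CHECKSIG_VALIDATION # ~2.46M checksigs' worth,
                                                               # vs the ~80k a block of
                                                               # plain signatures allows
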
>> > Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack
>>
>> I think you'd want to have a way of accessing individual entries from the annex, rather than the annex as a single unit.
>>
>> > Now suppose that I have a computation that I am running in a script as follows:
>> >
>> > OP_ANNEX
>> > OP_IF
>> >     `some operation that requires annex to be <1>`
>> > OP_ELSE
>> >     OP_SIZE
>> >     `some operation that requires annex to be len(annex) + 1 or does a checksig`
>> > OP_ENDIF
>> >
>> > Now every time you run this,
>>
>> You only run a script from a transaction once, at which point its annex is known (a different annex gives a different wtxid and breaks any signatures), and can't reference previous or future transactions' annexes...
>>
>> > Because the Annex is signed, and must be the same, this can also be inconvenient:
>>
>> The annex is committed to by signatures in the same way nVersion, nLockTime and nSequence are committed to by signatures; I think it helps to think about it in a similar way.
>>
>> > Suppose that you have a Miniscript that is something like: and(or(PK(A), PK(A')), X, or(PK(B), PK(B'))).
>> >
>> > A or A' should sign with B or B'. X is some sort of fragment that might require a value that is unknown (and maybe recursively defined?) so therefore if we send the PSBT to A first, which commits to the annex, and then X reads the annex and says it must be something else, A must sign again. So you might say, run X first, and then sign with A and C or B. However, what if the script somehow detects the bitstring WHICH_A WHICH_B and has a different Annex per selection (e.g., interpret the bitstring as an int and annex must == that int). Now, given and(or(K1, K1'),... or(Kn, Kn')) we end up with needing to pre-sign 2**n annex values somehow... this seems problematic theoretically.
>>
>> Note that you need to know what the annex will contain before you sign, since the annex is committed to via the signature. If "X" will need entries in the annex that aren't able to be calculated by the other parties, then they need to be the first to contribute to the PSBT, not A.
>>
>> I think the analogy to locktimes would be "I need the locktime to be at least block 900k, should I just sign that now, or check that nobody else is going to want it to be block 950k or something? Or should I just sign with nLockTime at 900k, 910k, 920k, 930k, etc and let someone else pick the right one?" The obvious solution is just to work out what the nLockTime should be first, then run signing rounds. Likewise, work out what the annex should be first, then run the signing rounds.
>>
>> CLTV also has the problem that if you have one script fragment with CLTV by time, and another with CLTV by height, you can't come up with an nLockTime that will ever satisfy both. If you somehow have script fragments that require incompatible interpretations of the annex, you're likewise going to be out of luck.
>>
>> Having a way of specifying locktimes in the annex can solve that particular problem with CLTV (different inputs can sign different locktimes, and you could have different tags for by-time/by-height so that even the same input can have different clauses requiring both), but the general problem still exists.
>>
>> (eg, you might have per-input by-height absolute locktimes as annex entry 3, and per-input by-time absolute locktimes as annex entry 4, so you might convert:
>>
>>   "900e3 CLTV DROP" -> "900e3 3 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"
>>
>>   "500e6 CLTV DROP" -> "500e6 4 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"
>>
>> for height/time locktime checks respectively)
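
As a rough sketch of how those per-input annex locktimes could fit together -- the tag numbers (3 = by-height, 4 = by-time), the consensus rule, and the PUSH_ANNEX_ENTRY semantics are all hypothetical here, following the conversion above:

    # Hypothetical per-input absolute-locktime tags; not defined by BIP341.
    BY_HEIGHT_TAG, BY_TIME_TAG = 3, 4

    def input_locktime_ok(annex_entries: dict, tip_height: int, median_time: int) -> bool:
        """Consensus-side check (no script needed): the input may only be spent
        once the chain has reached the height/time committed to in its annex."""
        if annex_entries.get(BY_HEIGHT_TAG, 0) > tip_height:
            return False
        if annex_entries.get(BY_TIME_TAG, 0) > median_time:
            return False
        return True

    def script_requires_at_least(annex_entries: dict, tag: int, minimum: int) -> bool:
        """Script-side check, i.e. what
           '<minimum> <tag> PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY'
        would enforce: the committed value is at least the script's minimum."""
        return annex_entries.get(tag, 0) >= minimum

    # e.g. the "900e3 3 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY" branch:
    entries = {BY_HEIGHT_TAG: 905_000}
    assert script_requires_at_least(entries, BY_HEIGHT_TAG, 900_000)
    assert not input_locktime_ok(entries, tip_height=899_999, median_time=0)

The script only constrains what the annex commits to; the actual "not too early" enforcement happens outside script, mirroring how CLTV constrains nLockTime.
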
>> > Of course this wouldn't be miniscript then. Because miniscript is just for the well behaved subset of script, and this seems ill behaved. So maybe we're OK?
>>
>> The CLTV issue hit miniscript:
>>
>> https://medium.com/blockstream/dont-mix-your-timelocks-d9939b665094
>>
>> > But I think the issue still arises where suppose I have a simple thing like: and(COLD_LOGIC, HOT_LOGIC) where both contain a signature, if COLD_LOGIC and HOT_LOGIC can both have different costs, I need to decide what logic each satisfier for the branch is going to use in advance, or sign all possible sums of both our annex costs? This could come up if cold/hot e.g. use different numbers of signatures / use checksigCISAadd which maybe requires an annex argument.
>>
>> Signatures pay for themselves -- every signature is 64 or 65 bytes, but only has 50 units of validation weight. (That is, a signature check is about 50x the cost of hashing 520 bytes of data, which is the next highest cost operation we have, and is treated as costing 1 unit, and immediately paid for by the 1 byte that writing OP_HASH256 takes up.)
>>
>> That's why the "add cost" use of the annex is only talked about in hypotheticals, not specified -- for reasonable scripts with today's opcodes, it's not needed.
>>
>> If you're doing cross-input signature aggregation, everybody needs to agree on the message they're signing in the first place, so you definitely can't delay figuring out some bits of some annex until after signing.
>>
>> > It seems like one good option is if we just go on and banish the OP_ANNEX. Maybe that solves some of this? I sort of think so. It definitely seems like we're not supposed to access it via script, given the quote from above:
>>
>> How the annex works isn't defined, so it doesn't make any sense to access it from script. When how it works is defined, I expect it might well make sense to access it from script -- in a similar way that the CLTV and CSV opcodes allow accessing nLockTime and nSequence from script.
>>
>> To expand on that: the logic to prevent a transaction confirming too early occurs by looking at nLockTime and nSequence, but script can ensure that an attempt to use "bad" values for those can never be a valid transaction; likewise, consensus may look at the annex to enforce new conditions as to when a transaction might be valid (and can do so without needing to evaluate any scripts), but the individual scripts can make sure that the annex has been set to what the utxo owner considered to be reasonable values.
>>
>> > One solution would be to... just soft-fork it out. Always must be 0. When we come up with a use case for something like an annex, we can find a way to add it back.
>>
>> The point of reserving the annex the way it has been is exactly this -- it should not be used now, but when we agree on how it should be used, we have an area that's immediately ready to be used.
>>
>> (For the cases where you don't need script to enforce reasonable values, reserving it now means those new consensus rules can be used immediately with utxos that predate the new consensus rules -- so you could update offchain contracts from per-tx to per-input locktimes immediately without having to update the utxo on-chain first.)
>>
>> Cheers,
>> aj
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev