From: Christian Decker
Date: Sun, 6 Mar 2022 14:12:52 +0100
To: Anthony Towns, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
One thing that we recently stumbled over was that we use CLTV in eltoo not as a timelock but as a comparison between two committed numbers coming from the spent and the spending transaction (the ordering requirement on states). We couldn't use a number on the stack of the scriptSig, since the signature doesn't commit to it, which is why we commandeered nLocktime values that are already in the past.

With the annex we'd have a way to get a committed-to number that we can pull onto the stack, freeing nLocktime up for other uses again. It'd also be less roundabout to explain in classes :-)
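For illustration, here's a rough Python sketch of that comparison trick (simplified, not the actual eltoo scripts; the 500,000,000 cutoff is the consensus threshold separating height-based from time-based locktimes):

    # The spent output commits a state number, encoded as a unix-timestamp
    # nLocktime that is already in the past, so it never actually delays
    # confirmation. CLTV then acts purely as a ">=" comparison against the
    # spending transaction's nLockTime.

    LOCKTIME_THRESHOLD = 500_000_000  # below: block height, above: unix time

    def state_to_locktime(state: int) -> int:
        return LOCKTIME_THRESHOLD + state

    def cltv_check(committed: int, spending_nlocktime: int) -> bool:
        # The part of OP_CHECKLOCKTIMEVERIFY that matters here: both values
        # must be of the same kind (height vs time), and the spending tx's
        # nLockTime must be at least the committed value.
        same_kind = (committed < LOCKTIME_THRESHOLD) == (spending_nlocktime < LOCKTIME_THRESHOLD)
        return same_kind and spending_nlocktime >= committed

    # An update for state 12 can spend the state-11 output, but not vice versa.
    assert cltv_check(state_to_locktime(11), state_to_locktime(12))
    assert not cltv_check(state_to_locktime(12), state_to_locktime(11))

With an annex entry carrying the state number instead, the same comparison could be done without touching nLocktime at all.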
An added benefit would be that update transactions, being single-sig, can be combined into larger transactions by third parties or watchtowers to amortize some of the fixed cost of getting them confirmed, basically allowing on-path aggregation (each node can group and aggregate transactions as it forwards them). This is currently not possible, since all the transactions we'd like to batch would have to share the same nLocktime.

So I think it makes sense to partition the annex into a global annex shared by the entire transaction, plus one per input. Not sure whether one per output would also make sense, as it'd bloat the UTXO set and could be emulated via the input that ends up spending it.

Cheers,
Christian

On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Mar 04, 2022 at 11:21:41PM +0000, Jeremy Rubin via bitcoin-dev wrote:
> > I've seen some discussion of what the Annex can be used for in Bitcoin.
> > https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
> > includes some discussion on that topic from the taproot review meetings.
>
> The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.
>
> The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.
>
> It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.
>
> > The BIP is tight-lipped about its purpose
>
> BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.
>
> > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
>
> If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.
>
> The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426 witness bytes.
>
> The example scenario is that if there was an opcode to verify a zero-knowledge proof, eg I think bulletproof range proofs are something like 10x longer than a signature, but require something like 400x the validation time. Since checksig has a validation weight of 50 units, a bulletproof verify might have a 400x greater validation weight, ie 20,000 units, while your witness data is only 650 bytes serialized. In that case, we'd need to artificially bump the weight of your transaction up by the missing 19,350 units, or else an attacker could fill a block with perhaps 6000 bulletproofs costing the equivalent of 120M signature operations, rather than the 80k sigops we currently expect as the maximum in a block. Seems better to just have "0x01024b96" stuck in the annex, than 19kB of zeroes.
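To make those byte strings and numbers concrete, here's a small Python sketch of the tag/length/value idea (one-byte tag, one-byte length, big-endian value; the tag numbers and encoding are entirely hypothetical, nothing like this is specified anywhere), plus a sanity check of the weight arithmetic:

    # Hypothetical TLV annex entries: 1-byte tag, 1-byte length, big-endian value.
    # Tag 1 = "treat this transaction as N weight units heavier".

    def encode_entry(tag: int, value: int) -> bytes:
        payload = value.to_bytes(max(1, (value.bit_length() + 7) // 8), "big")
        return bytes([tag, len(payload)]) + payload

    def decode_entries(annex: bytes) -> dict:
        i, entries = 0, {}
        while i < len(annex):
            tag, length = annex[i], annex[i + 1]
            entries[tag] = int.from_bytes(annex[i + 2:i + 2 + length], "big")
            i += 2 + length
        return entries

    assert encode_entry(1, 420).hex() == "010201a4"    # the "420 WU heavier" example
    assert encode_entry(1, 19350).hex() == "01024b96"  # the bulletproof example

    # Bulletproof arithmetic: 400x a checksig's 50 units = 20,000 units of
    # validation weight, minus the 650 bytes the witness already pays for,
    # leaves 19,350 units to add via the annex entry.
    assert 400 * 50 - 650 == 19350
    assert decode_entries(encode_entry(1, 19350)) == {1: 19350}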
> > Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack
>
> I think you'd want to have a way of accessing individual entries from the annex, rather than the annex as a single unit.
>
> > Now suppose that I have a computation that I am running in a script as follows:
> >
> > OP_ANNEX
> > OP_IF
> >     `some operation that requires annex to be <1>`
> > OP_ELSE
> >     OP_SIZE
> >     `some operation that requires annex to be len(annex) + 1 or does a checksig`
> > OP_ENDIF
> >
> > Now every time you run this,
>
> You only run a script from a transaction once, at which point its annex is known (a different annex gives a different wtxid and breaks any signatures), and you can't reference previous or future transactions' annexes...
>
> > Because the Annex is signed, and must be the same, this can also be inconvenient:
>
> The annex is committed to by signatures in the same way nVersion, nLockTime and nSequence are committed to by signatures; I think it helps to think about it in a similar way.
>
> > Suppose that you have a Miniscript that is something like: and(or(PK(A), PK(A')), X, or(PK(B), PK(B'))).
> >
> > A or A' should sign with B or B'. X is some sort of fragment that might require a value that is unknown (and maybe recursively defined?) so therefore if we send the PSBT to A first, which commits to the annex, and then X reads the annex and says it must be something else, A must sign again. So you might say, run X first, and then sign with A and C or B. However, what if the script somehow detects the bitstring WHICH_A WHICH_B and has a different Annex per selection (e.g., interpret the bitstring as an int and annex must == that int). Now, given and(or(K1, K1'),... or(Kn, Kn')) we end up with needing to pre-sign 2**n annex values somehow... this seems problematic theoretically.
>
> Note that you need to know what the annex will contain before you sign, since the annex is committed to via the signature. If "X" will need entries in the annex that aren't able to be calculated by the other parties, then they need to be the first to contribute to the PSBT, not A.
>
> I think the analogy to locktimes would be "I need the locktime to be at least block 900k, should I just sign that now, or check that nobody else is going to want it to be block 950k or something? Or should I just sign with nLockTime at 900k, 910k, 920k, 930k, etc and let someone else pick the right one?" The obvious solution is just to work out what the nLockTime should be first, then run signing rounds. Likewise, work out what the annex should be first, then run the signing rounds.
>
> CLTV also has the problem that if you have one script fragment with CLTV by time, and another with CLTV by height, you can't come up with an nLockTime that will ever satisfy both. If you somehow have script fragments that require incompatible interpretations of the annex, you're likewise going to be out of luck.
>
> Having a way of specifying locktimes in the annex can solve that particular problem with CLTV (different inputs can sign different locktimes, and you could have different tags for by-time/by-height so that even the same input can have different clauses requiring both), but the general problem still exists.
>
> (eg, you might have per-input by-height absolute locktimes as annex entry 3, and per-input by-time absolute locktimes as annex entry 4, so you might convert:
>
>  "900e3 CLTV DROP" -> "900e3 3 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"
>
>  "500e6 CLTV DROP" -> "500e6 4 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"
>
> for height/time locktime checks respectively)
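A quick Python sketch of why a single nLockTime can't serve both of those clauses, and how separate per-tag annex entries (tags 3 and 4 as in the example above, purely hypothetical) would sidestep that:

    LOCKTIME_THRESHOLD = 500_000_000  # nLockTime below this means height, above means time

    def nlocktime_satisfies(nlocktime: int, required: int) -> bool:
        # A single field is interpreted as either a height or a time, so it can
        # only ever satisfy constraints of one kind.
        same_kind = (nlocktime < LOCKTIME_THRESHOLD) == (required < LOCKTIME_THRESHOLD)
        return same_kind and nlocktime >= required

    def annex_satisfies(entries: dict, tag: int, required: int) -> bool:
        # Roughly "<required> <tag> PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY":
        # the signed per-input entry for that tag must be at least the required value.
        return tag in entries and entries[tag] >= required

    # One clause wants height >= 900e3, another wants time >= 500e6.
    # No single nLockTime satisfies both...
    for nlt in (900_000, 500_000_000):
        assert not (nlocktime_satisfies(nlt, 900_000) and nlocktime_satisfies(nlt, 500_000_000))

    # ...but separate annex entries can satisfy both at once.
    entries = {3: 900_000, 4: 500_000_000}
    assert annex_satisfies(entries, 3, 900_000) and annex_satisfies(entries, 4, 500_000_000)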
> > Of course this wouldn't be miniscript then. Because miniscript is just for the well behaved subset of script, and this seems ill behaved. So maybe we're OK?
>
> The CLTV issue hit miniscript:
>
> https://medium.com/blockstream/dont-mix-your-timelocks-d9939b665094
>
> > But I think the issue still arises where suppose I have a simple thing like: and(COLD_LOGIC, HOT_LOGIC) where both contain a signature, if COLD_LOGIC and HOT_LOGIC can both have different costs, I need to decide what logic each satisfier for the branch is going to use in advance, or sign all possible sums of both our annex costs? This could come up if cold/hot e.g. use different numbers of signatures / use checksigCISAadd which maybe requires an annex argument.
>
> Signatures pay for themselves -- every signature is 64 or 65 bytes, but only has 50 units of validation weight. (That is, a signature check is about 50x the cost of hashing 520 bytes of data, which is the next highest cost operation we have, and is treated as costing 1 unit, and immediately paid for by the 1 byte that writing OP_HASH256 takes up.)
>
> That's why the "add cost" use of the annex is only talked about in hypotheticals, not specified -- for reasonable scripts with today's opcodes, it's not needed.
>
> If you're doing cross-input signature aggregation, everybody needs to agree on the message they're signing in the first place, so you definitely can't delay figuring out some bits of some annex until after signing.
>
> > It seems like one good option is if we just go on and banish the OP_ANNEX. Maybe that solves some of this? I sort of think so. It definitely seems like we're not supposed to access it via script, given the quote from above:
>
> How the annex works isn't defined, so it doesn't make any sense to access it from script. When how it works is defined, I expect it might well make sense to access it from script -- in a similar way that the CLTV and CSV opcodes allow accessing nLockTime and nSequence from script.
>
> To expand on that: the logic to prevent a transaction confirming too early occurs by looking at nLockTime and nSequence, but script can ensure that an attempt to use "bad" values for those can never be a valid transaction; likewise, consensus may look at the annex to enforce new conditions as to when a transaction might be valid (and can do so without needing to evaluate any scripts), but the individual scripts can make sure that the annex has been set to what the utxo owner considered to be reasonable values.
>
> > One solution would be to... just soft-fork it out. Always must be 0. When we come up with a use case for something like an annex, we can find a way to add it back.
>
> The point of reserving the annex the way it has been is exactly this -- it should not be used now, but when we agree on how it should be used, we have an area that's immediately ready to be used.
>
> (For the cases where you don't need script to enforce reasonable values, reserving it now means those new consensus rules can be used immediately with utxos that predate the new consensus rules -- so you could update offchain contracts from per-tx to per-input locktimes immediately without having to update the utxo on-chain first.)
>
> Cheers,
> aj
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev