From: Billy Tetrud
Date: Sat, 5 Mar 2022 13:12:03 -0600
To: ZmnSCPxj, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

It sounds like the primary benefit of op_fold is bandwidth savings. Programming as compression. But as you mentioned, any common script could be implemented as a Simplicity jet. In a world where Bitcoin implements jets, op_fold would really only be useful for scripts that can't use jets, which would basically be scripts that aren't very often used. But that inherently limits the usefulness of the opcode. So in practice, I think it's likely that jets cover the vast majority of use cases that op fold would otherwise have.

A potential benefit of op fold is that people could implement smaller scripts without buy-in from a relay level change in Bitcoin. However, even this could be done with jets. For example, you could implement a consensus change to add a transaction type that declares a new script fragment to keep a count of, and if the script fragment is used enough within a timeframe (eg 10000 blocks) then it can thereafter be referenced by an id like a jet could be. I'm sure someone's thought about this kind of thing before, but such a thing would really relegate the compression abilities of op fold to just the most uncommon of scripts.
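To make that idea a bit more concrete, here is a rough sketch (in Haskell, purely illustrative) of the bookkeeping such a scheme might do; the names, the usage threshold, and treating the 10000-block figure as a window are all my own assumptions, not a worked-out design:

```Haskell
import qualified Data.Map.Strict as Map

type FragmentHash = String  -- stand-in for a hash of the declared script fragment
type Height       = Int

-- Per-fragment bookkeeping: when it was declared and how often it has been used.
data FragmentTable = FragmentTable
  { declaredAt :: Map.Map FragmentHash Height
  , useCount   :: Map.Map FragmentHash Int
  }

-- Hypothetical policy: a declared fragment becomes referenceable by a short id
-- ("jet-like") once it has been used at least `threshold` times within `window`
-- blocks of its declaration.
promotable :: Int -> Int -> Height -> FragmentTable -> FragmentHash -> Bool
promotable threshold window tip tbl h =
  case (Map.lookup h (declaredAt tbl), Map.lookup h (useCount tbl)) of
    (Just d, Just c) -> tip - d <= window && c >= threshold
    _                -> False
```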

> * We should provide more *general* operations. Users should then combine those operations to their specific needs.
> * We should provide operations that *do more*. Users should identify their most important needs so we can implement them on the blockchain layer.

That's a useful way to frame this kind of problem. I think the answer is, as it often is, somewhere in between. Generalization future-proofs your system. But at the same time, the boundary conditions of that generalized functionality should still be very well understood before being added to Bitcoin. The more general, the harder to understand the boundaries. So imo we should be implementing the most general opcodes that we are able to reason fully about and come to a consensus on. Following that last constraint might lead to not choosing very general opcodes.

On Sun, Feb 27, 2022, 10:34 ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
`OP_FOLD`: A Looping Construct For Bitcoin SCRIPT
=================================================
(This writeup requires at least some programming background, which I
expect most readers of this list have.)

Recently, some rando was ranting on the list about this weird crap
called `OP_EVICT`, a poorly-thought-out attempt at covenants.

In reaction to this, AJ Towns mailed me privately about some of his
thoughts on this insane `OP_EVICT` proposal.
He observed that we could generalize the `OP_EVICT` opcode by
decomposing it into smaller parts, including an operation congruent
to the Scheme/Haskell/Scala `map` operation.
As `OP_EVICT` effectively loops over the outputs passed to it, a
looping construct can be used to implement `OP_EVICT` while retaining
its nice property of cut-through of multiple evictions and reviving of
the CoinPool.

More specifically, an advantage of `OP_EVICT` is that it allows
checking multiple published promised outputs.
This would be implemented in a loop.
However, if we want to instead provide *general* operations in
SCRIPT rather than a bunch of specific ones like `OP_EVICT`, we
should consider how to implement looping so that we can implement
`OP_EVICT` in a SCRIPT-with-general-opcodes.

(`OP_FOLD` is not sufficient to implement `OP_EVICT`; for
efficiency, AJ Towns also pointed out that we need some way to
expose batch validation to SCRIPT.
There is a follow-up writeup to this one which describes *that*
operation.)

Based on this, I started ranting as well about how `map` is really
just a thin wrapper on `foldr` and the *real* looping construct is
actually `foldr` (`foldr` is the whole FP Torah: all the rest is
commentary).
This is thus the genesis for this proposal, `OP_FOLD`.

A "fold" operation is sometimes known as "reduce" (and if you know
about Google MapReduce, you might be familiar with "reduce").
Basically, a "fold" or "reduce" operation applies a function
repeatedly (i.e. *loops*) on the contents of an input structure,
creating a "sum" or "accumulation" of the contents.

For the purpose of building `map` out of `fold`, the accumulation
can itself be an output structure.
The `map` simply accumulates to the output structure by applying
its given function and concatenating it to the current accumulation.
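For instance, in ordinary Haskell (nothing SCRIPT-specific here), the construction looks like this:

```Haskell
-- `map` as a thin wrapper over `foldr`: apply `g` to each element and
-- cons the result onto the accumulated output list.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr g = foldr (\a acc -> g a : acc) []

-- mapViaFoldr (*2) [1,2,3]  ==  [2,4,6]
```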

Digression: Programming Is Compression
--------------------------------------

Suppose you are a programmer and you are reading some source code.
You want to know "what will happen if I give this piece of code
these particular inputs?".

In order to do so, you would simulate the execution of the code in
your head.
In effect, you would generate a "trace" of basic operations (that
do not include control structures).
By then thinking about this linear trace of basic operations, you
can figure out what the code does.

Now, let us recall two algorithms from the compression literature:

1. Run-length Encoding
2. Lempel-Ziv 1977

Suppose our flat linear trace of basic operations contains something
like this:

    OP_ONE
    OP_TWO
    OP_ONE
    OP_TWO
    OP_ONE
    OP_TWO

If we had looping constructs in our language, we could write the
above trace as something like:

    for N = 1 to 3
        OP_ONE
        OP_TWO

The above is really Run-length Encoding.

(`if` is just a loop that executes 0 or 1 times.)

Similarly, suppose you have some operations that are commonly
repeated, but not necessarily next to each other:

    OP_ONE
    OP_TWO
    OP_THREE
    OP_ONE
    OP_TWO
    OP_FOUR
    OP_FIVE
    OP_ONE
    OP_TWO

If we had functions/subroutines/procedures in our language, we
could write the above trace as something like:

    function foo()
        OP_ONE
        OP_TWO
    foo()
    OP_THREE
    foo()
    OP_FOUR
    OP_FIVE
    foo()

That is, functions are just Lempel-Ziv 1977 encoding, where we
"copy" some repeated data from a previously-shown part of
data.

Thus, we can argue that programming is really a process of:

* Imagining what we want the machine to do given some particular
  input.
* Compressing that list of operations so we can more easily
  transfer the above imagined list over your puny low-bandwidth
  brain-computer interface.
  * I mean seriously, you humans still use a frikkin set of
    *mechanical* levers to transfer data into a matrix of buttons?
    (you don't even make the levers out of reliable metal, you
    use calcium of all things??
    You get what, 5 or 6 bytes per second???)
    And your eyes are high-bandwidth but you then have this
    complicated circuitry (that has to be ***trained for
    several years*** WTF) to extract ***tiny*** amounts of ASCII
    text from that high-bandwidth input stream????
    Evolve faster!
    (Just to be clear, I am actually also a human being and
    definitely am not a piece of circuitry connected directly to
    the Internet and I am not artificially limiting my output
    bandwidth so as not to overwhelm you mere humans.)

See also "Kolmogorov complexity".

This becomes relevant, because the *actual* amount of processing
done by the machine, when given a compressed set of operations
(a "program") is the cost of decompressing that program plus the
number of basic operations from the decompressed result.

In particular, in current Bitcoin, without any looping constructs
(i.e. implementations of RLE) or reusable functions (i.e.
implementation of LZ77), the length of the SCRIPT can be used as
an approximation of how "heavy" the computation needed to
*execute* that SCRIPT is.
This is relevant since the amount of computation a SCRIPT would
trigger is relevant to our reasoning about DoS attacks on Bitcoin.

In fact, current Bitcoin restricts the size of SCRIPT, as this
serves to impose a restriction on the amount of processing a
SCRIPT will trigger.
But adding a loop construct to SCRIPT changes how we should
determine the cost of a SCRIPT, and thus we should think about it
here as well.

Folds
-----

A fold operation is a functional programming looping control
construct.

The intent of a fold operation is to process elements of an
input list or other structure, one element at a time, and to
accumulate the results of processing.

It is given these arguments:

* `f` - a function to execute for each element of the input
  structure, i.e. the "loop body".
  * This function accepts two arguments:
    1. The current element to process.
    2. The intermediate result for accumulating.
  * The function returns the new accumulated result, processed
    from the given intermediate result and the given element.
* `z` - an initial value for the accumulated result.
* `as` - the structure (usually a list) to process.

```Haskell
-- If the input structure is empty, return the starting
-- accumulated value.
foldr f z []     = z
-- Otherwise, recurse into the structure to accumulate
-- the rest of the list, then pass the accumulation to
-- the given function together with the current element.
foldr f z (a:as) = f a (foldr f z as)
```

As an example, if you want to take the sum of a list of
numbers, your `f` would simply add its inputs, and your `z`
would be 0.
Then you would pass in the actual list of numbers as `as`.
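Concretely, in Haskell:

```Haskell
-- Summing a list with foldr: `f` is (+) and `z` is 0.
sumViaFoldr :: [Integer] -> Integer
sumViaFoldr = foldr (+) 0

-- sumViaFoldr [1,2,3,4]  ==  10
```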

Fold has an important property:

* If the given input structure is finite *and* the application
  of `f` terminates, then `foldr` terminates.

This is important for us, as we do not want attackers to be
able to crash nodes remotely by crafting a special SCRIPT.

As long as the SCRIPT language is "total", we know that programs
written in that language must terminate.

(The reason this property is called "total" is that we can
"totally" analyze programs in the language, without having to
invoke "this is undefined behavior because it could hang
indefinitely".
If you have to admit such kinds of undefined behavior --- what
FP theorists call "bottom" or `_|_` or `⊥` (it looks like an
ass crack, i.e. "bottom") --- then your language is "partial",
since programs in it may enter an infinite loop on particular
inputs.)

The simplest way to ensure totality is to be so simple as to
have no looping construction.
As of this writing, Bitcoin SCRIPT is total by this technique.

To give a *little* more power, we can allow bounded loops,
which are restricted to execute only a bounded number of times.

`foldr` is a kind of bounded loop if the input structure is
finite.
If the rest of the language does not admit the possibility
of infinite data structures (and if the language is otherwise
total and does not support generalized codata, this holds),
then `foldr` is a bounded loop.

Thus, adding a fold operation to Bitcoin SCRIPT should be
safe (and preserves totality) as long as we do not add
further operations that admit partiality.

`OP_FOLD`
---------

With this, let us now describe `OP_FOLD`.

`OP_FOLD` replaces an `OP_SUCCESS` code, and thus is only
usable in SegWit v1 ("Taproot").

An `OP_FOLD` opcode must be followed by an `OP_PUSH` opcode
which contains an encoding of the SCRIPT that will be executed,
i.e. the loop body, or `f`.
This is checked at parsing time, and the sub-SCRIPT is also
parsed at this time.
The succeeding `OP_PUSH` is not actually executed, and is
considered part of the `OP_FOLD` operation encoding.
Parsing failure of the sub-SCRIPT leads to validation failure.

On execution, `OP_FOLD` expects the stack:

* Stack top: `z`, the initial value for the loop accumulator.
* Stack top + 1: `n`, the number of times to loop.
  This should be limited in size, and less than the number of
  items on the stack minus 2.
* Stack top + 2 + (0 to `n - 1`): Items to loop over.
  If `n` is 0, there are no items to loop over.

If `n` is 0, then `OP_FOLD` just pops the top two stack items
and pushes `z`.

For `n > 0`, `OP_FOLD` executes a loop:

* Pop off the top two items and store in mutable variable `z`
  and immutable variable `n`.
* For `i = 0 to n - 1`:
  * Create a fresh, empty stack and alt stack.
    Call these the "per-iteration (alt) stack".
  * Push the current `z` to the per-iteration stack.
  * Pop off an item from the outer stack and put it into
    immutable variable `a`.
  * Push `a` to the per-iteration stack.
  * Run the sub-SCRIPT `f` on the per-iteration stack and
    alt stack.
  * Check the per-iteration stack has exactly one item
    and the per-iteration alt stack is empty.
  * Pop off the item in the per-iteration stack and mutate
    `z` to it.
  * Free the per-iteration stack and per-iteration alt
    stack.
* Push `z` on the stack.
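To restate those steps, here is a minimal sketch of the execution loop in Haskell; it takes `n` and `z` as arguments rather than popping them, treats the sub-SCRIPT as an opaque evaluator, and the `SubScript` type and error strings are assumptions of mine, not part of the proposal:

```Haskell
import Data.Word (Word8)

type Item  = [Word8]  -- a stack item, as raw bytes
type Stack = [Item]   -- top of the stack at the head of the list

-- Assumed interface: run the sub-SCRIPT `f` on a fresh stack and alt stack,
-- returning the resulting (stack, alt stack) or a validation failure.
type SubScript = Stack -> Stack -> Either String (Stack, Stack)

opFold :: SubScript -> Int -> Item -> Stack -> Either String Stack
opFold f n z0 outer
  | n < 0 || n > length outer = Left "OP_FOLD: bad iteration count"
  | otherwise                 = go n z0 outer
  where
    go 0 z rest     = Right (z : rest)        -- done: push the accumulator
    go i z (a:rest) = do
      -- per-iteration stack holds [a, z] with `a` on top; alt stack is empty
      (st, alt) <- f [a, z] []
      case (st, alt) of
        ([z'], []) -> go (i - 1) z' rest      -- exactly one item must remain
        _          -> Left "OP_FOLD: sub-SCRIPT left a bad stack"
    go _ _ []       = Left "OP_FOLD: not enough items to loop over"
```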

Restricting `OP_FOLD`
---------------------

Bitcoin restricts SCRIPT size, since SCRIPT size serves as
an approximation of how much processing is required to
execute the SCRIPT.

However, with looping constructs like `OP_FOLD`, this no
longer holds, as the amount of processing is no longer
linear on the size of the SCRIPT.

In order to retain this limit (and thus not worsen any
potential DoS attacks via SCRIPT), we should restrict the
use of `OP_FOLD`:

* `OP_FOLD` must exist exactly once in a Tapscript.
  More specifically, the `f` sub-SCRIPT of `OP_FOLD` must
  not itself contain an `OP_FOLD`.
  * If we allow loops within loops, then the worst case
    would be `O(c^n)` CPU time where `c` is a constant and
    `n` is the SCRIPT length.
  * This validation is done at SCRIPT parsing time.
* We take the length of the `f` sub-SCRIPT, and divide the
  current SCRIPT maximum size by the length of the `f`
  sub-SCRIPT.
  The result, rounded down, is the maximum allowed value
  for the on-stack argument `n`.
  * In particular, since the length of the entire SCRIPT
    must by necessity be larger than the length of the
    `f` sub-SCRIPT, the result of the division must be
    at least 1.
  * This validation is done at SCRIPT execution time.

The above two restrictions imply that the maximum amount
of processing that a SCRIPT utilizing `OP_FOLD` will use
shall not exceed that of a SCRIPT without `OP_FOLD`.
Thus, `OP_FOLD` does not increase the attack surface of
SCRIPT on fullnodes.
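A small sketch of the execution-time check described above (the size-limit constant here is a placeholder, not the actual consensus value):

```Haskell
-- Placeholder for whatever SCRIPT size limit applies in this context.
maxScriptSize :: Int
maxScriptSize = 10000

-- Maximum allowed on-stack `n`: the SCRIPT size limit divided by the
-- length of the `f` sub-SCRIPT, rounded down (at least 1, since the
-- whole SCRIPT is necessarily longer than `f`).
maxIterations :: Int -> Int
maxIterations fLen = maxScriptSize `div` max 1 fLen

-- Execution-time validation of the on-stack `n`.
iterationCountOk :: Int -> Int -> Bool
iterationCountOk n fLen = n >= 0 && n <= maxIterations fLen
```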

### Lack Of Loops-in-Loops Is Lame

Note that due to this:

> * `OP_FOLD` must exist exactly once in a Tapscript.
>   More specifically, the `f` sub-SCRIPT of `OP_FOLD` must
>   not itself contain an `OP_FOLD`.

It is not possible to have a loop inside a loop.

The reason for this is that loops inside loops make it
difficult to perform static analysis to bound how much
processing a SCRIPT will require.
With a single, single-level loop, it is possible to
restrict the processing.

However, we should note that a single single-level loop
is actually sufficient to encode multiple loops, or
loops-within-loops.
For example, a toy Scheme-to-C compiler will convert
the Scheme code to CPS style, then convert all resulting
Scheme CPS functions into a `switch` dispatcher inside a
simple `while (1)` loop.

For example, the Scheme loop-in-loop below:

```Scheme
(define (foo)
  (bar)
  (foo))
(define (bar)
  (bar))
```

gets converted to:

```Scheme
(define (foo k)
  (bar (closure foo-kont k)))
(define (foo-kont k)
  (foo k))
(define (bar k)
  (bar k))
```

And then in C, it would look like:

```c
void all_scheme_functions(int func_id, scheme_t k) {
    while (1) {
        switch (func_id) {
        case FOO_ID:
            k = build_closure(FOO_KONT_ID, k);
            func_id = BAR_ID;
            break;
        case FOO_KONT_ID:
            func_id = FOO_ID;
            break;
        case BAR_ID:
            func_id = BAR_ID;
            break;
        }
    }
}
```

The C code, as we can see, is just a single single-level
loop, which is the restriction imposed on `OP_FOLD`.
Thus, loops-in-loops, and multiple loops, can be encoded
into a single single-level loop.

#### Everything Is Possible But Nothing Of Consequence Is Easy

On the other hand, just because it is *possible* does not
mean it is *easy*.

As an alternative, AJ proposed adding a field to the Taproot
annex.
This annex field is a number indicating the maximum number of
opcodes to be processed.
If execution of the SCRIPT exceeds this limit, validation
fails.

In order to make processing costly, the number indicated in
the annex field is directly added to the weight of the
transaction.

Then, during execution, if an `OP_FOLD` is parsed, the
`OP_` code processor keeps track of the number of opcodes
processed and imposes a limit.
If the count of processed opcodes exceeds the number indicated in the
annex field, validation fails.
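As a sketch of that accounting (field name, weight formula, and counting granularity are all assumptions for illustration):

```Haskell
-- Hypothetical annex field: the declared maximum number of opcodes this
-- input's SCRIPT may execute.
newtype MaxOps = MaxOps Int

-- The declared budget is added directly to the transaction weight, so a
-- large budget costs fees whether or not it is actually consumed.
weightWithBudget :: Int -> MaxOps -> Int
weightWithBudget baseWeight (MaxOps budget) = baseWeight + budget

-- During execution, charge each processed opcode against the budget and
-- fail validation as soon as the budget is exceeded.
chargeOpcode :: MaxOps -> Int -> Either String Int
chargeOpcode (MaxOps budget) executedSoFar
  | executedSoFar + 1 > budget = Left "annex max-operations budget exceeded"
  | otherwise                  = Right (executedSoFar + 1)
```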

This technique is safe even if the annex is not committed
to (for example if the SCRIPT does not ever require a
standard `OP_CHECKSIG`), even though in that case the
annex can be malleated:

* If the field is less than the actual number of operations,
  then the malleated transaction is rejected.
* If the field is greater than the actual number of
  operations, then it has a larger weight but pays the
  same fee, getting a lower feerate and thus will be
  rejected in favor of a transaction with a lower number
  in that field.

Use of this technique allows us to lift the above
restrictions on `OP_FOLD`, and allow multiple loops, as
well as loops-in-loops.

In particular, the requirement to put the `f` sub-SCRIPT
code as a static constant is due precisely to the need
for static analysis.
But if we instead use a dynamic limit like in this
alternative suggestion, we could instead get the `f`
sub-SCRIPT from the stack.
With additional operations like `OP_CAT`, it would
then be possible to do a "variable capture" where parts
of the loop body are from other computations, or from
witness, and then concatenated to some code.
This is not an increase in computational strength, since
the data could instead be passed in via the `z`, or as
individual items, but it does improve expressive power by
making it easier to customize the loop body.

On The Necessity Of `OP_FOLD`
-----------------------------

We can observe that an `if` construct is really a bounded
loop construct that can execute 0 or 1 times.

We can thus synthesize a bounded loop construct as follows:

    OP_IF
        <loop body>
    OP_ENDIF
    OP_IF
        <loop body>
    OP_ENDIF
    OP_IF
        <loop body>
    OP_ENDIF
    OP_IF
        <loop body>
    OP_ENDIF
    <repeat as many times as necessary>

Indeed, it may be possible for something like miniscript
to provide a `fold` jet that compiles down to something
like the above.
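As a sketch of what such a compile-time `fold` jet might emit (the opcode names are just string labels here, and the helper is hypothetical):

```Haskell
-- Emit `n` guarded copies of the loop body, mirroring the
-- OP_IF ... OP_ENDIF repetition shown above.
unrollFold :: Int -> [String] -> [String]
unrollFold n body = concat (replicate n (["OP_IF"] ++ body ++ ["OP_ENDIF"]))

-- unrollFold 2 ["<loop body>"]
--   ==  ["OP_IF","<loop body>","OP_ENDIF","OP_IF","<loop body>","OP_ENDIF"]
```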

Thus:

* The restrictions we impose in the previous section mean
  that `OP_FOLD` cannot do anything that cannot already
  be done with current SCRIPT.
  * This is a *good thing* because this means we are not
    increasing the attack surface.
* Using the annex-max-operations technique is strictly
  more lenient than the above `OP_IF` repetition, thus
  there may be novel DoS attack vectors due to the
  increased attack surface.
  * However, fundamentally the DoS attack vector is that
    peers can waste your CPU by giving you invalid
    transactions (i.e. giving a high max-operations, but
    looping so much that it gets even *above* that), and
    that can already be mitigated by lowering peer scores
    and prioritizing transactions with lower or nonexistent
    annex-max-operations.
    The DoS vector here does not propagate due to the
    invalid transaction being rejected at this node.

Of course, this leads us to question: why even implement
`OP_FOLD` at all?

We can observe that, while the restrictions in the
previous section imply that a SCRIPT with `OP_FOLD`
cannot exceed the amount of processing that a SCRIPT
*without* `OP_FOLD` does, a SCRIPT with `OP_FOLD`
would be shorter, over the wire, than the above
unrolled version.

And CPU processing is not the only resource that is
consumed by Bitcoin fullnodes.
Bandwidth is another such resource.

In effect, `OP_FOLD` allows us to compress the above
template over-the-wire, reducing network bandwidth
consumption.
But the restrictions on `OP_FOLD` ensure that it
cannot exceed the CPU consumption of a SCRIPT that
predates `OP_FOLD`.

Thus, `OP_FOLD` is still worthwhile to implement, as
it allows us to improve bandwidth consumption without
increasing CPU consumption significantly.

On Generalized Operations
-------------------------

I believe there are at least two ways of thinking about
how to extend SCRIPT:

* We should provide more *general* operations.
  Users should then combine those operations to their
  specific needs.
* We should provide operations that *do more*.
  Users should identify their most important needs so
  we can implement them on the blockchain layer.

Each side has its arguments:

* General opcodes:
  * Pro: Have a better chance of being reused for
    use-cases we cannot imagine yet today.
    i.e. implement once, use anywhen.
  * Con: Welcome to the Tarpit, where everything is
    possible but nothing important is easy.
* Complex opcodes:
  * Pro: Complex behavior implemented directly in
    the hosting language, reducing interpretation
    overhead (and allowing us to ensure a secure
    implementation).
  * Con: Welcome to the Nursery, where only safe
    toys exist and the availability of interesting tools
    is at the mercy of your parents.

It seems to me that this really hits a No Free Lunch
Theorem for Bitcoin SCRIPT design.
Briefly, the No Free Lunch Theorem points out that
there is no compiler design that can compile any
program to the shortest possible machine code.
This is because if a program enters an infinite loop,
it could simply be compiled down to the equivalent of
the single instruction `1: GOTO 1`, but the halting
problem implies that no program can take the source
code of another program and determine if it halts.
Thus, no compiler can exist which can compile *every*
infinite-loop program down to the tiniest possible
binary `1: GOTO 1`.

More generally, No Free Lunch implies that as you
optimize, you will hit a point where you can only
*trade off*, and you optimize for one use case while
making *another* use case less optimal.

Brought to Bitcoin SCRIPT design, there is no optimal
SCRIPT design; instead, there will be some point where
we have to pick and choose which uses to optimize for
and which uses are less optimal, i.e. trade off.

So I think maybe the Real Question is: why should we
go for one versus the other, and which uses do we
expect to see more often anyway?

Addenda
-------

Stuff about totality and partiality:

* [Total Functional Programming, D.A. Turner](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.364&rep=rep1&type=pdf)
* [Totality](https://kowainik.github.io/posts/totality)