Date: Sun, 27 Feb 2022 16:34:31 +0000
From: ZmnSCPxj
To: bitcoin-dev
Subject: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

`OP_FOLD`: A Looping Construct For Bitcoin SCRIPT
=================================================

(This writeup requires at least some programming background, which I expect most readers of this list have.)

Recently, some rando was ranting on the list about this weird crap called `OP_EVICT`, a poorly-thought-out attempt at covenants. In reaction to this, AJ Towns mailed me privately about some of his thoughts on this insane `OP_EVICT` proposal. He observed that we could generalize the `OP_EVICT` opcode by decomposing it into smaller parts, including an operation congruent to the Scheme/Haskell/Scala `map` operation.

As `OP_EVICT` effectively loops over the outputs passed to it, a looping construct can be used to implement `OP_EVICT` while retaining its nice property of cut-through of multiple evictions and reviving of the CoinPool.

More specifically, an advantage of `OP_EVICT` is that it allows checking multiple published promised outputs. This would be implemented in a loop.
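As a bit of intuition for that loop, here is an illustrative Haskell sketch, written as the fold that the rest of this writeup builds toward; the `checkPromisedOutput` predicate and the `output` type variable are hypothetical stand-ins, not anything from the `OP_EVICT` proposal itself:

```Haskell
-- Illustrative only: checking a batch of published promised outputs
-- is naturally a loop over them, accumulating whether every check
-- passed so far. `checkPromisedOutput` is a hypothetical predicate.
allPromisedOutputsOk :: (output -> Bool) -> [output] -> Bool
allPromisedOutputsOk checkPromisedOutput =
    foldr (\o ok -> checkPromisedOutput o && ok) True
```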
However, if we want to instead provide *general* operations in SCRIPT rather than a bunch of specific ones like `OP_EVICT`, we should consider how to implement looping so that we can implement `OP_EVICT` in a SCRIPT-with-general-opcodes.

(`OP_FOLD` is not sufficient to implement `OP_EVICT`; for efficiency, AJ Towns also pointed out that we need some way to expose batch validation to SCRIPT. There is a follow-up writeup to this one which describes *that* operation.)

Based on this, I started ranting as well about how `map` is really just a thin wrapper on `foldr` and the *real* looping construct is actually `foldr` (`foldr` is the whole FP Torah: all the rest is commentary). This is thus the genesis for this proposal, `OP_FOLD`.

A "fold" operation is sometimes known as "reduce" (and if you know about Google MapReduce, you might be familiar with "reduce"). Basically, a "fold" or "reduce" operation applies a function repeatedly (i.e. *loops*) on the contents of an input structure, creating a "sum" or "accumulation" of the contents.

For the purpose of building `map` out of `fold`, the accumulation can itself be an output structure. The `map` simply accumulates to the output structure by applying its given function to each element and concatenating the result to the current accumulation.

Digression: Programming Is Compression
--------------------------------------

Suppose you are a programmer and you are reading some source code. You wonder: "what will happen if I give this piece of code these particular inputs?"

In order to answer that, you would simulate the execution of the code in your head. In effect, you would generate a "trace" of basic operations (that do not include control structures). By then thinking about this linear trace of basic operations, you can figure out what the code does.

Now, let us recall two algorithms from the compression literature:

1.  Run-length Encoding
2.  Lempel-Ziv 1977

Suppose our flat linear trace of basic operations contains something like this:

    OP_ONE OP_TWO OP_ONE OP_TWO OP_ONE OP_TWO

If we had looping constructs in our language, we could write the above trace as something like:

    for N = 1 to 3
        OP_ONE OP_TWO

The above is really Run-length Encoding. (`if` is just a loop that executes 0 or 1 times.)

Similarly, suppose you have some operations that are commonly repeated, but not necessarily next to each other:

    OP_ONE OP_TWO OP_THREE OP_ONE OP_TWO OP_FOUR OP_FIVE OP_ONE OP_TWO

If we had functions/subroutines/procedures in our language, we could write the above trace as something like:

    function foo()
        OP_ONE OP_TWO
    foo() OP_THREE foo() OP_FOUR OP_FIVE foo()

That is, functions are just Lempel-Ziv 1977 encoding, where we "copy" some repeated data from a previously-shown part of the data.

Thus, we can argue that programming is really a process of:

* Imagining what we want the machine to do given some particular input.
* Compressing that list of operations so we can more easily transfer the above imagined list over your puny low-bandwidth brain-computer interface.
  * I mean seriously, you humans still use a frikkin set of *mechanical* levers to transfer data into a matrix of buttons? (you don't even make the levers out of reliable metal, you use calcium of all things?? You get what, 5 or 6 bytes per second???) And your eyes are high-bandwidth but you then have this complicated circuitry (that has to be ***trained for several years*** WTF) to extract ***tiny*** amounts of ASCII text from that high-bandwidth input stream???? Evolve faster!
(Just to be clear, I am actually also a human being and definitely am not a piece of circuitry connected directly to the Internet, and I am not artificially limiting my output bandwidth so as not to overwhelm you mere humans.)

See also "Kolmogorov complexity".

This becomes relevant because the *actual* amount of processing done by the machine, when given a compressed set of operations (a "program"), is the cost of decompressing that program plus the number of basic operations in the decompressed result.

In particular, in current Bitcoin, without any looping constructs (i.e. implementations of RLE) or reusable functions (i.e. implementations of LZ77), the length of the SCRIPT can be used as an approximation of how "heavy" the computation needed to *execute* that SCRIPT is. This matters because the amount of computation a SCRIPT would trigger is central to our reasoning about DoS attacks on Bitcoin. In fact, current Bitcoin restricts the size of SCRIPT, as this serves to impose a restriction on the amount of processing a SCRIPT will trigger.

But adding a loop construct to SCRIPT changes how we should determine the cost of a SCRIPT, and thus we should think about it here as well.

Folds
-----

A fold operation is a functional programming looping control construct. The intent of a fold operation is to process elements of an input list or other structure, one element at a time, and to accumulate the results of processing.

It is given these arguments:

* `f` - a function to execute for each element of the input structure, i.e. the "loop body".
  * This function accepts two arguments:
    1.  The current element to process.
    2.  The intermediate result for accumulating.
  * The function returns the new accumulated result, processed from the given intermediate result and the given element.
* `z` - an initial value for the accumulated result.
* `as` - the structure (usually a list) to process.

```Haskell
-- If the input structure is empty, return the starting
-- accumulated value.
foldr f z []     = z
-- Otherwise, recurse into the structure to accumulate
-- the rest of the list, then pass the accumulation to
-- the given function together with the current element.
foldr f z (a:as) = f a (foldr f z as)
```

As an example, if you want to take the sum of a list of numbers, your `f` would simply add its inputs, and your `z` would be 0. Then you would pass in the actual list of numbers as `as`.

Fold has an important property:

* If the given input structure is finite *and* the application of `f` terminates, then `foldr` terminates.

This is important for us, as we do not want attackers to be able to crash nodes remotely by crafting a special SCRIPT.

As long as the SCRIPT language is "total", we know that programs written in that language must terminate. (The reason this property is called "total" is that we can "totally" analyze programs in the language, without having to invoke "this is undefined behavior because it could hang indefinitely". If you have to admit such kinds of undefined behavior --- what FP theorists call "bottom" or `_|_` or `⊥` (it looks like an ass crack, i.e. "bottom") --- then your language is "partial", since programs in it may enter an infinite loop on particular inputs.)

The simplest way to ensure totality is to be so simple as to have no looping constructs at all. As of this writing, Bitcoin SCRIPT is total by this technique. To give a *little* more power, we can allow bounded loops, which are restricted to executing only a bounded number of times.
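To make the sum example concrete (nothing SCRIPT-specific here, just standard Haskell expanded by hand using the two `foldr` equations above):

```Haskell
-- Summing a list with foldr: `f` is (+) and `z` is 0.
sumList :: [Integer] -> Integer
sumList = foldr (+) 0

-- Expanding sumList [1, 2, 3] using the two foldr equations:
--   foldr (+) 0 [1, 2, 3]
--     = 1 + foldr (+) 0 [2, 3]
--     = 1 + (2 + foldr (+) 0 [3])
--     = 1 + (2 + (3 + foldr (+) 0 []))
--     = 1 + (2 + (3 + 0))
--     = 6
-- The recursion bottoms out after exactly as many steps as the list
-- has elements, which is why a finite input (plus a terminating `f`)
-- guarantees termination: the "loop" is bounded by the input size.
```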
`foldr` is a kind of bounded loop, provided the input structure is finite. If the rest of the language does not admit the possibility of infinite data structures (and if the language is otherwise total and does not support generalized codata, this holds), then `foldr` is a bounded loop.

Thus, adding a fold operation to Bitcoin SCRIPT should be safe (and preserves totality) as long as we do not add further operations that admit partiality.

`OP_FOLD`
---------

With this, let us now describe `OP_FOLD`.

`OP_FOLD` replaces an `OP_SUCCESS` code, and thus is only usable in SegWit v1 ("Taproot").

An `OP_FOLD` opcode must be followed by an `OP_PUSH` opcode which contains an encoding of the SCRIPT that will be executed, i.e. the loop body, or `f`. This is checked at parsing time, and the sub-SCRIPT is also parsed at this time. The succeeding `OP_PUSH` is not actually executed, and is considered part of the `OP_FOLD` operation encoding. Parsing failure of the sub-SCRIPT leads to validation failure.

On execution, `OP_FOLD` expects the stack:

* Stack top: `z`, the initial value for the loop accumulator.
* Stack top + 1: `n`, the number of times to loop. This should be limited in size, and less than the number of items on the stack minus 2.
* Stack top + 2 + (0 to `n - 1`): Items to loop over. If `n` is 0, there are no items to loop over.

If `n` is 0, then `OP_FOLD` just pops the top two stack items and pushes `z`. For `n > 0`, `OP_FOLD` executes a loop (a Haskell sketch of these semantics appears below, after the restrictions):

* Pop off the top two items and store them in mutable variable `z` and immutable variable `n`.
* For `i = 0 to n - 1`:
  * Create a fresh, empty stack and alt stack. Call these the "per-iteration (alt) stack".
  * Push the current `z` to the per-iteration stack.
  * Pop off an item from the outer stack and put it into immutable variable `a`.
  * Push `a` to the per-iteration stack.
  * Run the sub-SCRIPT `f` on the per-iteration stack and alt stack.
  * Check that the per-iteration stack has exactly one item and the per-iteration alt stack is empty.
  * Pop off the item in the per-iteration stack and mutate `z` to it.
  * Free the per-iteration stack and per-iteration alt stack.
* Push `z` on the stack.

Restricting `OP_FOLD`
---------------------

Bitcoin restricts SCRIPT size, since SCRIPT size serves as an approximation of how much processing is required to execute the SCRIPT. However, with looping constructs like `OP_FOLD`, this no longer holds, as the amount of processing is no longer linear in the size of the SCRIPT.

In order to retain this limit (and thus not worsen any potential DoS attacks via SCRIPT), we should restrict the use of `OP_FOLD`:

* `OP_FOLD` must exist exactly once in a Tapscript. More specifically, the `f` sub-SCRIPT of `OP_FOLD` must not itself contain an `OP_FOLD`.
  * If we allowed loops within loops, then the worst case would be `O(c^n)` CPU time, where `c` is a constant and `n` is the SCRIPT length.
  * This validation is done at SCRIPT parsing time.
* We take the length of the `f` sub-SCRIPT, and divide the current SCRIPT maximum size by the length of the `f` sub-SCRIPT. The result, rounded down, is the maximum allowed value for the on-stack argument `n`.
  * In particular, since the length of the entire SCRIPT must by necessity be larger than the length of the `f` sub-SCRIPT, the result of the division must be at least 1.
  * This validation is done at SCRIPT execution time.

The above two restrictions imply that the maximum amount of processing that a SCRIPT utilizing `OP_FOLD` will use shall not exceed that of a SCRIPT without `OP_FOLD`.
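To make the above concrete, here is a minimal, illustrative Haskell sketch of both the per-iteration semantics and the execution-time bound on `n`. It is not a specification: stack items are left abstract, the sub-SCRIPT `f` is modelled as a pure function of the two items pushed onto the per-iteration stack, and names such as `opFold`, `maxIterations`, `maxScriptSize`, and `subScriptLen` are made up for this sketch.

```Haskell
-- Execution-time bound on `n`: the Tapscript maximum size divided by
-- the length of the `f` sub-SCRIPT, rounded down (assumed inputs).
maxIterations :: Integer -> Integer -> Integer
maxIterations maxScriptSize subScriptLen =
    maxScriptSize `div` subScriptLen

-- opFold f n z stack: pops `n` items off the outer stack, runs `f`
-- once per item with the current accumulator (mirroring the fresh
-- per-iteration stack that holds only the element and `z`), and
-- returns the final accumulator plus the rest of the outer stack.
-- Nothing models validation failure (not enough items to loop over).
opFold :: (item -> item -> item) -> Integer -> item -> [item]
       -> Maybe (item, [item])
opFold _ 0 z stack      = Just (z, stack)
opFold f n z (a : rest) = opFold f (n - 1) (f a z) rest
opFold _ _ _ []         = Nothing
```

Just as with the `foldr` equations earlier, termination follows from `n` and the outer stack both being finite, and the `maxIterations` bound keeps total work within what an un-looped SCRIPT of maximum size could already do.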
Thus, `OP_FOLD` does not increase the attack surface of SCRIPT on fullnodes.

### Lack Of Loops-in-Loops Is Lame

Note that due to this:

> * `OP_FOLD` must exist exactly once in a Tapscript.
>   More specifically, the `f` sub-SCRIPT of `OP_FOLD` must
>   not itself contain an `OP_FOLD`.

it is not possible to have a loop inside a loop.

The reason for this is that loops inside loops make it difficult to perform static analysis to bound how much processing a SCRIPT will require. With a single, single-level loop, it is possible to restrict the processing.

However, we should note that a single single-level loop is actually sufficient to encode multiple loops, or loops-within-loops. For example, a toy Scheme-to-C compiler will convert the Scheme code to CPS style, then convert all the resulting Scheme CPS functions into a `switch` dispatcher inside a simple `while (1)` loop.

For example, the Scheme loop-in-loop below:

```Scheme
(define (foo) (bar) (foo))
(define (bar) (bar))
```

gets converted to:

```Scheme
(define (foo k) (bar (closure foo-kont k)))
(define (foo-kont k) (foo k))
(define (bar k) (bar k))
```

And then in C, would look like:

```c
void all_scheme_functions(int func_id, scheme_t k)
{
    while (1) {
        switch (func_id) {
        case FOO_ID:
            k = build_closure(FOO_KONT_ID, k);
            func_id = BAR_ID;
            break;
        case FOO_KONT_ID:
            func_id = FOO_ID;
            break;
        case BAR_ID:
            func_id = BAR_ID;
            break;
        }
    }
}
```

The C code, as we can see, is just a single single-level loop, which is the restriction imposed on `OP_FOLD`. Thus, loops-in-loops, and multiple loops, can be encoded into a single single-level loop.

#### Everything Is Possible But Nothing Of Consequence Is Easy

On the other hand, just because it is *possible* does not mean it is *easy*.

As an alternative, AJ proposed adding a field to the Taproot annex. This annex field is a number indicating the maximum number of opcodes to be processed. If execution of the SCRIPT exceeds this limit, validation fails. In order to make processing costly, the number indicated in the annex field is directly added to the weight of the transaction.

Then, during execution, if an `OP_FOLD` is parsed, the `OP_` code processor keeps track of the number of opcodes processed. If the number of opcodes processed exceeds the limit indicated in the annex field, validation fails.

This technique is safe even if the annex is not committed to (for example if the SCRIPT does not ever require a standard `OP_CHECKSIG`), even though in that case the annex can be malleated:

* If the field is less than the actual number of operations, then the malleated transaction is rejected.
* If the field is greater than the actual number of operations, then it has a larger weight but pays the same fee, getting a lower feerate, and thus will be rejected in favor of a transaction with a lower number in that field.

Use of this technique allows us to lift the above restrictions on `OP_FOLD`, and allow multiple loops, as well as loops-in-loops.

In particular, the requirement to put the `f` sub-SCRIPT code as a static constant is due precisely to the need for static analysis. But if we instead use a dynamic limit like in this alternative suggestion, we could instead get the `f` sub-SCRIPT from the stack. With additional operations like `OP_CAT`, it would then be possible to do a "variable capture" where parts of the loop body are taken from other computations, or from the witness, and then concatenated to some code.
This is not an increase in computational strength, since the data could instead be passed in via the `z`, or as individual items, but it does improve expressive power by making it easier to customize the loop body.

On The Necessity Of `OP_FOLD`
-----------------------------

We can observe that an `if` construct is really a bounded loop construct that can execute 0 or 1 times. We can thus synthesize a bounded loop construct by repeating the loop body inside successive `if`s:

    OP_IF <loop body> OP_ENDIF
    OP_IF <loop body> OP_ENDIF
    OP_IF <loop body> OP_ENDIF
    OP_IF <loop body> OP_ENDIF

Indeed, it may be possible for something like miniscript to provide a `fold` jet that compiles down to something like the above.

Thus:

* The restrictions we impose in the previous section mean that `OP_FOLD` cannot do anything that cannot already be done with current SCRIPT.
  * This is a *good thing*, because this means we are not increasing the attack surface.
* Using the annex-max-operations technique is strictly more lenient than the above `OP_IF` repetition, thus there may be novel DoS attack vectors due to the increased attack area.
  * However, fundamentally the DoS attack vector is that peers can waste your CPU by giving you invalid transactions (i.e. giving a high max-operations, but looping so much that it gets even *above* that), and that can already be mitigated by lowering peer scores and prioritizing transactions with lower or nonexistent annex-max-operations. The DoS vector here does not propagate, since the invalid transaction is rejected at this node.

Of course, this leads us to question: why even implement `OP_FOLD` at all?

We can observe that, while the restrictions in the previous section imply that a SCRIPT with `OP_FOLD` cannot exceed the amount of processing that a SCRIPT *without* `OP_FOLD` does, a SCRIPT with `OP_FOLD` would be shorter, over the wire, than the above unrolled version.

And CPU processing is not the only resource that is consumed by Bitcoin fullnodes. Bandwidth is another such resource.

In effect, `OP_FOLD` allows us to compress the above template over-the-wire, reducing network bandwidth consumption. But the restrictions on `OP_FOLD` ensure that it cannot exceed the CPU consumption of a SCRIPT that predates `OP_FOLD`. Thus, `OP_FOLD` is still worthwhile to implement, as it allows us to improve bandwidth consumption without increasing CPU consumption significantly.

On Generalized Operations
-------------------------

I believe there are at least two ways of thinking about how to extend SCRIPT:

* We should provide more *general* operations. Users should then combine those operations to suit their specific needs.
* We should provide operations that *do more*. Users should identify their most important needs so we can implement them on the blockchain layer.

Each side has its arguments:

* General opcodes:
  * Pro: Have a better chance of being reused for use-cases we cannot imagine yet today, i.e. implement once, use anywhen.
  * Con: Welcome to the Tarpit, where everything is possible but nothing important is easy.
* Complex opcodes:
  * Pro: Complex behavior implemented directly in the hosting language, reducing interpretation overhead (and allowing us to ensure a secure implementation).
  * Con: Welcome to the Nursery, where only safe toys exist and the availability of interesting tools is at the mercy of your parents.

It seems to me that this really hits a No Free Lunch Theorem for Bitcoin SCRIPT design. Briefly, the No Free Lunch Theorem points out that there is no compiler design that can compile any program to the shortest possible machine code.
This is because if a program enters an infinite loop, it could simply be compiled down to the equivalent of the single instruction `1: GOTO 1`, but the halting problem implies that no program can take the source code of another program and determine whether it halts. Thus, no compiler can exist which can compile *every* infinite-loop program down to the tiniest possible binary `1: GOTO 1`.

More generally, No Free Lunch implies that as you optimize, you will hit a point where you can only *trade off*: you optimize for one use case while making *another* use case less optimal.

Brought to Bitcoin SCRIPT design, there is no optimal SCRIPT design; instead, there will be some point where we have to pick and choose which uses to optimize for and which uses are less optimal, i.e. trade off.

So I think maybe the Real Question is: why should we go for one versus the other, and which uses do we expect to see more often anyway?

Addenda
-------

Stuff about totality and partiality:

* [Total Functional Programming, D.A. Turner](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.364&rep=rep1&type=pdf)
* [Totality](https://kowainik.github.io/posts/totality)