Date: Tue, 22 Mar 2022 05:37:03 +0000
To: bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
From: ZmnSCPxj <ZmnSCPxj@protonmail.com>
Reply-To: ZmnSCPxj <ZmnSCPxj@protonmail.com>
Subject: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

Good morning list,

It is entirely possible that I have gotten into the deep end and am now drowning in insanity, but here goes....

Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

Introduction
============

Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
    we can softfork in new opcodes that compress sections of
    programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments on the "general" side of
    the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just provide a very general language and
      never softfork in any other change ever again.
2.  One of the most powerful arguments on the "general" side of
    the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just skip over the initial very general
      language and individually activate small, specific
      constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same general idea as above (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
----------------------------------------------

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
    instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
    addressing modes, fixed instruction length, few
    instructions.

In CISC, the microprocessor provides very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set is complicated, and often requires
multiple specific circuits for each application-specific
instruction.
Instructions have varying sizes and take varying numbers of cycles.

In RISC, the microprocessor provides fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provides large register banks which can be
used very generically and interchangeably.
Instructions have the same size and every instruction takes a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
to scroll over multiple pages of instructions without
becoming even more insane), or else you might forget about
stuff like jump slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying numbers of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse, the simplicity of RISC meant that smaller and less
experienced teams could produce viable competitors to the
Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might be variable-length and take a variable
number of cycles, but the emitted RISC instructions were
individually fixed-length and took a fixed number of cycles.
A CISC instruction might be equivalent to a single RISC
instruction, or several.

With this technique, Intel could deliver performance
approaching their RISC-only competition, while retaining
back-compatibility with existing software written for their
classic CISC processors.

At its core, the Instruction Decoder was a table-driven
parser.
This lookup table could be stored in on-chip flash memory.
This had the advantage that the on-chip flash memory could be
updated in case of bugs in the implementation of CISC
instructions.
This on-chip flash memory was then termed "microcode".

Important advantages of this "microcode" technique were:

* Back-compatibility with existing instruction sets.
* Easier and more scalable underlying design due to ability
  to use RISC techniques while still supporting CISC instruction
  sets.
* Possible to fix bugs in implementations of complex CISC
  instructions by uploading new microcode.

(Obviously I have elided a bunch of stuff, but the above
rough sketch should be sufficient as introduction.)

Bitcoin Consensus Layer As Hardware
-----------------------------------

While Bitcoin fullnode implementations are software, because
of the need for consensus, this software is not actually very
"soft".
One can consider that, just as it would take a long time for
new hardware to be designed with a changed instruction set,
it is similarly taking a long time to change Bitcoin to
support changed feature sets.

Thus, we should really consider the Bitcoin consensus layer,
and its SCRIPT, as hardware that other Bitcoin software and
layers run on top of.

This opens up the thought of using techniques that have been
useful in hardware design.
Such as microcode: a translation layer from "old" instruction
sets to "new" instruction sets, with the ability to modify this
mapping.

Microcode For Bitcoin SCRIPT
============================

I propose:

* Define a generic, low-level language (the "RISC language").
* Define a mapping from a specific, high-level language to
  the above language (the microcode).
* Allow users to sacrifice Bitcoins to define a new microcode.
* Have users indicate the microcode they wish to use to
  interpret their Tapscripts.

As a concrete example, let us consider the current Bitcoin
SCRIPT as the "CISC" language.

We can then support a "RISC" language that is composed of
general instructions, such as arithmetic, SECP256K1 scalar
and point math, bytevector concatenation, sha256 midstates,
bytevector bit manipulation, transaction introspection, and
so on.
This "RISC" language would also be stack-based.
As the "RISC" language would have more possible opcodes,
we may need to use 2-byte opcodes for the "RISC" language
instead of 1-byte opcodes.
Let us call this "RISC" language the micro-opcode language.

Then, the "microcode" simply maps the existing Bitcoin
SCRIPT `OP_` codes to one or more `UOP_` micro-opcodes.

An interesting fact is that stack-based languages have
automatic referential transparency; that is, if I define
some new word in a stack-based language and use that word,
I can replace any use of the word with the text of its
definition, verbatim, without issue.
Compare this to a language like C, where macro authors
have to be very careful about inadvertent variable
capture, wrapping `do { ... } while(0)` to avoid problems
with `if` and multiple statements, multiple execution, and
so on.

Thus, a sequence of `OP_` opcodes can be mapped to a
sequence of equivalent `UOP_` micro-opcodes without
changing the interpretation of the source language, an
important property when considering such a "compiled"
language.

We start with a default microcode which is equivalent
to the current Bitcoin language.
When users want to define a new microcode to implement
new `OP_` codes or change existing `OP_` codes, they
can refer to a "base" microcode, and only have to
provide the new mappings.

A microcode is fundamentally just a mapping from an
`OP_` code to a variable-length sequence of `UOP_`
micro-opcodes.

```Haskell
import qualified Data.Map as Map
-- type Opcode
-- type UOpcode
newtype Microcode = Microcode (Map.Map Opcode [UOpcode])
```

Semantically, the SCRIPT interpreter processes `UOP_`
micro-opcodes.

```Haskell
-- instance Monad Interpreter -- can `fail`.
interpreter :: Transaction -> TxInput -> [UOpcode] -> Interpreter ()
```
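
To make the above semantics concrete, here is a minimal sketch of such a
micro-opcode evaluation loop. It ignores the transaction context and the
`Interpreter` monad in the signature above, and the micro-opcode names
(`UOP_PUSH`, `UOP_CAT`, `UOP_FAIL`) are assumptions made purely for
illustration:

```Haskell
import Control.Monad (foldM)
import qualified Data.ByteString as BS

-- Hypothetical micro-opcodes, for illustration only.
data UOpcode = UOP_PUSH BS.ByteString | UOP_CAT | UOP_FAIL

type Stack = [BS.ByteString]

-- Execute one micro-opcode; Nothing means the SCRIPT fails.
step :: Stack -> UOpcode -> Maybe Stack
step stack          (UOP_PUSH bs) = Just (bs : stack)
step (a : b : rest) UOP_CAT       = Just (BS.append b a : rest)  -- second-from-top ++ top
step _              UOP_CAT       = Nothing                      -- stack underflow
step _              UOP_FAIL      = Nothing                      -- always fails

-- The interpreter is then just a fold over the micro-opcode sequence.
runUOps :: [UOpcode] -> Stack -> Maybe Stack
runUOps uops stack = foldM step stack uops
```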

Example
-------

Suppose a user wants to re-enable `OP_CAT`, and nothing
else.

That user creates a microcode, referring to the current
default Bitcoin SCRIPT microcode as the "base".
The base microcode defines `OP_CAT` as equal to the
sequence `UOP_FAIL`, i.e. a micro-opcode that always fails.
However, the new microcode will instead redefine the
`OP_CAT` as the micro-opcode sequence `UOP_CAT`.
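
As a self-contained sketch of what that looks like (the `Microcode` type is
repeated from above for completeness; the `OP_CAT`/`UOP_CAT`/`UOP_FAIL`
constructors and the `defaultMicrocode` value are assumptions for
illustration, the real opcode types being of course much larger):

```Haskell
import qualified Data.Map as Map

data Opcode  = OP_CAT {- plus the rest of the OP_ codes -}
  deriving (Eq, Ord)
data UOpcode = UOP_CAT | UOP_FAIL {- plus the rest of the UOP_ codes -}
newtype Microcode = Microcode (Map.Map Opcode [UOpcode])

-- In the base (default) microcode, OP_CAT always fails.
defaultMicrocode :: Microcode
defaultMicrocode = Microcode (Map.fromList [(OP_CAT, [UOP_FAIL])])

-- Derive a new microcode from a base, overriding only the changed entries.
-- Map.union is left-biased, so the delta shadows the base.
deriveMicrocode :: Microcode -> Map.Map Opcode [UOpcode] -> Microcode
deriveMicrocode (Microcode base) delta = Microcode (Map.union delta base)

-- The user's new microcode: OP_CAT now expands to UOP_CAT.
opCatMicrocode :: Microcode
opCatMicrocode = deriveMicrocode defaultMicrocode
                   (Map.fromList [(OP_CAT, [UOP_CAT])])
```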

Microcodes then have a standard way of being represented
as a byte sequence.
The user serializes their new microcode as a byte
sequence.

Then, the user creates a new transaction where one of
the outputs contains, say, 1.0 Bitcoins (exact required
value TBD), and has the `scriptPubKey` of
`OP_TRUE OP_RETURN <serialized_microcode>`.
This output is a "microcode introduction output", which
is provably unspendable, thus burning the Bitcoins.

(It need not be a single user; multiple users can
coordinate by signing a single transaction that commits
their funds to the microcode introduction.)

Once the above transaction has been deeply confirmed,
the user can then take the hash of the microcode
serialization.
Then the user can use a SCRIPT with `OP_CAT` enabled,
by using a Tapscript with, say, version `0xce`, and
with the SCRIPT having the microcode hash as its first
bytes, followed by the `OP_` codes.

Fullnodes will then process recognized microcode
introduction outputs and store mappings from their
hashes to the microcodes in a new microcodes index.
Fullnodes can then process version-`0xce` Tapscripts
by checking if the microcodes index has the indicated
microcode hash.

Semantically, fullnodes take the SCRIPT and, for each
`OP_` code in it, expand it to a sequence of `UOP_`
micro-opcodes, then concatenate each such sequence.
Then, the SCRIPT interpreter operates over a sequence
of `UOP_` micro-opcodes.
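
A sketch of this lookup-and-expand step, assuming a hypothetical
`MicrocodeHash` type and an in-memory representation of the microcodes
index (a real index would live in the node's database):

```Haskell
import Data.ByteString (ByteString)
import qualified Data.Map as Map

type MicrocodeHash = ByteString                          -- hypothetical
type MicrocodeIndex op uop = Map.Map MicrocodeHash (Map.Map op [uop])

-- Look up the indicated microcode, expand each OP_ code to its UOP_
-- sequence, and concatenate; Nothing if the hash or an opcode is unknown.
expandScript :: Ord op
             => MicrocodeIndex op uop -> MicrocodeHash -> [op] -> Maybe [uop]
expandScript index hash ops = do
  table   <- Map.lookup hash index
  uopSeqs <- traverse (`Map.lookup` table) ops
  pure (concat uopSeqs)
```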

Optimizing Microcodes
---------------------

Suppose there is some new microcode that users have
published onchain.

We want to be able to execute the defined microcode
faster than expanding an `OP_`-code SCRIPT to a
`UOP_`-code SCRIPT and having an interpreter loop
over the `UOP_`-code SCRIPT.

We can use LLVM.

WARNING: LLVM might not be appropriate for
network-facing security-sensitive applications.
In particular, LLVM bugs, especially nondeterminism
bugs, can lead to consensus divergence and disastrous
chainsplits!
On the other hand, LLVM bugs are compiler bugs and
the same bugs can hit the static compiler `cc`, too,
since the same LLVM code runs in both JIT and static
compilation, so this risk already exists for Bitcoin.
(i.e. we already rely on LLVM not being buggy enough
to trigger Bitcoin consensus divergence, else we would
have written the Bitcoin Core SCRIPT interpreter in
assembly.)

Each `UOP_`-code has an equivalent tree of LLVM code.
For each `Opcode` in the microcode, we take its
sequence of `UOpcode`s and expand them to this tree,
concatenating the equivalent trees for each `UOpcode`
in the sequence.
Then we ask LLVM to JIT-compile this code to a new
function, running LLVM-provided optimizers.
Then we put a pointer to this compiled function into a
256-entry array of functions, where the array index is
the `OP_` code.

The SCRIPT interpreter then simply iterates over the
`OP_` code SCRIPT and calls each of the JIT-compiled
functions.
This reduces much of the overhead of the `UOP_` layer
and makes it approach the current performance of the
existing `OP_` interpreter.

For the default Bitcoin SCRIPT, the opcodes array
contains pointers to statically-compiled functions.
A microcode that is based on the default Bitcoin
SCRIPT copies this opcodes array, then overwrites
the entries.
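
A sketch of just this dispatch-table idea, with plain Haskell functions
standing in for the JIT-compiled code (the `Handler` and stack types are
assumptions for illustration):

```Haskell
import Control.Monad (foldM)
import Data.Array (Array, (!), (//))
import Data.ByteString (ByteString)

type Stack = [ByteString]
type Handler = Stack -> Maybe Stack       -- stands in for a JIT-compiled function
type DispatchTable = Array Int Handler    -- indexed by the OP_ code byte (0..255)

-- "Copies this opcodes array, then overwrites the entries":
-- derive a table from a base by overwriting only the redefined entries.
overrideHandlers :: DispatchTable -> [(Int, Handler)] -> DispatchTable
overrideHandlers = (//)

-- The interpreter loop: look up each OP_ code and call its compiled handler.
runScript :: DispatchTable -> [Int] -> Stack -> Maybe Stack
runScript table ops stack = foldM (\st op -> (table ! op) st) stack ops
```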

Future versions of Bitcoin Core can "bless"
particular microcodes by providing statically-compiled
functions for those microcodes.
This leads to even better performance (there is
no need to recompile ancient onchain microcodes each
time Bitcoin Core starts) without any consensus
divergence.
It is a pure optimization and does not imply a
tightening of rules, and is thus not a softfork.

(To reduce the chance of network faults being used
to poke into `W|X` memory (since `W|X` memory is
needed in order to actually JIT compile) we can
isolate the SCRIPT interpreter into its own process
separate from the network-facing code.
This does imply additional overhead in serializing
transactions we want to ask the SCRIPT interpreter
to validate.)

Comparison To Jets
------------------

This technique allows users to define "jets", i.e.
sequences of low-level general operations that users
have determined are common enough they should just
be implemented as faster code that is executed
directly by the underlying hardware processor rather
than via a software interpreter.
Basically, each redefined `OP_` code is a jet of a
sequence of `UOP_` micro-opcodes.

We implement this by dynamically JIT-compiling the
proposed jets, as described above.
SCRIPTs using jetted code remain smaller, as the
jet definition is done in a previous transaction and
does not require copy-pasta (Do Not Repeat Yourself!).
At the same time, jettification is not tied to
developers, thus removing the need to keep softforking
new features --- we need only define a sufficiently
general language and then we can implement pretty much
anything worth implementing (and a bunch of other things
that should not be implemented, but hey, users gonna
use...).

Bugs in existing microcodes can be fixed by basing a
new microcode on the existing microcode, and
redefining the buggy implementation.
Existing Tapscripts need to be re-spent to point to
the new bugfixed microcode, but if you used the
point-spend branch as an N-of-N of all participants
you have an upgrade mechanism for free.

In order to ensure that the JIT-compilation of new
microcodes is not triggered trivially, we require
that users petitioning for the jettification of some
operations (i.e. introducing a new microcode) must
sacrifice Bitcoins.

Burning Bitcoins is better than increasing the weight
of microcode introduction outputs; all fullnodes are
affected by the need to JIT-compile the new microcode,
so they benefit from the reduction in supply, thus
getting compensated for the work of JIT-compiling the
new microcode.
Other mechanisms for making microcode introduction
outputs expensive are also possible.

Nothing really requires that we use a stack-based
language for this; any sufficiently functional language
should allow referential transparency.