Bulletproofs

Andrew Poelstra (andytoshi)

2018-02-02

49th Milan Bitcoin meetup

video: https://www.pscp.tv/w/1mnxerNaNkLKX

https://twitter.com/kanzure/status/962326126969442304

https://www.reddit.com/r/Bitcoin/comments/7w72pq/bulletproofs_presentation_at_feb_2_milan_meetup/

Introduction

Alright. Thank you for that introduction, Giacomo. I am here to talk about bulletproofs. This is different from proof-of-bullets, which is what fiat currencies use. Bulletproofs are a zero-knowledge proof system which has nothing to do with consensus at all, but which has a lot of exciting applications. The original motivation for creating bulletproofs was range proofs, which are used in confidential transactions. I'll go over what those transactions are and why we care about them.

Confidential transactions

Confidential transactions are a cryptosystem designed primarily by Greg Maxwell in 2016. Confidential transactions are a way of creating bitcoin-like transactions where the amounts of BTC on the inputs and the outputs are completely hidden. This is done using something called a Pedersen commitment, which is a homomorphic cryptographic commitment. The word commitment means that it binds you to the value: you can't change the value once you commit to it. And the word homomorphic means that you can add commitments up.
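
To make the homomorphic property concrete, here is a minimal sketch of Pedersen commitments in Python, using a toy multiplicative group with made-up small parameters. Real confidential transactions use secp256k1 curve points; these numbers are for illustration only:

```python
# Toy Pedersen commitments: commit(x, r) = g^x * h^r (mod p).
# Parameters are tiny illustrative values, NOT secure ones.
p = 23   # small prime; the squares mod 23 form a subgroup of prime order q = 11
q = 11   # order of the subgroup; amounts and blinding factors live mod q
g = 4    # generator of the subgroup
h = 9    # second generator; in practice h must be chosen so that nobody
         # knows its discrete log relative to g (a nothing-up-my-sleeve point)

def commit(amount, blinding):
    return (pow(g, amount % q, p) * pow(h, blinding % q, p)) % p

# Homomorphism: the product of two commitments is a commitment to the
# sum of the amounts (under the sum of the blinding factors).
c1 = commit(3, 5)
c2 = commit(4, 2)
assert (c1 * c2) % p == commit(3 + 4, 5 + 2)
```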

Verifiers of the blockchain can take all of the commitments on the inputs of the transaction and add those up, to get a new commitment to however much is going into the transaction, and then they take all of the commitments for the outputs of the transaction and add those up to get a new commitment for the outputs. The verifiers only have to check whether the input amount is equal to the output amount. This is how they learn that the transaction balances, that it doesn't create or destroy anything, even though they don't know anything about the actual amounts involved. But these Pedersen commitments represent amounts as integers modulo some large prime, rather than as ordinary numbers, the way that actual money works. If you have, say, 2^256 coins represented in this way, then that is equivalent to having nearly zero coins: there's a wraparound that happens at this large number near 2^256. So you can effectively represent negative values: take something very close to the prime and something that is just a small number, add them together, and they wrap around to zero. In more conventional terms, you make a transaction with no inputs, and you make an output with negative one coins, which is the prime minus one, and an output with positive one coin. Negative one plus one is zero, so inputs equal outputs and everything looks good. However, if I were to do something like this, I would have created coins out of thin air, because I didn't put any coins into the transaction but I got a bitcoin out, plus a negative bitcoin which I'll just throw away. To prevent this situation, we use something called a range proof.
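
Continuing the toy-parameter sketch from above, here is what that wraparound exploit looks like in code; with group order q, a commitment to q - 1 behaves exactly like a commitment to negative one:

```python
# Same toy parameters as above (illustrative only).
p, q, g, h = 23, 11, 4, 9

def commit(amount, blinding):
    return (pow(g, amount % q, p) * pow(h, blinding % q, p)) % p

# A transaction with no inputs and two outputs: "negative one" (q - 1)
# and positive one. The outputs sum to a commitment to zero, so the
# balance check passes even though a coin was created from thin air.
out_neg = commit(q - 1, 3)
out_pos = commit(1, -3)          # blinding factors must also cancel
assert (out_neg * out_pos) % p == commit(0, 0)
```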

Rangeproofs

A rangeproof prevents any numbers from coming anywhere near the magnitude of that large prime. They can't act like negative numbers, they can't overflow, and they can't do anything else that's funny. But, as we will see, the rangeproofs that we used in the initial implementation of confidential transactions were not as efficient as they would need to be for a serious system on the scale that bitcoin is, at least if we want to continue to allow people to join the system without spending thousands of dollars on validation hardware.

The original rangeproofs that we developed were, at the time, in 2016, the best rangeproofs that existed for these small amounts, like zero to 2^64. But they still weren't very fast. Here are the numbers. They were about 80 bytes per bit, so roughly 5 kilobytes per 64-bit proof. They would take 4, 5, 6 milliseconds to verify one of these rangeproofs. The way that these original rangeproofs worked was conceptually very simple. We would take an amount that was hopefully between 0 and 2^64, and to prove that it was actually between these values, we would split it into bits. Say it's a 6-bit number: it's easy to see that it's a 6-bit number if you can split it into 6 bits. You provide 6 commitments to the bits, one commitment per bit, and you prove that each one of them is a bit. You do this with something called a ring signature, which is a cryptographic tool that allows you to sign with one of a number of public keys. The way these bits are represented is such that either the commitment to the bit is a public key, or the commitment to the bit minus some random curve point is a public key; if one of these two things is true then (a) you have a bit and (b) you can produce a ring signature. One or both of these things is always true for a valid bit, and as a result you can produce a ring signature as a proof of bit. You need a separate ring signature for every single bit, and we can combine them with something called Borromean ring signatures, but ultimately you still have a separate signature on every bit, giving you 80 bytes per bit. The more data you want to prove, the more data you need, and the more work the verifier has to do. It works, but it's not as good as it could be.
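
Here is a sketch of the bit-decomposition idea, in the same toy group as before. It shows only the "commit to each bit and check that the weighted commitments add back up" part; the per-bit ring signatures proving each commitment holds a 0 or a 1 are the expensive piece, and are omitted:

```python
import random

# Toy parameters as before (illustrative only).
p, q, g, h = 23, 11, 4, 9

def commit(amount, blinding):
    return (pow(g, amount % q, p) * pow(h, blinding % q, p)) % p

def bit_commitments(amount, n_bits):
    """Commit to each bit of `amount` separately."""
    coms = []
    for i in range(n_bits):
        bit = (amount >> i) & 1
        r = random.randrange(q)
        coms.append((commit(bit, r), r))
    return coms

# "Prove" that 5 is a 3-bit number: commit to bits 1, 0, 1.
amount, n_bits = 5, 3
coms = bit_commitments(amount, n_bits)

# Verifier recombines: prod C_i^(2^i) must equal a commitment to the amount.
total, r_total = 1, 0
for i, (c, r) in enumerate(coms):
    total = (total * pow(c, 1 << i, p)) % p
    r_total += (1 << i) * r
assert total == commit(amount, r_total)
# Missing here: a ring signature per C_i proving it commits to 0 or 1,
# which is what made these proofs cost ~80 bytes per bit.
```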

Zero-knowledge arithmetic circuit proofs using only Pedersen commitments

http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/bulletproofs/

https://crypto.stanford.edu/bulletproofs

In 2016, Jonathan Bootle and others published a paper about doing zero-knowledge arithmetic circuit proofs using only Pedersen commitments and discrete logarithms and public keys, all these things that we already like to play with in bitcoin. Bootle found a way to do very general proofs, which is cool. In particular he did this using something called an "efficient inner product proof", where you have two lists of numbers, you multiply the numbers pairwise and add them up, and you prove that the result is some specified value. This was published in 2016. Over the next year or so, he and Benedikt Bünz and Dan Boneh, the latter two at Stanford, worked on extending this work into something called bulletproofs. They took the existing Bootle 2016 inner product argument and simplified it: the original Bootle paper allowed you to split a number into arbitrary factorizations, and then you would use various polynomials based on the sizes of the different factors, and it was quite an effort to understand. They simplified this to only work with powers of 2, and they also made it smaller, dropping it from six group elements per halving step down to two, a 3x space savings.

And in general, they found a way to do general zero-knowledge proofs, which is what Bootle did, but using inputs from Pedersen commitments. You can take a confidential transaction output, which is some homomorphically committed value that you are already adding up to prove that the transaction is valid, and you can prove arbitrary things about these committed values, without having to teach your zero-knowledge system how to deal with Pedersen commitments. To explain how difficult that would otherwise be: to work with Pedersen commitments at all, you have to do elliptic curve arithmetic; you have to add these various commitments together and compare the results, and each of these additions is some complicated fraction of polynomials. To do that inside a general zero-knowledge proof system, you would have to implement all of these fractions of polynomials in your zero-knowledge proof system, which is very expensive and slow. But in bulletproofs, Pedersen commitments work natively and there are no extra operations: you just take the Pedersen commitment, multiply it by some random scalar, and it fits into your proof just like any other secret input.

I'll expand on this a little bit more, but not too much more, because I really want to explain what the performance numbers are for rangeproofs, and how this generalizes, because there's a lot of really cool stuff you can do with bulletproofs. Even though this was developed as a way to do efficient rangeproofs, we can actually do arbitrary computations on committed values in confidential transaction outputs, which is very exciting. Not only that, but public keys can be thought of as Pedersen commitments in a way, and this means you can put public keys into bulletproofs and prove arbitrary things about secret keys without revealing anything about what they are. There's some cool stuff we can do there.

Bulletproofs size performance

All of this is based on the inner product argument that Bootle developed. This is a recursive argument. The way these inner product arguments are generated is that you have a list of data, you go through a number of iterations, and each time you iterate you halve the amount of data that you're working with. So if you're producing a proof on 64 bits of data, you go through one step of the proof, turning 64 into 32, and you generate 64 bytes of proof data; you do another step, and 32 becomes 16, becomes 8, becomes 4, becomes 2, becomes 1. After all the steps, you have a pile of data, but much less than before. You're taking the logarithm of 64, and your proof is proportional in size to that, rather than being proportional to the number of bits. This is already an exciting development, because the previous rangeproofs grew linearly as you increased the number of bits. These instead grow much more slowly: the more bits you add, the slower they grow.
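
As an illustration of the halving, here is a minimal sketch of one folding round of an inner-product argument, with all the commitments stripped out so that only the vector arithmetic remains. In the real protocol the challenge x comes from a hash (Fiat-Shamir), and the prover also sends two group elements, L and R, committing to the cross terms:

```python
# One folding round: vectors of length n become length n/2, and the
# inner product of the folded vectors is predictable from the old inner
# product plus two "cross terms" (what L and R commit to in the real proof).
q = 2**61 - 1  # a prime; stand-in for the group order

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def fold(v, x, x_inv):
    half = len(v) // 2
    return [(x * v[i] + x_inv * v[half + i]) % q for i in range(half)]

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
x = 12345                      # verifier's random challenge
x_inv = pow(x, -1, q)

c_left  = inner(a[:2], b[2:])  # cross term committed in L
c_right = inner(a[2:], b[:2])  # cross term committed in R

a2 = fold(a, x, x_inv)         # a' = x*a_lo + x^-1*a_hi
b2 = fold(b, x_inv, x)         # b' = x^-1*b_lo + x*b_hi

# The new, half-length inner product is determined by the old one plus
# the two cross terms, so the claim can be carried forward each round.
assert inner(a2, b2) == (inner(a, b)
                         + pow(x, 2, q) * c_left
                         + pow(x_inv, 2, q) * c_right) % q
```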

As a consequence of this, it's possible to aggregate these rangeproofs. Say you're creating a transaction with many outputs, like 10 outputs. Rather than creating multiple rangeproofs, one per output, each committing to a 64-bit number, you could instead create one aggregate rangeproof proving that all ten of them are 64 bits at once. Because of the logarithmic scaling, the resulting proof will barely be larger than the original proof. Specifically, multiplying the number of bits by 10 adds roughly 4 halving steps (2^4 = 16), at 64 bytes per step, so about 256 extra bytes. You can see from this chart, maybe you can't see in the back, the number of bytes it takes to produce rangeproofs; on the bottom is the number of rangeproofs proven. For most values, I'm stuck between 1 kb and 2 kb, even as the x-axis, how many ranges I'm proving, increases enormously, up to 1000 there. A transaction with 1000 outputs can be aggregated into a single rangeproof of about 2 kb, so that's about 2 bytes per rangeproof.
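
Here is a rough size model you can play with. The element counts follow the Bulletproofs paper's structure, two group elements per halving round plus a constant number of fixed elements, with 32-byte encodings assumed; the constant 9 is an approximation for illustration:

```python
import math

def bulletproof_size(bits_per_range=64, num_ranges=1):
    """Approximate aggregate rangeproof size in bytes.

    Two 32-byte group elements per halving round of the inner product
    argument, plus a handful of fixed group elements and scalars.
    """
    rounds = math.ceil(math.log2(bits_per_range * num_ranges))
    return 32 * (2 * rounds + 9)

for m in (1, 2, 10, 64, 1000):
    print(f"{m:5d} ranges: ~{bulletproof_size(64, m)} bytes")
# ~672 bytes for one 64-bit range, and still well under 2 kb
# for 1000 ranges aggregated into a single proof.
```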

Provisions solvency proof using bulletproofs

https://eprint.iacr.org/2015/1008

This might not be so realistic for a regular blockchain transaction that you would see in Elements or Monero or mimblewimble or Grin or whatever various things use confidential transactions in the wild. But you might see this in off-chain protocols, such as "Provisions", which is a proof-of-solvency mechanism. Provisions is a way for some bitcoin company, such as a bank or an exchange holding a bunch of bitcoins, to prove that they hold some certain amount of bitcoins, and that these are actual coins on the blockchain, without revealing how many coins they have or which coins specifically they are. The Provisions paper does this using confidential-transaction-style commitments to hide the amounts, and then a zero-knowledge proof to prove that the amounts are really on the blockchain. If Provisions were instead to use bulletproofs to aggregate all of the different rangeproofs on each of their outputs, then they would be able to reduce the size of their proofs from a number of gigabytes down to only a few kilobytes. Having said that, there's a reason why they might not want to aggregate, namely the verification time.

Bulletproofs verification time

There's a very cool property of bulletproofs, different from the old rangeproofs, and I think this is really the most exciting thing. The size numbers are very impressive; you can see that we're within about 2 kilobytes no matter the number of rangeproofs we're trying to aggregate. However, the really exciting part is what happens when you have many proofs and you're trying to verify them all at once. You would do this in a blockchain context when you receive a new transaction and you want to check the outputs, or especially if you're doing initial block download, when you're first syncing to the blockchain network and downloading a whole bunch of blocks you haven't seen yet, because you're just now joining the system or you were offline for a while. You see all these new blocks, all these new transactions, all these new rangeproofs, and you want to validate them all. What happens, though it's difficult to see on this graph, is that the very first proof you validate costs a fair bit of time, probably 2 or 3 milliseconds, which to be clear is already less than the old rangeproofs. But every additional rangeproof after that doesn't need to take the same time, using a trick called batch validation: you essentially combine the different rangeproofs after the fact, with a randomization step to prevent them from interacting with each other, and then verify them all at the same time in one shot. When you do that, you find that each additional rangeproof only costs a couple hundred microseconds to verify. We're down from 5-6 milliseconds to a couple hundred microseconds; this is already a 10x improvement for each one. But if you combine this with the aggregation feature, it gets better: the time it takes to batch-validate an aggregate rangeproof grows logarithmically with the size of the aggregate.
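
The randomization trick is generic: to check many equations at once, take a random linear combination of them, so that a forger can't arrange for errors in two proofs to cancel out. Here is a minimal sketch using toy Schnorr signatures in a multiplicative group mod p, with made-up small parameters, purely illustrative:

```python
import hashlib, random

# Toy group: order-q subgroup mod p. Illustrative parameters only.
p, q, g = 23, 11, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(sk, msg):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    return R, (k + H(R, msg) * sk) % q

def verify_one(P, msg, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(P, H(R, msg), p)) % p

def verify_batch(items):
    """items: list of (pubkey, msg, sig). One combined check instead of n.

    Each equation g^s == R * P^e is weighted by a random z; a batch
    containing an invalid proof passes only with probability ~1/q
    (negligible for a real group; the toy group here is tiny).
    """
    lhs_exp, rhs = 0, 1
    for P, msg, (R, s) in items:
        z = random.randrange(1, q)
        e = H(R, msg)
        lhs_exp = (lhs_exp + z * s) % q
        rhs = (rhs * pow((R * pow(P, e, p)) % p, z, p)) % p
    return pow(g, lhs_exp, p) == rhs

sk = 7
P = pow(g, sk, p)
batch = [(P, m, sign(sk, m)) for m in ("tx1", "tx2", "tx3")]
assert all(verify_one(P, m, s) for P, m, s in batch)
assert verify_batch(batch)
```

The point of the combined check is that the verifier does one big exponentiation pass instead of n independent ones, which is where the couple-hundred-microseconds marginal cost comes from.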

On the graph, the top line here is the validation time for the old rangeproofs, as a function of how many proofs you're validating: how quickly does the validation cost grow. And the very bottom line is the same thing, but for bulletproofs. Validating a whole bunch of individual bulletproofs at once grows much more slowly: something like 3 seconds for 500 of the old proofs, versus something like 100 milliseconds for bulletproofs. If you look at the next line up, it shows what happens if you're validating aggregates of 2 rangeproofs at once, two rangeproofs combined into one, which is much smaller than two separate rangeproofs. The following lines show aggregates of 4, 16, 32, 64, and so on. The next line, almost at the top, is how much it costs to validate batches where every proof is 64 ranges aggregated into a single bulletproof, and you can see that after validating 500 proofs it takes about half as much time as the old rangeproofs: half the time to do 64x more work. At these scales, like in a proof of solvency or even in a blockchain doing initial block download, this is over a 100x improvement, and it gets better at higher scales. This is really the exciting part of bulletproofs: the validation time grows so slowly.

That's just the story for rangeproofs. It's interesting, because at some point these validation times grow so slowly that individual rangeproofs within a batch start costing as little as 30-40 microseconds, which is significantly less time than it takes to validate individual signatures. If you're not batch-validating signatures, then the rangeproofs are going to be cheaper than the rest of the transaction, which is kind of cool.

Bulletproofs beyond rangeproofs

I am going to move on from rangeproofs to talk about more general things. The inner product argument is really what bulletproofs are: they are not just rangeproofs, they are a general way to prove knowledge of things that multiply into other things. And the class of problems you can describe as things multiplying into other things is pretty much all problems: any algorithm where you know the running time beforehand, such as anything that you might run on a blockchain. As my colleague Russell O'Connor would say, this is either a delta-zero language or a sigma-zero language, I can't remember which. What he'd be referring to is the fact that if you are validating a correct execution of some program, no matter how complicated or loopy or arbitrary that program might be, if all you're doing is validating a trace of its correct execution, then that is very limited in scope: it will always have a runtime you can predict beforehand, so it will always be something you could put in a blockchain, where you can at least bound the effort a verifier would have to expend to look at it. You can verify any program at all that can be expressed in such a language, that can be expressed as an arithmetic circuit.
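
To make "arithmetic circuit" concrete, here is a toy example of flattening a small program into multiplication and addition gates and checking a witness against it. This is only the circuit representation, not the zero-knowledge part, and the equation x^3 + x + 5 == 35 is a hypothetical stand-in for whatever a blockchain might want verified:

```python
# Flatten "x**3 + x + 5 == 35" into gates over a prime field.
q = 2**61 - 1  # stand-in for the group order

def check_witness(x):
    # Each gate relates wires already computed; a proof system would
    # verify these constraints without learning the wire values.
    w1 = (x * x) % q        # gate 1: w1 = x * x
    w2 = (w1 * x) % q       # gate 2: w2 = w1 * x   (so w2 = x^3)
    out = (w2 + x + 5) % q  # gate 3: linear combination of wires
    return out == 35

assert check_witness(3)        # 27 + 3 + 5 == 35
assert not check_witness(4)
```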

This is the principle behind all sorts of zero-knowledge systems that have been in the news for the past few years. SNARKs are a big one, and STARKs from Eli Ben-Sasson, and this other thing called ZKBoo which I don't think has gotten as much media attention as it should have. Bulletproofs join this family of distinct zero-knowledge proof systems. Let me compare them to some of these other systems.

SNARKs

I think SNARKs are the most famous zero-knowledge system, at least in the cryptocurrency space. Zcash uses SNARKs to create what they call "shielded transactions": transactions where, rather than showing a bunch of inputs and outputs and proving you're allowed to spend coins and create new coins, you provide a zero-knowledge proof that you're picking some coins that previously existed and putting them into your transaction. SNARKs are very cool: they have constant verification time. To validate a SNARK on a modern computer would probably take you about 5-6 milliseconds, and that doesn't grow no matter how complex the problem you're proving. You can prove the complex zcash thing and the cost to validators doesn't grow.

There are two big problems with SNARKs. The first is that the time it takes to prove something grows very quickly with the complexity of the problem being proved. I'll show you a comparison for proving sha256 in these different systems; it shows that not all problems are equally well solved by the different proof systems. The proving time increases to many minutes, maybe even 10 minutes, in the way that zcash is currently implemented. Much of the research these days is about finding more efficient hash functions, and other functions that can be proven in a reasonable amount of time, to create a system that people can actually use. Right now, barely anyone in zcash uses shielded transactions because they are so expensive to produce.

The other problem, sometimes much more serious in nature, is that SNARKs have a trusted setup. The parameters of the system must be decided in advance by some group of people who work together to build them, and in producing those parameters they need to compute some secret data ("toxic waste"), which is basically a set of secret keys that could in principle be used to create completely false proofs. If you have that secret knowledge, you can mint new coins. The way that zcash was set up, a group of 4-5 people all went to various places and did a multi-party computation together to produce the parameters of the system. If it were possible to get the secret data from these participants, then you could mint as much zcash as you want and nobody would be able to tell, even in principle: just the way the cryptography is set up, it's undetectable. That's unfortunate.

There are other systems out there that do not have this trusted setup requirement. There's STARKs and ZKBoo.

STARKs

https://cyber.stanford.edu/sites/default/files/elibensasson.pdf

STARKs are a new research effort by Eli Ben-Sasson's group in Israel. STARKs are exciting. They are an extension of something called the PCP theorem, a theorem from complexity theory which shows that it's possible in principle to create zero-knowledge proofs of things merely using hashes and randomization, without any weird cryptographic assumptions, without even the discrete logarithm assumption that is used in modern-day digital signatures. You don't need the discrete logarithm problem at all. You just need the assumption that it's possible for computers to generate random numbers, and that your hashes are secure. This is cool.

The original PCP theorem was a mathematical existence theorem: it gave a proof that this was possible in principle, if you were willing to compute for longer than the age of the universe, using more data storage than you could build from all the atoms in the universe. Which academics get very excited about, I don't know why.

STARKs take this from the realm of "literally impossible" to "merely impractical". It's possible to produce STARKs. They take many hours, they take several gigabytes of RAM, verification is not so fast, and the proof sizes are not so small, but they exist. I understand that you can validate STARKs inside of STARKs, and that's faster than just validating the first STARK directly. Maybe if you keep STARKing enough times then it becomes efficient to verify? I don't know, I haven't explored this as much as I would like to. Regardless of how small you can get the verification time, proving is still very time-consuming and memory-consuming. This is not yet a practical system.

zk-boo

On the other hand, ZKBoo is somewhat practical. Of all the zero-knowledge systems I'm aware of, this was the first one that was practical, that you could use for proving ordinary things. In ZKBoo, you take a boolean circuit, something that people are probably more familiar with, just a bunch of logic gates, and you can produce a zero-knowledge proof using hashes and symmetric cryptography and random selection. It has similar security assumptions to STARKs; it's not just hashes and random numbers, you also need some encryption and so on. These proofs are very fast to generate and fast to verify, but they are a bit large: hundreds of kilobytes, maybe even megabytes, for non-trivial circuits. They do not have native support for Pedersen commitments, so if you wanted to use them for confidential transactions, or in a blockchain context where you already have committed numbers, they are not the most efficient thing to use. They are fast to produce proofs with, but the proofs are going to be large, and each proof would have to include the details of all sorts of elliptic curve operations inside it, which is very frustrating and really limits what you can do.
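
For contrast with the arithmetic circuit shown earlier, here is the boolean-circuit view of a statement, a toy one-bit full adder built from logic gates; ZKBoo-style proofs work over circuits like this:

```python
# A 1-bit full adder as boolean gates (the representation ZKBoo consumes).
def full_adder(a, b, c_in):
    s1 = a ^ b                      # XOR gate
    total = s1 ^ c_in               # XOR gate
    carry = (a & b) | (s1 & c_in)   # AND, AND, OR gates
    return total, carry

assert full_adder(1, 1, 0) == (0, 1)   # 1 + 1 = 10 in binary
assert full_adder(1, 1, 1) == (1, 1)   # 1 + 1 + 1 = 11 in binary
```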

Bulletproofs vs other systems

Compare that to bulletproofs, which, like SNARKs, are very small: only a couple of kilobytes in many of these contexts, although not always so small. They grow logarithmically in size, so eventually they become as large as you want. If you had a circuit with 2 to the millionth gates, I mean, you couldn't fit something that big in any part of the observable universe, but in principle you could do it, and then you would get a megabyte bulletproof. SNARKs, on the other hand, are always about 200 bytes, no matter how far beyond the limits of the universe you want to imagine. They are small. For rangeproofs, bulletproofs are less than a kilobyte, and for the kinds of circuits I've been looking at, they are 1 to 2 kilobytes. The verification time, for at least small problems, is reasonably fast: for rangeproofs it grows very slowly, and the numbers are practical to verify, for rangeproofs and hashes. SNARKs always have constant verification time, something like 5-7 milliseconds; I haven't benchmarked on modern hardware. The original SNARK paper was in 2013, they used a 2 GHz system or whatever, and they came out at about 10.5 milliseconds. ZKBoo has comparable verification time but large proofs, and ZKBoo doesn't support Pedersen commitments. So that's where we fit: specifically in the context of confidential transactions and elliptic curve keys, we're the fastest, smallest thing around.

What do these numbers look like for circuits? The classic benchmark is sha256, which everyone is familiar with. This is the kind of thing where ZKBoo shines: sha256 is described in terms of a whole bunch of boolean logic gates, it's all boolean arithmetic. But as an arithmetic circuit, sha256 is kind of a pain to implement; it's going to be 20,000-30,000 gates. For something like bulletproofs, STARKs, or SNARKs, it's not the best hash to use. Instead, I am going to use a Pedersen hash, which is similar to zcash's hash over their Jubjub curve. You take your secret input, you break it up into bits, you assign some random curve point to each bit position, and you keep a running sum: whenever you have a 1 bit, you add the corresponding curve point. Your hash is the sum of those random curve points. I have implemented this for 768 bits; I ran it on my laptop, and it took about 1.25 seconds to prove a preimage of this hash. I used 768 bits instead of 512 bits because, in my implementation, it's just as fast to verify 768 bits as 512 bits, so I just went as high as I could; in the future, it will be faster for 512 bits. So it's about 1.3 seconds to prove, and verification is 72 milliseconds, which is not great: 72 milliseconds for one hash, and you'll need multiple hashes and so on. But the batch verification trick that I showed you for rangeproofs applies to circuits too, and my Pedersen hashes are only 5 milliseconds per additional hash: 72 milliseconds for the first one, and the rest are just 5 milliseconds each. As I mentioned earlier, the size of both of these proofs is about 2 kb; the Pedersen hash one is under 2, and the sha256 one is....
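
Here is a small sketch of that bit-sum construction, again in a toy multiplicative group with made-up parameters. A real Pedersen hash uses a large elliptic curve group, and the per-bit points must have unknown discrete-log relations to each other; deriving them as powers of g here is purely a convenience for the toy:

```python
import hashlib

# Toy group as before (illustrative only): order-q subgroup mod p.
p, q, g = 23, 11, 4

def derive_point(i):
    """Derive a per-bit group element, nothing-up-my-sleeve style."""
    e = int.from_bytes(hashlib.sha256(b"bit-%d" % i).digest(), "big")
    return pow(g, e % (q - 1) + 1, p)   # nonzero exponent in 1..q-1

def pedersen_hash(bits):
    acc = 1                              # identity element
    for i, b in enumerate(bits):
        if b:                            # "add" (multiply in) the point
            acc = (acc * derive_point(i)) % p
    return acc

digest = pedersen_hash([1, 0, 1, 1, 0, 0, 1, 0])
```

The attraction for bulletproofs is that this hash is pure group arithmetic, so proving a preimage takes on the order of one constraint per bit instead of the 20,000-30,000 gates of sha256.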

Other applications of bulletproofs

... difference in space. So I'm giving you these numbers for context, to show that this is a practical zero-knowledge proof system. I am going to end by giving a list of applications. We have a paper submitted to the IEEE Security and Privacy conference; the final version of the paper is due in some number of hours. This is very new stuff. All I can do is try to share my excitement.

Rangeproofs are one application, which I have talked about. Merkle proofs are another. Proofs of solvency are a specialized application of that: you have a bunch of coins, you put them into a giant merkle tree, and then you prove that you have some outputs that are in this giant merkle tree and that those outputs add up to some amount you're claiming to control. You could use this in principle to implement what zcash does: verifiers have an accumulator, and you use bulletproofs to produce proofs that you're spending real coins and spending the amount you claim to be spending. With the circuits I've implemented now, where SNARKs take a few milliseconds, it would be a second or two to verify these, and with batch verification it's down to a couple hundred milliseconds, which is not practical on-chain yet. But for off-chain applications, a few hundred milliseconds is no big deal.

Multisignatures with deterministic nonces

Another fun application is multisignatures with deterministic nonces. Here I have to get a bit more technical than I wanted, but this is an important application. In a lot of different interactive multisignature protocols, there is a requirement that every party generate a secret nonce, an ephemeral keypair they use just for building this one signature. The signature is generated using the nonce, the actual key, and whatever message they are signing. The nonce is used to blind the secret key, so that when parties do computations with their secret keys, observers are left with one equation and two unknowns, which can't be solved; this prevents anyone from learning anything. It's critical that nobody ever uses the same nonce in two different signatures. If they did, then rather than having a new unknown for every equation, observers suddenly have a solvable system, and they can solve for the nonces and the private key, and it's game over.

In multisignatures, every party does this, and there's an actual attack here. One party plays the protocol forward until the other party produces a nonce and does their part, then goes offline, comes back, and tries to restart the protocol. There's a standard protocol for generating nonces deterministically, called rfc6979, and an honest person following it will generate the same nonce twice. Generating the same nonce twice for the same message is okay, because it produces the same outcome; but here the counterparty contributes its own nonce and its own part of the message, so on the restart the honest party ends up signing a different message with the same nonce, revealing their private key to the attacker. To prevent this, all parties need to either always produce random nonces or always produce deterministic nonces; the attack works when one party is allowed to vary what it does while another party does not. There are many problems with random nonces, which is why people like the rfc6979 thing: with random nonces you need a reliable source of randomness which must not be biased, and you need to source entropy from somewhere; with deterministic nonces, you only need randomness for your private key, and then you're good forever.

If there were a way for every party to prove that they are generating their nonces deterministically, per rfc6979 or similar, then everyone could prove they are acting honestly, even after coming back from being offline, and that solves the attack. But you can't directly prove that you're generating your secret nonce honestly, because you can't reveal any properties of it: any property that someone could verify from public data would be enough for them to start breaking the scheme. With bulletproofs, or any zero-knowledge proof, we can solve this: you can prove that your nonce was generated deterministically, by running the nonce derivation through something like a sha256 circuit, and showing that you produced a deterministically generated nonce. Every participant can prove that to every other participant, and then everyone knows that even if they just woke up from one of Nick Bostrom's thought experiments, with no memory of interacting with the parties they were previously interacting with, those people are acting exactly as they did every other time. There's no possibility of parties acting one way now and another way previously, thanks to these zero-knowledge proofs.
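
The "solvable system" step is easy to see in code. Here is a toy Schnorr setup, with made-up small parameters and a toy challenge function standing in for a real hash, where signing two different messages with the same nonce lets anyone recover the private key:

```python
# Toy Schnorr signatures over an order-q subgroup mod p.
# All parameters and the challenge function are illustrative only.
p, q, g = 2039, 1019, 4      # p = 2q + 1; g generates the order-q subgroup

def challenge(R, msg):
    # Stand-in for hashing (R, msg); a real scheme uses e.g. sha256.
    return (R + msg) % q

def sign_with_nonce(sk, k, msg):
    R = pow(g, k, p)
    e = challenge(R, msg)
    return R, (k + e * sk) % q           # s = k + e*sk  (mod q)

sk = 7
k = 5                                    # the SAME nonce reused: the bug
R1, s1 = sign_with_nonce(sk, k, 100)
R2, s2 = sign_with_nonce(sk, k, 200)

# Two equations in two unknowns (k and sk) are now solvable:
#   s1 = k + e1*sk,  s2 = k + e2*sk  =>  sk = (s1 - s2) / (e1 - e2)
e1, e2 = challenge(R1, 100), challenge(R2, 200)
recovered = ((s1 - s2) * pow(e1 - e2, -1, q)) % q
assert recovered == sk == 7
```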

Scriptless scripts

There's another technology that bulletproofs can be used with, called scriptless scripts. This is a way to do all sorts of cool smart contracts using only digital signatures. Scriptless scripts exploits the linear property of Schnorr signatures. Nobody can tell details about these scripts. This is using an older form of zero-knowledge proofs called a sigma protocol which is inflexible and requires specialized cryptography development work. But you could do this now with bulletproofs, which could be extended to allow weird assets that are functions of other assets-- crypto derivatives perhaps. That should be the slogo for bulletproofs, really. That's it. That's all. Thank you.