Mining, block size, etc.

gmaxwell

2015-11-09

https://www.youtube.com/watch?v=RguZ0_nmSPw

So I am really excited to introduce our next speaker, Greg Maxwell. Greg, well, there are people who talk and there are people who do, and Greg is definitely a doer. He is one of the most accomplished, and certainly one of the most helpful and most active people, not just in terms of commits that we see, but also things we don't see, like input regarding bugs, help to other developers, and being a great voice in this industry. Also, he is with Blockstream, one of the most exciting companies employing several Bitcoin developers. Everyone is a volunteer. His company has recently had a major product announcement. He is taking his time to just help the space, and we really appreciate this. Thank you very much for being here. (applause)

Great, thank you. And thank you for crediting the work I do, but the work I do is only possible because I have a lot of help. I think one of the most exciting things about working in the Bitcoin ecosystem is the tremendous diversity of work we can do, the number of things that need to be done, the kinds of things that need to be done, and the fact that there's a chance to take some of the most bleeding-edge technology and put it into use where it will have a real impact on people. This has been a challenge in areas like academic cryptography. There have been decades of work inventing interesting, powerful protocols that do unique things, which have not been turned into usable software. Prior to the advent of cryptocurrencies like Bitcoin, there wasn't much of an application. If there is something you can do with fancy crypto, but you can also do it with a trusted server, well, if you already need a trusted server to do what you're doing, why use the fancy crypto? In Bitcoin, we have an environment where the value comes from mitigating trust in systems, so now this fancy crypto has an application. So I and other people can go in and have a lot of fun taking real science and putting those results into the world so that others can use them.

Today I want to talk about some of the interesting technical things I have been working on. Bitcoin has a huge scope of things to work on. It's challenging for me to figure out what areas people are going to be most interested in, and where I could best be spending my time today. So I want to cover four or five things. I am fully willing to take interruptions, so if people want to raise hands and ask questions, that's fine. I don't have a fixed agenda, so a tangent is fine if we have a good discussion.

You can work at a very fine scale in Bitcoin and do valuable work, or you can step back and do longer-term work, or you can go very long term and think about economics and incentives and the wider ecosystem. I try to spread my time, load-balancing across these areas, so that I can contribute to progress in each of them. The notes that I have for today are ordered in that way, talking about shorter-term things first and then longer-term, unless stray hands show up.

I have been working on Bitcoin very extensively for over 4, almost 5, years now. For me, especially these days, Bitcoin is well beyond a full-time job; I work 7 days a week and every waking hour. The reason I am able to do this and still enjoy it is the big diversity of things to do.

There is a new update out, Bitcoin Core 0.11.1, with backports to older versions, that fixes an urgent security vulnerability. The headline reason for the release was this UPnP vulnerability. You need to either upgrade or turn off UPnP; it's a pretty nasty vulnerability. There are some other things that changed in these latest updates; some of them are responses to ongoing attacks.

There's an interesting history with Bitcoin in that we have been tremendously fortunate that the levels of attacks on the network are far lower than they could be. Historically we could have had more attacks. A lot of the people who would be most able to attack the Bitcoin system look at Bitcoin, find it really cool, and decide not to attack it. So we have seen some benefits from this; we have received security tips from really blackhatty people. They are out there maybe blowing up altcoin software, but they are willing to give tips to the Bitcoin developers. We have done well; there have been attacks, but we've done well. Recently there has been a real increase in attacks. Most of them have been ineffective. One of the recent ones has been a new round of malleability attacks. An unfortunate property of Bitcoin is that the txid of a transaction can be changed by a third party after the transaction has been broadcast. The reason for this is that the txid is a cryptographic hash of the entire transaction, and that includes the signatures, which means that people can add bytes to the signature to change the hash. Many other cryptographic systems are vulnerable to this kind of malleability, but it doesn't really matter for them. In Bitcoin, changing the txid after announcement really confuses users. It confuses some wallet software, when it thinks it made a payment and then another one shows up. Wallet authors have gotten more mature about malleability, but it's still a nuisance.
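As a rough illustration (editorial, not code from the talk), here is a minimal Python sketch of why signature bytes affect the txid: the txid is the double-SHA256 of the whole serialized transaction, signatures included, so mutating even one signature byte produces a new txid. The byte strings are hypothetical stand-ins for a real serialized transaction.

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    """Double-SHA256 of the serialized transaction, displayed byte-reversed
    as Bitcoin conventionally does."""
    h = hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()
    return h[::-1].hex()

# hypothetical serialized transaction: version || inputs+signature || outputs
original = b"\x01\x00\x00\x00" + b"...inputs+SIGNATURE+outputs..."
# a third party tweaks the signature encoding without invalidating it,
# e.g. adding a padding byte (the kind of mutation encoding rules now block)
malleated = original.replace(b"SIGNATURE", b"SIGNATURE\x00")

print(txid(original))
print(txid(malleated))  # different txid, same effective payment
```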

There are about a dozen different ways for a transaction to be modified in the basic design of Bitcoin. As they were discovered, we closed them off, so that nodes would not relay or mine transactions that were using flexibility in the encoding that they didn't really need to have. There was one malleability, found by Sergio Lerner I believe, which wasn't about serialization or bit encoding, but an intrinsic algebraic malleability in ECDSA. If you have a valid ECDSA signature, there is another valid signature you can convert it into using some algebra on the values. And this is another way that people could modify transactions. The problem is that wallets on the network, when this was discovered, were producing both forms of this at random: of the two possible values, sometimes they would produce one and sometimes the other. So we couldn't just block the excess flexibility there. Fortunately nobody was attacking using that particular malleability; it's very slightly trickier to actually modify transactions with it, because you have to do some algebra to make the change. So what we did in Bitcoin Core, about 2 years ago, was standardize on one of the two possible signatures for every case, whichever of the two was smaller, and we started producing transactions that met that criterion, and we encouraged other wallet developers to do the same. Unfortunately the other kind is still widely used on the network. In the last several weeks, someone has been performing malleability attacks using this method. I went and ran statistics on this and found that roughly 95% of these transactions on the network were complying with this low-s anti-malleability rule already, so this opens up the possibility of requiring transactions to meet this criterion for relay. The downside is that this would block the 5% of transactions which are non-conforming, which would give those wallets a reason to upgrade their software. Now this is a little unfortunate and I would rather not do that, but if the choice is between allowing 95% of users to be disrupted versus causing a disruption for 5%, that seems like a no-brainer. So Bitcoin Core 0.11.1 and friends do enforce this low-s anti-malleability rule. Once this is widely deployed, the nuisance malleability will be gone for good. We're not aware of other kinds of nuisance malleability at the moment.
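A minimal sketch of the low-s rule described above (an editorial illustration, not code from the talk): for secp256k1's group order N, both (r, s) and (r, N - s) verify for the same message and key, so the canonical form keeps whichever s is smaller.

```python
# secp256k1 group order (a public curve constant)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def normalize_low_s(r: int, s: int) -> tuple:
    """Return the canonical low-S form of an ECDSA signature (r, s)."""
    if s > N // 2:
        s = N - s  # the algebraic twin of the same signature
    return r, s

def is_low_s(s: int) -> bool:
    """The relay rule enforced starting in Bitcoin Core 0.11.1."""
    return 1 <= s <= N // 2
```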

There are other kinds of malleability, like for smart contracts. BIP62 was an attempt to write a spec to limit that. There are many rules, it's difficult to get right. But I'd be happy to get nuisance-malleability out of the network.

A few months ago, BIP66 activated on the network. It was a change that required strict encoding for signatures, related to malleability. It removes several of the malleability vectors, not just for random attackers but also for miners. The real motivation for BIP66, which wasn't disclosed at the time, was that it removed a consensus vulnerability: basically every implementation of the Bitcoin protocol was at risk of disagreeing with the other implementations as a result of discrepancies in signature parsing. OpenSSL was the de facto implementation of signature validation, and it was inconsistent with itself depending on what platform it was running on, 64-bit versus 32-bit. So BIP66 required a very narrow encoding for signatures and removed these problems.
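For the curious, here is a Python transliteration of the strict encoding rules BIP66 specifies (a signature is 0x30, a length, then 0x02 lenR R, 0x02 lenS S, followed by a sighash byte); the actual consensus code is the C++ in Bitcoin Core's script interpreter, this just sketches the rules.

```python
def is_valid_der_signature(sig: bytes) -> bool:
    """Strict-DER check in the spirit of BIP66 (sig includes sighash byte)."""
    if len(sig) < 9 or len(sig) > 73:
        return False
    if sig[0] != 0x30 or sig[1] != len(sig) - 3:
        return False
    len_r = sig[3]
    if 5 + len_r >= len(sig):
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):
        return False
    if sig[2] != 0x02 or len_r == 0 or (sig[4] & 0x80):
        return False  # R must be a positive, minimally encoded integer
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False  # no excess zero padding on R
    if sig[len_r + 4] != 0x02 or len_s == 0 or (sig[len_r + 6] & 0x80):
        return False  # same rules for S
    if len_s > 1 and sig[len_r + 6] == 0x00 and not (sig[len_r + 7] & 0x80):
        return False
    return True
```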

When BIP66 activated on the network, we had an unwelcome discovery. There were some forks that formed on the network because half, or more likely more than half, of the network hashrate was mining in some conditions without actually validating blocks. They were signalling that they would support BIP66, but not only were they not validating BIP66, they were not validating anything at all. This severely undermines the Bitcoin protocol. They didn't do this always; they basically only did this when they thought another miner was ahead of them on the network, when there was a competing chain or a chain that they hadn't synced up to yet because it was invalid, and they were aware of it and would extend it. This seemed absolutely crazy to me. It's incredibly risky to every user of the system. I wanted to understand more about why they were doing this. They actually had to write software to do this sort of "validationless mining" (SPV mining). Getting people to upgrade software is a challenge, so the fact that they wrote software to do this suggests that they really had a reason to do so.

Initially I believed that their reasoning was historic: if you look at how long it takes for blocks to propagate, in the past it has been quite long at times, but newer technology like BlueMatt's relay network allows much faster propagation. I was going to go convince them that they didn't need to do "SPV mining" anymore. So I went to the data. A friend of mine has been collecting information on mining pool activity for some time now. He has been connecting to every available mining pool, asking them which block they have been working on, and logging this. Using this data, you can see that pool A is working on building a block on top of block X, and then you can see pool B is also working on top of block X, and now you know how long it took pool A versus pool B to learn about block X, and you can use this to figure out how long blocks take to propagate.

The data I have goes back 100 days, with 22 mining pools under observation. Just looking at the last 35 days of the data, it's about 1500 blocks. There are some interesting results in it. If you take whenever a block is first seen on the network by a pool, you can measure how long until half of the pools have seen it, versus how long until all of the pools have. The time I am observing from the first pool seeing it to half of the pools seeing it has a median of 5.72 seconds. That's actually really long. The pools at the median are guaranteed an orphan rate of at least 1%, and the pools past the median point have an even higher orphan rate. The data has a lot of variance. The time to reach half of the pools goes anywhere from 300 milliseconds, which would be a case where a pool not observed by this system found the block first and broadcast it so they all saw it at roughly the same time, to a maximum of 226 seconds, which is maybe a case where there was some disruption on the internet. The mean is around 8.8 seconds for time to reach half of the 22 pools. You can instead look at how long it takes to get to the last or second-to-last pool. The time to get to the second-to-last pool is 31 seconds at the median. I am using the second-to-last because there's often a pool that is broken, usually a different one each time. The number for the last pool is like twice that, and I don't think that's a good metric. So 30.9 seconds translates to an orphan rate of over 5%. That's obvious justification for why miners would care about bypassing validation and extending other miners' blocks early.
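A minimal sketch of that measurement methodology (editorial, assuming a hypothetical log of pool, block, and first-seen-time observations); the orphan-rate estimate is the chance a competing block appears during the lag, which reproduces the roughly 1% and 5% figures above.

```python
import math
import statistics

def propagation_stats(observations):
    """observations: iterable of (pool, block_hash, seen_unix_time) tuples."""
    by_block = {}
    for _pool, block, t in observations:
        by_block.setdefault(block, []).append(t)
    to_half, to_second_last = [], []
    for times in by_block.values():
        times.sort()
        first = times[0]
        to_half.append(times[len(times) // 2] - first)   # delay to median pool
        if len(times) >= 3:
            to_second_last.append(times[-2] - first)     # delay to 2nd-to-last
    return statistics.median(to_half), statistics.median(to_second_last)

def expected_orphan_rate(delay_seconds: float, block_interval: float = 600.0):
    """Chance a competing block is found while a pool lags by delay_seconds:
    1 - exp(-5.72/600) is about 1%; 1 - exp(-30.9/600) is about 5%."""
    return 1.0 - math.exp(-delay_seconds / block_interval)
```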

What's interesting about this data is that because it's measured from actual miner output, it excludes the base latency caused by Bitcoin Core, because that latency would have been experienced by the first pool as well. This is all latency from the network and other parts of the stack. My next step in this research is to figure out which parts of the stack can be optimized.

(mining work delay graph shown on screen)

Q: (inaudible)

A: So, it depends; the question was what the estimated time to validate a block is compared to the time to propagate it. In Bitcoin Core, we use extensive caching. If all of the transactions have already been seen, then it's sub-second to validate a block, because we don't have to do any ECDSA. We have done a bunch of work to minimize that latency. One implication of these delays is that more latency in the mining process means that mining has more "progress", which creates an unequal advantage for larger miners. If you are the largest miner, you are at an advantage because others are waiting to receive your blocks. So we optimize this to reduce centralization pressure. This is especially stark when you consider Matt's relay protocol, which allows most blocks, where the transactions are all already known, to be sent across the internet in a single packet. So it's not the time to transmit the blocks. Interestingly, the data shows a significant dependency of these latency numbers on block size. Even with the block relay protocol, there is size-dependent latency here, such as the block taking longer to validate the bigger it gets.

I fit a linear model to the data: an intercept for constant delay, and then a slope. How does the block size affect the delay? You can take the slope of that and convert it into an effective bandwidth. If the slope is very high, the size is substantially affecting delay, which implies low bandwidth. The linear model shows that for the median time, that 5.72 second number, the effective bandwidth is only 723 kilobits/second. That sounds surprisingly low, except that it is measuring not just bandwidth but also all of the computational time and things like that. If you have ever watched a download on your computer, you see it starts off slow, and then the TCP window opens and it gets faster. We are always in that sort of "slow start" period; not technically slow start, because the connection has been open for a while, but it's the case that we don't get all of the available bandwidth. For the second-to-last miner, the effective bandwidth is 212 kilobits/second.
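A sketch of that linear fit, with hypothetical placeholder data (not the talk's dataset); the reciprocal of the slope, read as bytes per second, is the "effective bandwidth".

```python
import numpy as np

# hypothetical observations: block size in bytes, delay to the median pool
sizes = np.array([200_000, 400_000, 600_000, 800_000, 1_000_000], dtype=float)
delays = np.array([2.9, 5.1, 7.3, 9.4, 11.8])  # seconds

slope, intercept = np.polyfit(sizes, delays, 1)  # seconds/byte, base delay
effective_kbps = 8.0 / slope / 1000.0            # bytes/s -> kilobits/s
print(f"base delay ~{intercept:.2f}s, effective bandwidth ~{effective_kbps:.0f} kbit/s")
```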

So I have analyzed the data in a bunch of ways. I have analyzed it per pool, and there's a spread per pool; this is not concentrated at a single pool. I don't know all the causes. Figuring this out has a major effect on fairness. The more latency, the more pressure there is to consolidate mining, and we have seen tremendous mining centralization and consolidation pressure. So it's good to cut that out...

Talking more about long-term performance, I want to give an update on libsecp256k1. This is a library being created by the Bitcoin Core team, mostly the work of Pieter Wuille, one of the BouncyCastle library developers, myself, and a host of other people. This library implements the elliptic curve cryptography used in Bitcoin, an oddball curve that was standardized years ago but not widely used. It started in 2013, based on work by Hal Finney, who was experimenting with faster validation. On 64-bit hosts, depending on which options are enabled, it is on the order of 5-8x faster than OpenSSL at verifying. This is a direct improvement both to propagation times, to the extent that validation matters and isn't pre-cached, and to initial block download.

We have also used libsecp256k1 as a testbed to experiment with new ideas for making high-assurance software in this space. We have done the basic stuff that every cryptography library should do, but also some other interesting techniques. We implement constant-time, sidechannel-attack-resistant signing, and this library is used for signing only in Bitcoin Core because it has been the only strongly sidechannel-attack-resistant implementation of this curve. The trezor library finally has some sidechannel-attack resistance, but it is not as hardened as libsecp256k1. So we use it in Bitcoin Core, but only for signing, and we also double-check all of its results, so we were very comfortable deploying it in Bitcoin Core while we were still developing it. We have tried new mechanisms in software validation and testing, and experimented with things that we want to use with Bitcoin consensus code in the future. It's a small library, so it's good for this testing. It's only 22 kLOC, and about 20% of that is tests and assertions. If you go through the source code history, it's basically been rewritten twice, so it's had a lot of time to mature and the architecture is pretty good. The testing in this is unusually strong. The automated testing that runs whenever you build it achieves effectively complete line coverage of the code, about 99.1%; it also achieves 87.1% condition/decision branch coverage, which is mostly unseen outside of aerospace applications, where things are tested to that level.

There is an industry standard called MISRA, originally an automotive standard, but now targeted at life-safety-critical software generally. We strive for MISRA compliance; we're not quite there, but we're getting close.

We have computer-checked validation that the source code does what we intend it to do. It is difficult to apply this to whole programs, but we have applied it to segments of the source code. It is the only implementation of the secp256k1 curve that has any formal verification, that I'm aware of.

So we're moving towards doing a finally versioned release, where we say that people should be using this. Even though the README says don't use it, a lot of people are using it. It really is the best implementation already, but we wanted formal algebraic validation of our ECC group law, and that was holding us up for a while. Over the past couple of weeks, we have succeeded in doing that. We have three separate strategies for proving the group law correct. We are planning to run a bounty, but not an ordinary bug bounty. The library is very heavily reviewed and very heavily tested; people might find bugs, but it's not a good use of time to go try to find a bug in something that probably doesn't have any. So instead, I want people to add bugs: a plausible bug that the tests don't find. If the tests pass when you add a bug, I would consider that a bug in the tests. I think that's a fair kind of bounty, and I'm not aware of anyone trying this strategy before.
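The bounty idea is essentially mutation testing. A toy sketch of it, with hypothetical paths and build commands (illustrative only):

```python
import pathlib
import subprocess

def surviving_mutant(source_path: str, original: str, mutated: str) -> bool:
    """Inject a small source mutation, run the test suite, restore the file.
    Returns True if the tests still pass, i.e. the mutant survived."""
    src = pathlib.Path(source_path)
    code = src.read_text()
    if original not in code:
        return False
    src.write_text(code.replace(original, mutated, 1))  # inject the bug
    try:
        result = subprocess.run(["make", "check"], capture_output=True)
        return result.returncode == 0
    finally:
        src.write_text(code)  # always restore the original source

# e.g. surviving_mutant("src/field.c", "a + b", "a - b"); a survivor would
# itself count as a bug in the test suite.
```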

Our testing also validates against OpenSSL, and we have found two major bugs in OpenSSL this way. We plan to have a release prior to Bitcoin Core 0.12, and as a result 0.12 will speed up initial block download; at the moment that is held up by validation. So we are looking at taking initial block download from 1 day down to about 3 hours, which is pretty cool. That's neat, but it's also frightening to me. If you look through the history of Bitcoin Core, you can see statements about reducing initial block download time several times; we did it in 0.4.3, 0.5.0, 0.6.0, 0.8, 0.10... not all of them were quite as large as that, but it happened 4 or 5 times, from a day down to 4 or 5 hours. Back in 2011 and 2012, lots of ideas came up about how to make the system much faster, and over time they were implemented and deployed onto the network. It's not that it took that many years to implement them, but rather that there are always so many things to do in Bitcoin that sometimes a sync taking 3 hours is not a priority, but taking a day, sure. We have a few more ideas to speed up initial block download, but most of them, to move beyond a factor of 2 or so, have to start making tradeoffs that change security in significant or sometimes insignificant ways. In the future, it will be a different regime. This is the last of the low-hanging fruit for initial block download speedups.

Stepping back again a little more broadly, I have been working a lot on trying to expand the ecosystem of people working on this. We need more people working on this. There's no one answer to that. There is an IRC channel called #bitcoin-wizards on freenode IRC. Wizards was created as a spin-off of the #bitcoin-dev IRC channel because petertodd and I were flooding the channel with long-term scalability thoughts, so we moved it into a separate public channel to get it out of the day-to-day channels. Bryan Bishop (kanzure) has gone through this and made an epic collection of thousands of links cataloging technical proposals in the Bitcoin space. Some of them are hare-brained but others are brilliant. It's quite humbling to find that the awesome idea I had was actually invented twice before, both times by other people who never talked with me. Sometimes ideas stall because they needed more hands; it's good to have that collection building up, and it would help to synchronize things. There's often a gap when talking with the academic community, because they don't know what's important to the industrial ecosystem, and we don't know what's important to academia. So having these tools is important.

Q: So this is a question about bringing in home-grown developers in Africa, and how do you grow Bitcoin Core developers?

A: I can't say that I have a lot of experience with that. Not enough of it is going on. You don't need tremendously powerful hardware to work on Bitcoin Core. You can use a 20-year-old computer; syncing to the network won't be quick, but it will work. Limited-resource parties are often the first to report scaling limits. If it's really slow for someone on a slow machine, it's going to be even slower at 10x the scale for everyone. There are no commercial barriers preventing people from getting into Bitcoin Core. It's just a matter of having the spare time to wade through the huge amount of information, dive in, and make use of it. I'm definitely interested in efforts to expand the community in those ways.

So, expanding the scope yet again, I thought I would give an update about what has been going on with sidechains. Sidechains are a proposal that grew out of, from my perspective, particularly for pegged sidechains, some mailing list traffic, as many things do in this space, where Adam Back basically asked what it would take to have a testing bitcoin network. One of the challenges in this space is that you don't want to risk or disrupt a huge billion-dollar economy to try out some experimental technology. You also don't want to have to get people to buy into new technology just to try it out. So how can you have a testing network without disrupting stuff? Well, one answer is to use an altcoin. One of the problems with altcoins, the one that causes me the most concern, is that the primary interest has been speculation, people looking for the value to go up. This isn't really compatible with lots of technical innovation and technical risk. The speculation side likes announcements more than coding. So the altcoin space isn't really a good space for technical innovation. Litecoin is easily one of the most successful altcoins, and coblee showed you the entire difference between litecoin and bitcoin on the screen earlier today. As a technology play, it's not that exciting to me. So Adam didn't think that altcoins were a good answer for this problem, because if altcoins are successful, they disrupt the bitcoin network effect. Why do people think bitcoin is valuable? Well, it's because other people accept it, and you don't want to disrupt that network effect.

So how can you get innovation in bitcoin? adam3us proposed the idea of a one-way peg, where you start up a separate system, provably burn bitcoin in the bitcoin chain, and then have programmatic rules to make it appear in the second chain, so people could migrate into a new system with new features, voluntarily and on their own time frame. The problem is that you can't go back. You could move into one of these systems, find that it's not the winner, and then be stuck in that system. The only way to get out is to get people to buy your coins there, and that's not a good model. So after that, I came up with a proposal to use a fancy cryptographic protocol to go the other direction, so coins could move in and also move back. This became the sidechains whitepaper. Myself and a number of other developers formed a company to make this a reality, because it required far more development effort than we had available amongst ourselves.

There is a sidechain running today, Elements Alpha; it's a 2-way peg to testnet. It's been out for 3 or 4 months now. It's based on Bitcoin Core, and it implements the 2-way peg mechanism from the sidechains whitepaper. It does so strangely: it's a hybrid implementation where the sidechain uses the cryptographic protocol to move coins from Bitcoin into it, but to move coins back, it uses a multisig trusted federation; if 5-of-7 of the federation sign, the coins can move back. We didn't have to change testnet to make this possible. Inside the Elements Alpha sidechain, we went wild with a bunch of cool technology. I could talk all day about all the things in there. I don't think people have fully appreciated everything in there; maybe we put in too much. We did a lot of stuff that we want in Bitcoin but that was too complicated or too disruptive to deploy there.
Elements Alpha has a robust and deep malleability fix that fixes all kinds of malleability, not just the nuisance kind but also the kinds of malleability that mess up smart contracts. There's also a reduced-security initial sync, which avoids transmitting signatures if you're not going to verify the signatures in historic blocks. So that's a cool thing, but it requires restructuring how the merkle tree commitments in blocks work and how the txids are computed. So it's difficult to figure out how to deploy that in Bitcoin proper.
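A hypothetical simplification of the deep malleability fix (editorial, not Alpha's actual serialization): compute the txid over everything except the signature data, so third parties mutating signatures can no longer change the txid, and keep a separate hash over the full transaction where it's needed.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def txid(inputs: bytes, outputs: bytes, signatures: bytes) -> str:
    """Transaction id excluding signatures: immune to signature malleation."""
    return dsha256(inputs + outputs)[::-1].hex()

def full_hash(inputs: bytes, outputs: bytes, signatures: bytes) -> str:
    """Hash of the complete transaction, signatures included."""
    return dsha256(inputs + outputs + signatures)[::-1].hex()
```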

In Elements Alpha, we deployed an upgraded smart contract system. We turned on all of the originally-disabled opcodes. They were considerably tested to check safety, and fixed where unsafe. With those disabled opcodes re-enabled, many more things were possible. After Elements Alpha was released, Pieter used this to develop a key tree signature multisignature scheme, allowing for things like 2-of-1000 signatures efficiently. It was constructed not as a built-in feature of Elements, but just using Script.
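The core trick in key tree signatures is committing to many keys with a single Merkle root and then proving membership of the key actually used with a log-sized path; an editorial sketch of that tree follows, with the Script glue omitted (for M-of-N, the leaves can enumerate the allowed key combinations).

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    level, path = [H(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])  # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

keys = [f"pubkey-{i}".encode() for i in range(1000)]   # hypothetical keys
root = merkle_root(keys)
proof = merkle_path(keys, 42)
assert verify(root, keys[42], 42, proof)  # only ~10 hashes for 1000 keys
```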

We switched to the more flexible Schnorr signature type, which allows for efficient batch validation. We fixed nits like having the signatures cover the values of the outputs explicitly, allowing for easier HSM implementations. CLTV and CSV are now on the road to being deployed on Bitcoin mainnet, but we were able to try them out inside Elements first, which was neat. Also implemented in Elements is support for native assets, like colored coins that aren't a hack, where the system tracks different asset types, keeps them apart, and can do cool things with them.

Each one of these things that I have mentioned was primarily work by other people. The area where I worked the most was confidential transactions (CT), which is a feature that makes the values of transactions private, so that the amount is only known to the participants in the transaction. This is a privacy technique that is very different from other cryptocurrency techniques, which usually try to hide the transaction graph. But that's kind of odd, because that's hiding the metadata of the transaction; when we try to hide things, we usually try to hide the payload, and that's what CT does. Unlike every other cryptographic privacy scheme for cryptocurrencies, it doesn't fundamentally change the scalability of the system. Most of the other systems that try to hide the transaction graph result in ever-growing accumulators for all time. But in CT, since you know which coins were spent, you don't get an ever-growing database. It is slower to validate and makes transactions larger. To pull this off, every CT transaction has a zero-knowledge range proof that shows that the blinded values, the encrypted amounts in the transaction, add up and don't overflow; that you haven't made an output so big that it wraps around and goes negative. This adds about 2 kilobytes to the size of every output in the transaction. It's signature data, so it doesn't go into the UTXO set, but it's a cost. My software validates around 1000 transactions/sec on my old desktop; well, my new desktop has 24 cores, so it's much faster there. I wrote a writeup called confidential_transactions.txt which you can find online. The most complimentary thing I have heard in years is that others say it has helped them understand zero-knowledge range proofs, and multiple people have said this, so maybe it's even true. So if you're interested, you should check that out.
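As a toy illustration of the homomorphic commitments CT is built on (using a multiplicative group mod a prime rather than secp256k1, with no range proof and deliberately insecure parameters): commitments to the amounts can be checked to balance without revealing them.

```python
# Toy Pedersen-style commitment: commit(v, r) = g^r * h^v mod p.
# Real CT uses secp256k1 points and attaches a range proof so a value
# cannot wrap around and go negative; none of that is shown here.
P = 2**127 - 1   # a Mersenne prime; toy modulus, NOT secure
G, Hgen = 3, 7   # toy generators

def commit(value: int, blind: int) -> int:
    return pow(G, blind, P) * pow(Hgen, value, P) % P

# one input of 10 splits into outputs of 3 and 7; the blinding factors
# must also balance (42 = 40 + 2)
c_in = commit(10, 42)
c_out = commit(3, 40) * commit(7, 2) % P
assert c_in == c_out  # amounts stay hidden, balance is still verifiable
```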

One of the major obstructions to Bitcoin technology when talking with traditional institutions has been the privacy issue. You don't want your employees to see your payroll; you don't want your competitors to see your balances. Traditional finance is private by default, and CT addresses this. It has some specific applications: by hiding values, you reduce the ability of individuals to front-run. Just this week, my company announced Liquid, which is a production deployment of this technology.

Q: ...

A: Right, so, I was just expecting that people would appreciate it; people just aren't all aware of it. The whole point of CT being in Alpha is that it's not ready for prime-time. Elements Alpha means it's just an alpha, really. We recognized that people would see "alpha" and think alpha software, and we thought, well, that's fine; it's test software, and even though it's well tested, there are still no guarantees with it. It's basically panning out just as I expected. I'd love to see more people picking this up, like the increased smart contract functionality; all of that can be readily backported into Bitcoin mainnet. Actually doing it would require having use-cases, though, so if people want to use this technology and have a use-case, that's an argument for taking this technology and moving it into Bitcoin mainnet. You can install Elements Alpha; there's a #sidechains-dev IRC channel on freenode IRC; you can start another sidechain if you want, though we could make that easier, I guess. You can transact with people and make fancy smart contracts, and the people who wrote this software are lurking in the IRC channel and are happy to help.

Liquid is an application of this technology to do fast settlement between exchanges, to help mitigate trust between them. They could accept zero-conf transactions from each other, but if they do so, they increase the risk that it isn't one exchange that fails but all of them... So Liquid allows exchanges to deposit coins into the sidechain, where they can rapidly transact amongst each other, with the values all hidden by confidential transactions. When they are tired of moving coins within the system, they can move them out again. It's a very narrow application.

Q: ...

A: So one of the things that Elements Alpha does is replace the code in Bitcoin Core that checks proof-of-work with Bitcoin Script. You could use Bitcoin Script to reimplement the proof-of-work check, but in Elements and Liquid we replace it with a multisignature check. So instead of the normal Poisson probabilistic mining process, the same parties that sign off on the federated 2-way peg also sign off on blocks. Since it's not using the randomized block process, the interblock time is controlled by the signers; they can make a block as fast as they come to agreement. Since there's a fixed set of them, like 5-of-7, they can come to agreement as fast as their internet connections allow. It's an advantage of a design which is not heavily decentralized: since you know the participants, you can converge much faster.
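A sketch of that federated blocksigning rule (illustrative only; the signature check here is an HMAC stand-in, not the real ECDSA/Script check): a block is valid when at least k of n federation members have signed its hash.

```python
import hashlib
import hmac

def verify_sig(key: bytes, msg: bytes, sig: bytes) -> bool:
    """Stand-in for a real signature check, for illustration only."""
    return hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), sig)

def block_valid(block_hash: bytes, signatures: dict, federation: set,
                k: int = 5) -> bool:
    """signatures maps a member key to that member's signature over the hash;
    valid when at least k distinct federation members signed."""
    good = sum(1 for key, sig in signatures.items()
               if key in federation and verify_sig(key, block_hash, sig))
    return good >= k
```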

Q: ....

A: So right now it's a chain-wide setting in Liquid. One of the problems with talking about sidechain technology is that, given the principle of the 2-way peg, what the system can do is limited more by "why would you do that" than by "what can you do". Yes, you can do that, but why?

Q: .... Some of the exchanges, one of the problems, is .. they give some of their friends and families .. large amounts of BTC ... so their friends have this benefit knowing large buys(?).... CTs nobody able to front-run.. but as I mentioned earlier, CT allows you to... so.. am I right to.. exact same risk and doesn't solve the problem?

A: It doesn't solve the problem of exchanges doing front-running themselves. There are some soft tools for that. It's not a decentralized orderbook today; that kind of technology could be built into a sidechain in the future. Doing batch trades to avoid front-running is tricky even with advanced cryptography. CT only means they can't see what third parties are transacting; they can still see the values of their own transactions.

Q: So there's this federated multisig for the peg to Bitcoin, so that's a compromise obviously. If there is a change in Bitcoin, soft-fork or hard-fork...?

A: So the change that would allow Bitcoin to verify the decentralized peg, and not require the federated one, is in Elements Alpha, because we use it in the one direction. You can look at that and find that it's not very complicated; it's about 50 lines of code in Script functionality. It's just a script change, so it would just be a soft-fork. There are many different ways to do it in Bitcoin Core: if Bitcoin Script were more computationally powerful, you could do the decentralized 2-way peg without having any peg-specific mechanism in Bitcoin Core at all. There's no BIP for this right now. I would have liked to have done this about a month ago, but with all of the Scaling Bitcoin things and such, it has been lower priority. I would expect a BIP to be out in the next couple of months, and then it would take time. One of the cool things about having it in the sidechain already is that we can experiment with it and find out what we like about the design and what we don't.
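The kind of check such a script change would add is SPV-style proof verification. A simplified editorial sketch (a real peg proof would track per-header targets and accumulated work, plus a Merkle branch placing the lock transaction in one of the blocks):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_header_chain(headers, target: int) -> bool:
    """headers: list of 80-byte serialized Bitcoin headers, oldest first.
    Checks each header meets the (simplified, fixed) target and links to
    the previous header via the prev-hash field at bytes 4:36."""
    prev = None
    for h in headers:
        if int.from_bytes(dsha256(h), "little") > target:
            return False  # insufficient proof-of-work
        if prev is not None and h[4:36] != prev:
            return False  # broken prev-hash link
        prev = dsha256(h)
    return True
```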

Q: ....

A: Yeah, although with Liquid, because Liquid has this requirement for very fast confirmations, the easiest ways to get very fast confirmations are also the ways that involve centralization, so we want to strengthen that. I am running out of time, but I'll take another question.

Q: What have you learned while working on sidechains?

A: Every application reimplements the bitcoin protocol. There's a bunch of redundant work that goes on out there. When you make radical changes, as in Alpha, it breaks compatibility with everything right away. Even having a block explorer or whatever for the sidechain is a huge amount of work. Making libconsensus will help with reimplementation headaches, so that someone making a sidechain doesn't have to rewrite every bitcoin program. As far as scheduling, it's playing out basically the way I thought. It's not my first time around the bend, so I expected many delays: looking at it with rose-colored glasses, I thought a few months, but then I went and added a factor of 10. I think it's gone alright. I'm not really sure what the future holds, but it will be fun to find out.

I'll be around today so people can grab me to chat.