author Bryan Bishop <kanzure@gmail.com> 2015-12-08 11:11:07 -0600
committer Bryan Bishop <kanzure@gmail.com> 2015-12-08 11:27:08 -0600
commit 807eaf95d5ac397c67b459db4969c083bff1ec0d (patch)
tree b2730877f7796a15c8f6f0bf94e7cede73e3317e
parent a9bf59cccc2675fbad3b897fa99157f617193760 (diff)
include more links in talks
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn  2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet.mdwn  4
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain.mdwn  2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/fungibility-and-scalability.mdwn  8
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn  2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/security-assumptions.mdwn  2
6 files changed, 10 insertions, 10 deletions
diff --git a/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn b/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
index f086fea..e7aea0f 100644
--- a/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
@@ -48,7 +48,7 @@ bip000 is again Tadge beat me on the bip numbering. That's status quo. Keep the
My personal thoughts, this is not speaking for anyone else except for myself, all vendor hats are off now. I think we need a small bump now to gather crucial field data. You can theorize and test and so on, but the internet and the real world are the best test lab in the world. You can only get that full accounting of field data if you actually do a real hard-fork. So the venture capital consensus wants to go beyond 1 MB. The technical consensus is that going above 1 MB is risky. I think it's poor signalling to users.
-We have been kicking the can down the road, we have integrated libsecp256k1 to increase validation speed and reduce validation cost. These are big metrics in our system. We have been making positive strides on this. This should reduce some of the pressure to change the block size. The difficulty is finding an algorithm that cannot be gamed, cannot be bought, and is sensitive to miners. You can get 2 out of 3 but not all 3.
+We have been kicking the can down the road, we have integrated [libsecp256k1](https://github.com/bitcoin/secp256k1) to increase validation speed and reduce validation cost. These are big metrics in our system. We have been making positive strides on this. This should reduce some of the pressure to change the block size. The difficulty is finding an algorithm that cannot be gamed, cannot be bought, and is sensitive to miners. You can get 2 out of 3 but not all 3.
....
diff --git a/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet.mdwn b/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet.mdwn
index 0ff999b..16c3364 100644
--- a/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet.mdwn
@@ -6,7 +6,7 @@ video: <https://www.youtube.com/watch?v=ivgxcEOyWNs&t=2h25m20s>
I am a bitcoin miner. I am a C++ programmer and a scientist. I will be going pretty fast. I have a lot of stuff to cover. Bear with me.
-My perspective on this is that scaling bitcoin is an engineering problem. My favorite proposal for how to scale bitcoin is bip101. It's over a 20-year time span. This will give us time to implement fixes to get Bitcoin to a large scale. A hard-fork to increase the block size limit is hard, while a soft-fork to decrease it is easier, so we should hard-fork once, or at least infrequently, and then apply progressive short-term limitations when we run into scaling limitations.
+My perspective on this is that scaling bitcoin is an engineering problem. My favorite proposal for how to scale bitcoin is [bip101](https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki). It's over a 20-year time span. This will give us time to implement fixes to get Bitcoin to a large scale. A hard-fork to increase the block size limit is hard, while a soft-fork to decrease it is easier, so we should hard-fork once, or at least infrequently, and then apply progressive short-term limitations when we run into scaling limitations.
In theory, well, first let me say bip101 is 8 MB now, and then doubling every 2 years for the next 20 years, eventually reaching 8192 MB (8 GB). This is bip101. That 8 GB would in theory require a large amount of computing power. A 16-core CPU at 3.2 GHz per core. About 5000 sigops/sec/core. We would need about 256 GB of RAM. And you would need 8 GB every 64 seconds over the network. The kicker is the block propagation. With a 1 gigabit internet connection, it would take 1 minute to upload an 8 GB block. You could use IBLT or other techniques to transmit a smaller amount of data at the time that the block is found, basically the header and some diffs.
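
((As a sanity check on these numbers, here is a rough sketch, not from the talk, that computes the bip101 schedule from its published parameters and a naive upload time over a 1 Gbit/s link.))

```python
# Sketch: bip101 block size schedule and naive upload-time estimate.
# Assumes 8 MB starting size doubling every 2 years for 20 years (bip101's
# published parameters) and a 1 Gbit/s uplink; real propagation also
# involves validation and relay, so this is a lower bound.

LINK_BITS_PER_SEC = 1_000_000_000  # 1 gigabit per second

size_mb = 8
for year in range(0, 21, 2):
    size_bytes = size_mb * 1_000_000
    upload_sec = size_bytes * 8 / LINK_BITS_PER_SEC
    print(f"year {year:2d}: {size_mb:5d} MB max block, "
          f"~{upload_sec:5.1f} s to upload at 1 Gbit/s")
    size_mb *= 2
# Final row: 8192 MB takes ~65.5 s, the "about 1 minute" quoted above.
```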
@@ -14,7 +14,7 @@ There's some problems. Particularly the case with block propagation code. The RA
The effects of bad block propagation rates have been mentioned by petertodd and others, the quick idea is that currently China has 65% of the hashrate. Not all of that hashrate is running on servers in China. Most of the Chinese mining pools have servers all around the world. Some of it is abroad and some of it is in China. The Great Firewall is a big performance problem. The purpose of the Great Firewall is to censor and control. That has some problems for Bitcoin. It's uncomfortable to have so much hashrate subject to that system. Propagation over the Great Firewall is asymmetrical.
-We wanted to figure out how big of a problem this block propagation thing is. We figured that Gavin had already done some tests with regtest, which is like testnet except predictable. I wanted to try to make things unpredictable, so we chose to use testnet. We got 21 nodes: 10 under my control, 11 from various users on reddit. Most of these were mid-range VPSes, we had a few vastly-underpowered machines, and we also had some boxes running Intel Atom CPUs. Our data collection method was different from others for measuring block propagation, lightsword and gmaxwell used stratum to see when a block is being offered by a mining pool. To monitor the debug logs, we added more debug information to see with microsecond resolution when a message was received and all of the other block propagation processes happened.
+We wanted to figure out how big of a problem this block propagation thing is. We figured that Gavin had already done some tests with regtest, which is like testnet except predictable. I wanted to try to make things unpredictable, so we chose to use testnet. We got 21 nodes: 10 under my control, 11 from various users on reddit. Most of these were mid-range VPSes, we had a few vastly-underpowered machines, and we also had some boxes running Intel Atom CPUs. Our data collection method was different from others for [measuring block propagation, lightsword and gmaxwell used stratum to see when a block is being offered by a mining pool](http://diyhpl.us/wiki/transcripts/gmaxwell-2015-11-09-mining-and-block-size-etc/). To monitor the debug logs, we added more debug information to see with microsecond resolution when a message was received and all of the other block propagation processes happened.
We collected all of those debug logs and did some analysis. The data we have got so far, I finished about yesterday at 12 pm, so there's a lot of tests that we want to do but haven't done yet. Right now we have mining data using our London server. Everything that you are going to see is coming out of a London broadcast server. I hope to have a switchover to a Beijing server soon, but I haven't done that yet. Maybe in a few days to see some more various data.
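
((A minimal sketch of this kind of log analysis, assuming a hypothetical debug-log line format; the actual format emitted by the patched nodes is not shown in the talk.))

```python
# Sketch: estimate block propagation delay from node debug logs.
# The timestamp and line format here are hypothetical; adapt the regex
# to whatever the patched bitcoind actually emits.
import re
from datetime import datetime

LINE_RE = re.compile(r"^(\S+ \S+) received block (\S+)")  # hypothetical format

def first_seen(path):
    """Map block hash -> earliest microsecond-resolution receive time."""
    seen = {}
    with open(path) as f:
        for line in f:
            m = LINE_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            blk = m.group(2)
            if blk not in seen or ts < seen[blk]:
                seen[blk] = ts
    return seen

# Propagation delay per block = receive time at one node minus the
# earliest receive time across all 21 nodes' logs.
```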
diff --git a/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain.mdwn b/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain.mdwn
index b792354..c56c873 100644
--- a/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain.mdwn
@@ -48,7 +48,7 @@ Here is my proposed miner incentive formula. This is a graph. There are many way
Rather than having 25 BTC, I could mine at 1/2 difficulty and get 1/2 the BTC twice as often. This is probably valuable to smaller miners. With the difficulty-weighted split of fees, bigger miners get more money than smaller miners. This causes us to incentivize and optimize the p2p topology to quickly propagate blocks. We are explicitly incentivizing the work between the youngest parent and oldest child, so we want to transmit things quickly. The p2p topology in bitcoin is quite random right now, and could be optimized.
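
((A toy illustration of the variance argument, with assumed numbers: expected income stays the same while payout variance shrinks as difficulty is divided down.))

```python
# Sketch: same expected reward, lower variance at fractional difficulty.
# A miner finding blocks at rate r earns R per block; at 1/k difficulty
# it finds blocks k times as often for R/k each. Treating block finds as
# Poisson events, variance of total income = (reward per event)^2 * rate.

R = 25.0   # full block reward in BTC, illustrative
r = 1.0    # full-difficulty blocks found per period, illustrative

for k in (1, 2, 4, 8):
    reward, rate = R / k, r * k
    mean = reward * rate             # expected BTC per period: constant
    variance = reward ** 2 * rate    # shrinks by 1/k
    print(f"1/{k} difficulty: mean={mean:.1f} BTC, var={variance:.2f}")
```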
-This means that smaller miners could mine without pooling funds. I am pulling p2pool into bitcoin itself. Optimizing the p2p topology makes censorship much easier. Bitcoin's chain is linear; you just add up the work in each block, which is simplistic. Which braid has more work?
+This means that smaller miners could mine without pooling funds. I am pulling [p2pool](https://github.com/forrestv/p2pool) (( [http://p2pool.in/](http://p2pool.in/) )) into bitcoin itself. Optimizing the p2p topology makes censorship much easier. Bitcoin's chain is linear; you just add up the work in each block, which is simplistic. Which braid has more work?
Getting rid of orphans forces the braid structure. Transaction volume is limited by bandwidth and CPU. Confirmation time can be much faster; we can throw out blocks as fast as possible, as fast as we can propagate blocks, which is limited by the size of the planet. We don't have to solve the NP-complete traveling salesman problem. Miner income can become much smoother and more predictable. There are many ways to insert this into Bitcoin, and I am sure we will discuss this in the future. Smaller miners don't need huge pools, which improves miner decentralization.
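
((To make "which braid has more work" concrete, a small sketch with an assumed DAG structure: total work is the difficulty summed over every reachable block, not over a single parent path.))

```python
# Sketch: total work of a braid (block DAG) vs a linear chain.
# Each block contributes its difficulty once; blocks with multiple
# parents are deduplicated, so comparing braids means comparing the
# work summed over all blocks reachable from each braid's tips.

def braid_work(tips, parents, difficulty):
    """Sum difficulty over every block reachable from the given tips."""
    seen, stack = set(), list(tips)
    while stack:
        b = stack.pop()
        if b in seen:
            continue
        seen.add(b)
        stack.extend(parents.get(b, ()))
    return sum(difficulty[b] for b in seen)

# Illustrative braid: c and d are siblings sharing parent b; e merges both.
parents = {"b": ("a",), "c": ("b",), "d": ("b",), "e": ("c", "d")}
difficulty = {"a": 1.0, "b": 1.0, "c": 0.5, "d": 0.5, "e": 1.0}
print(braid_work(["e"], parents, difficulty))  # 4.0: both siblings count
```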
diff --git a/transcripts/scalingbitcoin/hong-kong/fungibility-and-scalability.mdwn b/transcripts/scalingbitcoin/hong-kong/fungibility-and-scalability.mdwn
index 715881d..00d2d3f 100644
--- a/transcripts/scalingbitcoin/hong-kong/fungibility-and-scalability.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/fungibility-and-scalability.mdwn
@@ -16,11 +16,11 @@ Often when you are using a smartphone wallet, bloom filters are not privacy-pres
One thing that could be done to reduce the linkability coming from multiple inputs into a single transaction is to use multiple independent transactions from multiple addresses to multiple receiver addresses. This could be done at a payment-protocol level.
-One approach using the existing protocols, in a p2p format, is coinjoin. You have shared input addresses from different users; there's nothing preventing a transaction having inputs from different users. It combines inputs from different users, and maps outputs to the payments that the users would have made, combined in a single transaction that pays each of the change addresses and so on. There are restrictions. You need to provide values that create ambiguity; you gain no privacy if an input amount unambiguously matches a certain output amount. A disadvantage of this is that you need to coordinate with other spenders. Also, this is vulnerable to sybil attacks because some users might be attacking the system by offering to mix the coins, but really they are just trying to fill up the transaction set so that you are the only person who isn't the attacker. These protocols are trying to be trustless, such that the person operating the server for the p2p mechanism cannot steal the coins.
+One approach using the existing protocols, in a p2p format, is [coinjoin](https://bitcointalk.org/index.php?topic=279249.0) (([coinjoin status](https://en.bitcoin.it/wiki/User:Gmaxwell/state_of_coinjoin); see also [coinswap](https://bitcointalk.org/index.php?topic=321228.0) and [coinshuffle](https://bitcointalk.org/index.php?topic=567625.0))). You have shared input addresses from different users; there's nothing preventing a transaction having inputs from different users. It combines inputs from different users, and maps outputs to the payments that the users would have made, combined in a single transaction that pays each of the change addresses and so on. There are restrictions. You need to provide values that create ambiguity; you gain no privacy if an input amount unambiguously matches a certain output amount. A disadvantage of this is that you need to coordinate with other spenders. Also, this is vulnerable to sybil attacks because some users might be attacking the system by offering to mix the coins, but really they are just trying to fill up the transaction set so that you are the only person who isn't the attacker. These protocols are trying to be trustless, such that the person operating the server for the p2p mechanism cannot steal the coins.
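
((A minimal sketch of the coinjoin assembly step, with assumed data structures; a real implementation also needs blind coordination and per-participant signing of their own inputs.))

```python
# Sketch: naive coinjoin assembly. Each participant contributes inputs
# and desired outputs; outputs are shuffled so observers cannot match
# input owners to payments. Equal payment amounts are what create the
# ambiguity, per the restriction described above.
import random

def coinjoin(participants):
    inputs, outputs = [], []
    for p in participants:
        inputs.extend(p["inputs"])     # (txid, vout) pairs
        outputs.extend(p["outputs"])   # (address, amount) pairs
    random.shuffle(outputs)            # break positional linkage
    return {"inputs": inputs, "outputs": outputs}

alice = {"inputs": [("a1", 0)], "outputs": [("merchant", 1.0), ("alice_change", 0.2)]}
bob   = {"inputs": [("b7", 1)], "outputs": [("shop", 1.0), ("bob_change", 0.4)]}
tx = coinjoin([alice, bob])  # the two equal 1.0 payments are unlinkable
```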
-Confidential transactions, as I mentioned at the beginning, come in a number of different types; some are more directly ... for the amounts of the transactions from observers. Using some relatively conservative cryptographic primitives, the same as used in ECDSA signatures, one can hide the values of the transactions from the network. The value of the transaction is only visible to the recipient of the transaction. The values add up and the network can verify this without seeing the actual values. This mechanism provides some indirect privacy improvement. Change is more ambiguous because you won't see $20 going in and then $13.99 going out, and the rest being change; given specific amounts and the exchange rate- well, it's somewhat ambiguous. You could also pre-emptively send zero-value transactions to other users, which adds more ambiguity. Coinjoin is more effective with confidential transactions; you can combine two sets of coins with coinjoin and it's fully ambiguous who's paying whom, without any extra restrictions on the implementation.
+[Confidential transactions](https://people.xiph.org/~greg/confidential_values.txt), as I mentioned at the beginning, come in a number of different types; some are more directly ... for the amounts of the transactions from observers. Using some relatively conservative cryptographic primitives, the same as used in ECDSA signatures, one can hide the values of the transactions from the network. The value of the transaction is only visible to the recipient of the transaction. The values add up and the network can verify this without seeing the actual values. This mechanism provides some indirect privacy improvement. Change is more ambiguous because you won't see $20 going in and then $13.99 going out, and the rest being change; given specific amounts and the exchange rate- well, it's somewhat ambiguous. You could also pre-emptively send zero-value transactions to other users, which adds more ambiguity. Coinjoin is more effective with confidential transactions; you can combine two sets of coins with coinjoin and it's fully ambiguous who's paying whom, without any extra restrictions on the implementation.
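
((A toy sketch of the "values add up without being visible" property, using additively homomorphic Pedersen-style commitments; the parameters are deliberately tiny and insecure, and real confidential transactions also need range proofs, which account for most of the size overhead discussed next.))

```python
# Toy Pedersen-style commitment: C(v, r) = g^v * h^r mod p.
# Commitments multiply, so committed values add: the network can check
# that inputs and outputs balance without learning any individual value.
# Parameters are tiny and insecure; real CT uses elliptic-curve groups.
import random

p, g, h = 2**127 - 1, 3, 7  # toy group parameters, NOT secure

def commit(value, blind):
    return pow(g, value, p) * pow(h, blind, p) % p

r1, r2 = random.randrange(1, p), random.randrange(1, p)
r3 = random.randrange(1, r1 + r2)
r4 = r1 + r2 - r3            # blinding factors must balance as well

lhs = commit(13, r1) * commit(7, r2) % p   # inputs: 13 + 7
rhs = commit(15, r3) * commit(5, r4) % p   # outputs: 15 + 5
assert lhs == rhs  # verifier sees only commitments, never 13, 7, 15, 5
```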
-Another example of the tradeoff here is that the transaction size with confidential transactions is maybe 6x larger. Would we deploy something like confidential transactions within Bitcoin given that they are 5x bigger? This presents a tough choice because of constraints on block size in Bitcoin. These transactions provide more functionality per transaction; in some sense, there is an opportunity to compress them. A single coinjoin confidential transaction can actually replace multiple standard transactions. To the extent that these mechanisms are used, it reduces the overhead of things like merge avoidance; you get balance privacy, which others have tried to achieve by splitting up their BTC into multiple HD wallet addresses, and some have done merge avoidance or multiple payments to get value privacy. You end up with a smaller UTXO set because you have less output fragmentation. It's hard to evaluate the average reclaimed space, but we might get closer to an acceptable overhead or neutral overhead. So we would have to do some anonymized data collection with users to work out the actual use of merge avoidance and so on, to see whether merging confidential transactions into Bitcoin would make sense.
+Another example of the tradeoff here is that the transaction size with confidential transactions is maybe 6x larger. Would we deploy something like confidential transactions within Bitcoin given that they are 5x bigger? This presents a tough choice because of constraints on block size in Bitcoin. These transactions provide more functionality per transaction; in some sense, there is an opportunity to compress them. A single coinjoin confidential transaction can actually replace multiple standard transactions. To the extent that these mechanisms are used, it reduces the overhead of things like [merge avoidance](https://medium.com/@octskyward/merge-avoidance-7f95a386692f); you get balance privacy, which others have tried to achieve by splitting up their BTC into multiple HD wallet addresses, and some have done merge avoidance or multiple payments to get value privacy. You end up with a smaller UTXO set because you have less output fragmentation. It's hard to evaluate the average reclaimed space, but we might get closer to an acceptable overhead or neutral overhead. So we would have to do some anonymized data collection with users to work out the actual use of merge avoidance and so on, to see whether merging confidential transactions into Bitcoin would make sense.
Another method is linkable ring signatures, used in some altcoins. It's slightly better than coinjoin. The sender can choose the mixer; you don't need to coordinate with all the users. The values being mixed, each coin going into the ringsig, must have the same value. This is quite restrictive on usability. Linkability is what prevents double spending. A side-effect of this is that it is not UTXO compatible. The overhead is that the ring signature is ... you can choose 5 possible other inputs, and your signature would be about 5x larger. There might be a small saving here, which is that you are less reliant on reusable addresses for linkability.
@@ -28,7 +28,7 @@ The other more powerful fungibility or privacy mechanisms is zerocoin and zeroca
Another type of fungibility mechanism that I proposed some time ago was encrypted transactions or committed transactions. It follows the Bitcoin model of having fungibility without privacy. It provides no privacy at all. It improves fungibility. The way it works is that you have two-phase validation. In the first phase, the miner is able to tell that the transaction hasn't been spent. In the second phase, they learn who is being paid. The idea is that in the first phase, the miner has to mine the transaction, and the other one happens a day later maybe. In the second phase, all the miners learn an encryption key that allows them to decrypt the first-phase transaction, tell that it is valid, and do final-stage approval. There is a deterrent to censoring the second-stage transaction because the first one was already mined, and you would have to have a consensus rule to approve all valid second-stage transactions, or else you might orphan the entire day's work, which is quite expensive.
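
((A rough sketch of the two-phase flow as described; the hash commitment and toy stream cipher here are assumptions for illustration, not the proposal's actual wire format.))

```python
# Sketch: committed (encrypted) transactions in two phases.
# Phase 1: miners mine an opaque commitment, learning only enough to
#          know the inputs are unspent. Phase 2 (later): the key is
#          revealed, miners decrypt, validate fully, and finalize.
import hashlib
import os

def _keystream(key, length):
    return (hashlib.sha256(key).digest() * (length // 32 + 1))[:length]

def phase1_commit(tx_bytes, key):
    blob = bytes(a ^ b for a, b in zip(tx_bytes, _keystream(key, len(tx_bytes))))
    return {"blob": blob, "commitment": hashlib.sha256(tx_bytes).hexdigest()}

def phase2_reveal(entry, key):
    tx = bytes(a ^ b for a, b in zip(entry["blob"], _keystream(key, len(entry["blob"]))))
    assert hashlib.sha256(tx).hexdigest() == entry["commitment"]
    return tx  # now fully validated and approved by consensus rule

key = os.urandom(32)
entry = phase1_commit(b"pay 1 BTC to ...", key)  # mined a day earlier
tx = phase2_reveal(entry, key)  # censoring this stage risks orphaning work
```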
-A related topic with fungibility is the sort of tradeoff with privacy and identity. Bitcoin is intentionally identity-less system. It's permissionless; you can operate and use it without an account. I think that what we want to achieve for internet protocols in general is to avoid it being trivial to do ... by default... investigations are always going to be possible, but there must be an incremental cost to conducting an investigation, to retain societal norms. Some people have tried to argue against privacy features on electronic cash, on the basis that it would become too anonymous. I think this is not a concern. [There is a video in the past where I talk about this](http://diyhpl.us/wiki/transcripts/bitcoin-adam3us-fungibility-privacy/).
+A related topic with fungibility is the sort of tradeoff with privacy and identity. Bitcoin is intentionally an identity-less system. It's permissionless; you can operate and use it without an account. I think that what we want to achieve for internet protocols in general is to avoid it being trivial to do ... by default... investigations are always going to be possible, but there must be an incremental cost to conducting an investigation, to retain societal norms. Some people have tried to argue against privacy features on electronic cash, on the basis that it would become too anonymous. I think this is not a concern. [There is a video in the past where I talk about this](http://diyhpl.us/wiki/transcripts/bitcoin-adam3us-fungibility-privacy/).
There's a tradeoff with scale. Some of the advanced fungibility systems might be more deployable than I previously thought, because you reclaim some space overhead. To evaluate this, we need to consider more than just the raw size of the transactions. We also have to consider the fungibility and the savings in UTXO size and the number of transactions that would have otherwise been used, because we're introducing a more powerful transaction type that would have replaced other transaction patterns. To actually do this, we need to collect data about how common merge avoidance and coinjoin are.
diff --git a/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn b/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
index bb153a2..84cdc37 100644
--- a/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
@@ -10,7 +10,7 @@ I don't have time to introduce the idea of zero-confirmation transactions and li
I think that's a good tradeoff.
-But how much can this get us? What do we need in order to get this? Can Lightning work today? Well, check back next week. bip65 is going to be active pretty soon. bip65 is not sufficient, but it's necessary. We need relative timelocks, and the ability to reliably spend from an unconfirmed transaction, which segregated witness allows. OP\_CLTV is almost active. OP\_CSV is maybe soon.
+But how much can this get us? What do we need in order to get this? Can Lightning work today? Well, check back next week. [bip65](https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki) is going to be active pretty soon. bip65 is not sufficient, but it's necessary. We need relative timelocks, and the ability to reliably spend from an unconfirmed transaction, which segregated witness allows. OP\_CLTV is almost active. OP\_CSV ([bip112](https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki)) is maybe soon.
There are levels of lightning that we are prepared to accept. If we never get segregated witness, if we never get checksequenceverify, we can still use lightning, it just won't be as good. Channels can work with only OP\_CLTV (checklocktimeverify), but it's much less efficient ((see [here](http://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000310.html) for why segregated witness is useful for lightning)). This could be ready to go next week.
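
((For reference, a rough sketch of the classic OP\_CLTV refund pattern from bip65's motivation section, which CLTV-only channel constructions build on; the expiry height and key placeholders are illustrative.))

```python
# Sketch: the CLTV payment-channel pattern from bip65's motivation.
# Branch 1: both parties cooperate (2-of-2 multisig) at any time.
# Branch 2: after the locktime, the funder reclaims unilaterally, so an
# unresponsive counterparty cannot freeze funds forever.

EXPIRY = 400_000  # illustrative absolute block height

redeem_script = [
    "OP_IF",
        "OP_2", "<alice_pub>", "<bob_pub>", "OP_2", "OP_CHECKMULTISIG",
    "OP_ELSE",
        str(EXPIRY), "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
        "<alice_pub>", "OP_CHECKSIG",
    "OP_ENDIF",
]
print(" ".join(redeem_script))
```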
diff --git a/transcripts/scalingbitcoin/hong-kong/security-assumptions.mdwn b/transcripts/scalingbitcoin/hong-kong/security-assumptions.mdwn
index 6349d92..c1757ea 100644
--- a/transcripts/scalingbitcoin/hong-kong/security-assumptions.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/security-assumptions.mdwn
@@ -6,7 +6,7 @@ video: <https://www.youtube.com/watch?v=ivgxcEOyWNs&t=9m20s>
Hi, welcome back.
-I am a developer for libsecp256k1. It's a library that does the underlying traditional cryptography used in Bitcoin. I am going to talk about security assumptions, security models and trust models. I am going to give a high-level overview of how we should be thinking about these issues for scaling and efficiency and decentralization. Bitcoin is a crypto system. Everything about it is a crypto system. It needs to be designed with an adversarial mindset, for an adversarial setting.
+I am a developer for [libsecp256k1](https://github.com/bitcoin/secp256k1). It's a library that does the underlying traditional cryptography used in Bitcoin. I am going to talk about security assumptions, security models and trust models. I am going to give a high-level overview of how we should be thinking about these issues for scaling and efficiency and decentralization. Bitcoin is a crypto system. Everything about it is a crypto system. It needs to be designed with an adversarial mindset, for an adversarial setting.
This is probably an unfamiliar consideration for most areas of research. An example that a friend of mine uses on IRC: you could imagine asking a structural engineer, saying you need to design this building, it needs to be structurally sound, it needs to withstand weather, and further, even if anyone studies your plans in detail, they couldn't destroy the building. That mostly can't be done for most systems. It's incredibly expensive to build things that are secure against outlandish attack scenarios. A lot of society is built around preventing adversarial behavior, because of the difficulty of intrinsically preventing it.