2022-10-12

Strategies for getting stuff merged

# Introduction

I wanted to talk about things that have been leaking out over other conversations because sometimes people get frustrated that their stuff doesn't get merged. This is not a new problem. It's an issue that has been going on for a long time. It can be frustrating. I don't have the answer. This is going to be more discussion based and I've asked a few folks to talk about strategies that have worked for them. Hopefully this will lead to a discussion about copying those strategies or other ways that might work.

# Opening pull requests

For larger projects, at least as a reviewer, I think it's helpful to have something that tracks the big picture. What do you want in the end? What should it look like? Even if that implementation isn't final, has a billion TODOs, and isn't entirely robust, it at least gives an idea of what you're trying to get to. Then, break that up into smaller chunks and have somewhere to keep track of all the chunks.

If I see a refactor PR, it might be a part of a larger project but perhaps I don't see how it is related. If there's a follow-up PR, then I can see that maybe it is required for that other one, and that might motivate the why of the original PR. That helps for reviewing and helps to get things merged faster.

The other thing is, if it needs rebase, then please rebase. If I see a PR that is tagged as needing rebase, most of the time I will skip it. If I review it now, and then it gets rebased, then my review is not necessarily valid any more. Try to rebase it when the bot tells you that it needs rebase.
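
As a rough sketch (the remote and branch names here are placeholders, not anything the project prescribes), keeping a PR current usually amounts to something like:

```
# assumes a remote named "upstream" pointing at the main repository,
# and a PR branch called "my-feature-branch"
git fetch upstream
git rebase upstream/master
# resolve any conflicts and re-run the tests, then update the PR branch
git push --force-with-lease origin my-feature-branch
```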

Q: How do you pull in the kind of people for review? What's your method of outreach and what kind of conversations are those?

As you may have noticed, we have 400+ pull requests open. A lot of things fall through the cracks. As new PRs get opened, people tend to forget about the old ones. It's helpful to ask people individually, if you feel comfortable doing that, like on IRC or on Signal. Ask before you Signal someone for the first time, of course. Usually I ask on IRC. Events like this are nice because you can go talk to people and ask them for their Signal or just email them. Reach out individually.

The IRC meetings are also great. We have the high priority for review thing. You can ask for your PRs to be on there. It would be better if we removed the one PR per person limit because sometimes people have multiple items that are high priority. Part of it is, don't put everything on high priority because then it becomes less useful. I go through the high priority list regularly myself.

In general, if you think your PR is ready to be merged, then you should say so, either in the PR (though it may get lost in the comments) or on IRC. Personally, I'm more likely to see a comment on IRC that a PR is ready for review.

One of the things that starts to work against bigger projects is the sorting order of PRs. One option would be to get the project off of github. But when the default is to show the newest thing first, that's often not particularly helpful. If you're looking for things to review, start by ordering by things that have not been commented on recently. It feels good to apply effort to something that really needs it. Also, leaving a comment moves it forward in terms of last comment. Traditional thread bumping.
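
For example, one way to do this (this is standard GitHub search syntax rather than anything project-specific) is to filter the open pull request list with a query that sorts by least recently updated:

```
is:pr is:open sort:updated-asc
```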

Coming more from an academic background, writing a motivation could be helpful. Spend a few moments explaining what the goal is, why are you doing this, or what's the high level idea. Sometimes people go to the mailing list and describe the coolest idea ever but then they jump into describing something obscurely specific. What's the goal really?

With some PRs recently, I just post a comment asking why if I don't see a clear motivation. What's the point of your PR? If you look at such a PR and you don't really understand first what's going on then you're much more likely to just move on. If I'm looking for stuff to review, and if I see a lot of background and motivation then I see something that I know I have to study. If I see something with only a one sentence description, then I just move on because I don't have time to go grab all the context.

For larger items, there's a large contingent of people who will just stay away. There's intimidation involved in working on something long-running or with a high number of commits. "73 commits? Damn it!". There have been a couple of strategies to address that. Someone organized a taproot review club. It brought in some outsiders. There are some things that came out of that, but it was a ton of effort and it seemed more like a socialization project. The biggest outcome of the taproot review club was giving exposure to ideas and getting people to talk about it. I think most of the review comments that came from that were not all that interesting, and that's fine, that's still a good outcome. People felt like their voices were heard, too. I felt it was more inclusive. There were other contributors that showed up that I hadn't met before. Giving people outlets to be heard other than twitter is a good exercise for the big stuff. There was a curriculum that was put together. About 20 people did some legitimately deep study, like 7 weeks studying it. I felt more confident in taproot just because of that.

# More advice

I haven't been around as long as other people in this room. A few things that I found helpful... this one people underestimate: just have a group chat with people who are birds of a feather. Let's say you're working on mempool stuff. Well, get the mempool people together and then you can ask them strange questions and beg them for review in a more private forum instead of a more public forum.

Another one was taking advantage of synchronous communication. I feel like often we are on github doing asynchronous communication there. There's something to be said, for example, when I see a review from someone that I don't totally understand, for literally wasting his time and dragging him onto a call to try to hash it out. I've found that this route is much more efficient than trying to understand what the other person is saying over the span of four afternoons of messaging each other back and forth. You can also just get on a phone call.

I've also found it is good sometimes to refresh the PR's page really often. Sometimes when people are commenting on it, they call it back. It's like simulating a synchronous process on top of an asynchronous commenting process. Also, sometimes people edit their comments, so look to see if there's anything new or interesting in their edits.

Separate what you want to achieve from how you want to achieve it. That was really important for me because in C++ there are like 1000 ways to do the same thing. The way that you choose may not be the way that everyone else wants. Maybe the way that other people suggest is not the way you want. It doesn't really matter, because you're just trying to achieve a specific goal. Separating these two prevents you from getting married to a certain way of doing things, in favor of just getting it done. I think that's the better way to look at it.

I think, if I could embarrass you a little bit, one thing you did particularly well is the motivation part. For example, the videos you did introducing the topics, and also the guix video and the libconsensus video. Those videos travel well. I think that's worked out well. The contrast with others working on long-running projects, who haven't kept up that momentum, raises the question of why some things move forward and some don't: what are the things that have proven to work over time?

# assumeutxo

I think there's a distinction between the techniques you use and the subject matter itself. By its nature, multiprocess or assumeutxo just carries a lot more risk than deglobalizing a bunch of variables. I love the work, it's super important. But really, that's a stream of refactoring changes or introducing a parallel build process, whereas ripping out guts is inherently harder to review and more risky. What you guys are saying is all gospel, but with assumeutxo, I've done a ton of documentation and a bunch of talks were recorded. But that's a necessary, not a sufficient, condition to move projects forward.

The other thing to look for is concrete deliverables that pay off. It helps to make good deliveries at a good cadence. As a reviewer looking at assumeutxo, it looks good, but how many steps before it affects me as a developer or a user? It's a problem; some of us want a dopamine hit or deliverables that are tangible. Pure refactoring work like on the kernel is less visible. It perhaps makes developers' lives better, quality of life matters, like removing globals, and it pays off in increments.

What's funny is the velocity in the early days of assumeutxo, when the changes were refactoring; the deglobalization stuff started there. When we had to have multiple chainstate instances, we had to deglobalize a bunch of stuff. Those refactor changes moved pretty quickly because it's equivalent to verifying... I think the maintainers are way more amenable to refactoring changes. It's understandable, because it's easy to check off. But as soon as you get into the core feature logic, like assumeutxo, a lot of people fall off and aren't willing to sit there and review. The project as a consequence definitely slowed down and was harder to stick with.

How many child PRs are you out, post-refactoring, from getting to a feature that a user would use? I think the first 8 PRs were refactoring, and from there we probably had 5 or 6 that have been heavy-duty logic. Maybe four more PRs to go?

In the assumeutxo case, a lot of it really does come down to lack of reviewer expertise. I personally don't feel comfortable ACKing things that I'm not working on. This circles back to the working group mentality and updating those people and using those resources. This is a case where it helps to hop on a call. I've done that before. I just had no idea where to start with a PR: hop on a call and help each other out. Telegraphing that you're open to that is another option. Being able to hop on a call motivated me because in the back of my mind I knew that if I got stuck this guy was willing to help me, which helped me take the time to build the pre-requisite knowledge to be useful. If you have the ability to take calls, making that known, might attract more reviewers.

# Review priorities

Well, I first wanted to respond to the idea of not being familiar with a part of code and not wanting to review a PR. I understand. It's reasonable. But it should not stop you from reviewing something that is very important. We were all new to Bitcoin Core completely at some point. I've reviewed PRs where I knew it was extremely important so I spent 2 weeks to figure out how header syncing works, in order to review the PR. But I did that, because I thought it was important. It's easier said than done. I think we should be prioritizing our review based on what's critical to the project, what fixes things that are broken, what is going to kill us if we don't fix this now? It's not about the number of commits or who wrote it or something or how much we know about the subject.

# More advice

Q: You're newer to the project, but had success early on. You're now in the middle of package relay which is a beast. How do you think about that?

I'm lucky to be working on something that is non-controversial. It's been pre-concept-ACKed by a lot of people. I have a few ideas of what's worked. It's mostly like what people have said today. Momentum is a big one: if someone comments and you address a code change within 5 minutes, that turns into everyone having their mental caches warm so we can get things done quickly.

I think package relay has been broken down step-by-step; there were multiple stages. At each stage, there were about 3 PRs, like refactoring, and then follow-ups for any nits. I understand I'm very lucky that I have a lot of reviewers. I think that pattern kind of makes sense. Even just splitting out the refactoring commits into something like a scripted diff, or a "move this out of that" commit, so the "meat" commits are just green, adding these couple of lines. I know I'm lucky, but that seems to work well. Every time I open a PR, even if it's not package relay, I already have an idea of who's going to review it and what they would say. Sometimes I ping them ahead of time as well. If you open a PR and you don't know who is going to review it, then how are they supposed to know that they should be reviewing it?
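
For reference, Bitcoin Core's scripted-diff convention embeds the command that generated the change in the commit message, so reviewers can re-run the script on the parent commit instead of eyeballing every hunk. A minimal sketch, using a purely hypothetical rename:

```
scripted-diff: rename CFoo to CBar

-BEGIN VERIFY SCRIPT-
# CFoo/CBar are made-up names used only for illustration
sed -i 's/\bCFoo\b/CBar/g' $(git grep -l 'CFoo' -- 'src/*.cpp' 'src/*.h')
-END VERIFY SCRIPT-
```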

If something is not justified in the PR description, we should feel comfortable, maybe not closing it, but publicly commenting to say that it's not ready to be a PR because it's insufficiently described.

I agree with you about reviewing things outside of my comfort zone. I'm going to review anything, even if I haven't seen the code before I'm going to try. Maybe I won't see things as deeply as experts in that area of code. One thing that might help, and this is just a personal suggestion, if someone asks me to put eyes on code then I'll look at it no matter what. I think it is good to look outside of the usual areas you look in. I find it rewarding to look at new things. I'm more about breadth than depth in one area. Personally I enjoy that and changing things up. I don't know how much weight my review will have, but I'm willing to look at it. I'd like to encourage people to review different PRs and diversify their interest.

My approach to reviewing PRs... first I have an idea of when I think it should go in, or what stage it is at... There's a set of PRs that I'm committed to reviewing and continuing to review until a decision is made either because it's merged or the PR gets closed. I don't think it makes sense to review a PR once and then forget about it. When we review PRs, we should see the whole thing through. We're part of the team of getting that functionality merged into Bitcoin Core or fixing something. We're part of a team. We're not just doing drive-by reviews. That's my philosophy and how I think we should do things. We're a software project.

# Responding to comments

In terms of responding to comments, I think it's better to respond to comments as quickly as possible. I know I haven't always done that, but I try to. That's helpful to reviewers because things are still fresh in their mind. If you respond to a comment three weeks after I wrote it, I won't necessarily remember why I wrote it or what the context is. If you respond within a day, then I will probably still have it in memory. When I am looking for things to merge, if there are a lot of unaddressed comments, that might indicate that it's not ready to be merged. Github's UI can hide ACK comments but show the nits... so it's good to address those comments. You can address them by saying "no, I'm not going to do that"; that's usually a reasonable response.

It can be frustrating to get a drive-by review where someone asks a good question, and you craft a thoughtful response, and then there's crickets from the reviewer. Now other reviewers are going to see this as unaddressed and it can make the PR look less interesting to review.

Once you have addressed a comment, make sure you actually push your changes.

One thing I've noticed in this project, compared to other open-source projects I've participated in... if you come up with a change that is incrementally better than what exists currently, then that can be merged. Even if that approach is not perfect, you can iteratively improve upon it. In the midlife of the project, there was this sense of people just wanting commits, and we had to do some gatekeeping to keep people from coming in merely to get their name into Bitcoin Core. That was an auto-immune response. But now a lot of people fixate on a perfect approach, and as a result the change sits in stasis and sometimes new contributors get really frustrated and walk away. I think maintainers should try to be conscious of: is this just an improvement over what we have? Maybe merging that and then dealing with the little bit of churn from nits after the fact might be worth the emotional bandwidth of the people that are participating.

Some of that isn't on the maintainers, though. It's on the reviewers. All of that is on the reviewers, not the maintainers. If you have the ACKs lined up, then it's not like the maintainers gatekeep. Sometimes there are people whose things will be merged even if there are outstanding nits, but it's not uniform. This is another topic we can get to at some point. There's this role between reviewers and maintainers; some people are good at this, where they will summarize the review activity so far and say, hey, this is where I think it is in the process and I think it is close to being merge-ready. As we get more maintainers, I think maintainers should do that more. At the same time, either we come up with a weird intermediate role that people serve, or maintainers step up. Why not make the author do this? Well, it's not seen as objective. You want a view that is closer to objective than the author's; a dispassionate third-party observer might be the right person.

Sometimes it's helpful when another person carries the torch of a PR by commenting and reviewing positively and also opening additional PRs that are based on that work. Handing off work is another thing to try. Get attached to the goal, not to the implementation. That pattern seems to work well. An important part of that is someone saying, yeah, I'm okay with this being merged. If I see someone saying they disagree with the approach, that seems like a blocker, and I just wait to see how it gets resolved. But if someone says "I disagree with this but it can be merged", then that's good.

# Merging things that need follow-ups

I don't think we should merge stuff and then merge follow-ups right away. We shouldn't merge stuff that should be changed entirely in a follow-up PR. It seems bad. Why are we merging this if we could do it better right now anyway? It feels weird. I would agree we shouldn't merge bad logic. Will I put more weight on who said it or on what was said? If we agree with the logic and it accomplishes a goal, then we should merge the logic. If you're motivated enough, then you can restyle it. If you really feel that way, you should open up an alternate PR. You're commenting on their work. If you feel that strongly that it's bad and there's a better way, then may the best PR win. But why can't we work on the same team? I agree it's not the best way to do it, but if you're not going to ACK it and are just going to be an obstacle, don't do that. Making a comment about how you should change something is a lot easier than actually making the change and doing the PR. It took me 45 seconds to write this comment, but it will take you a full day to do the actual work. It's a lot less work to make a comment than to actually propose and implement the alternative. Hopefully it wouldn't derail a lot, and I would prefer to get the logic merged and then do the follow-up. Sometimes you can find strategies that preserve momentum more than anything else, and the incremental approach is great for that. Being a nice reviewer is another strategy, like offering a commit that is rebased on your work or a tested diff; that helps keep a lot of momentum and makes it feel more collaborative instead of just saying "hey, you could do this" in a comment.

Q: High standards, but how much bad code has actually been merged? Outside of consensus.

There have been a few bad merges into the wallet before. Sometimes by me, written by me, sometimes merged by me, sometimes not by me. It does feel like there are places where we can have more relaxed review standards because merging something there is not going to have a large impact on users. There are other places where yeah, we should definitely review these PRs really stringently.

Another thing not brought up yet is where we are in the release cycle. Getting stuff merged early into master lets it run on master for a full cycle, and it seems like big building blocks that didn't get into the last cycle should get in early in a release. But that doesn't always seem to be the case. Or maybe you can socialize that ahead of time to remind people. Your strategy for opening or reviewing PRs might change depending on where we are in the release cycle. Mentioning it in the dev chats as a reminder, hey, we're about to go into this phase, could help.

# The role of refactor PRs

Maybe we should have a moratorium on refactors, because refactors tend to cause a lot of things to need rebase. They can invalidate other PRs that were already almost ready to merge, which we hadn't gotten around to merging because nobody told us about them. But we do a lot of refactoring and it's not always clear that the refactoring is even useful. A lot of it is coming from maintainers at this point, honestly. It's not always clear; like with libbitcoinkernel, the bulk of that is essentially refactors. There's a lot of miscellaneous "let's clean up the includes" or shifting things around. Refactors that have a motivation toward a useful goal like libbitcoinkernel make sense, but refactors to clean up some style somewhere, I don't know. How about we don't waste time on those and potentially invalidate a few other PRs and cause rebasing work? Maybe we should have a moratorium on refactors. I know gmaxwell has a position on us doing refactors, something to the effect of "you do too many"...

Is there a time and a place for refactors where it makes more sense to endure that churn? Or should we only be focusing on big stuff, and that's where effort should go? How do we get new developers to just jump into the beasts of the project? It's hard. Startups often have "get our house in order" days, where after a big crunch to release they have a 2 week period to do refactors because they are less likely to cause problems. There was a suggestion in 2018 to have a "refactor week" where you get all your refactors in during the same week.

Historically, we have gone through many cycles of the project agreeing to have some sort of window for refactors. It has never amounted to anything. That doesn't surprise me. It doesn't mean it can't work, but I think the most important thing brought up here is momentum. When a PR spends 6 months before getting merged, it's usually not because it isn't making progress; it can have a variety of causes. When something is ready to be merged, I believe it should be merged. We can think about some sorts of changes being better early in the window or later in the window, but I fear that, given how important momentum is, such restrictions will actually worsen momentum. That's my observation.

Traditionally we have tried to do refactors right after a release. I actually disagree with this idea because it messes with backports. I think both things have been suggested, like trying refactors early in the window or late in the window. My suggestion is, let's not do any refactors at all. Refactors are risky. It's risky to try to fix them too.

It's good that we're all in this room, but there's a larger ecosystem that this needs to be communicated to. I'll propose refactors as a meeting topic on IRC. Someone is working on refactoring, and it's important because it enables better unit test coverage. If you come up with a refactoring change and bundle it with a bunch of new tests, that's easy. Or even just a refactor with the follow-up being the changes that needed the refactor. Or motivating the refactor by enabling something that we want, versus just refactoring for style. Or demonstrating, with a parent changeset that carries things to completion once we make an architectural change, what we get; that makes the benefit very tangible. When you talk about outright refactors, I get the sentiment, but I think you have to put a finer point on it. Wasn't the addrman stuff all refactors early on, but then there were internals we could get rid of? There was a strong motivating argument for it.

"No non-motivated refactors."

I think we have an idea of what is a good refactor and what's a bad refactor. It should all be motivated. Sometimes there might be a secret vulnerability fix in there. That's a good point. Generally we make those things known to maintainers and they get merged quickly. If you could relax your anti-refactor stance so that security fixes can get in, then that would be nice.

This afternoon there will be a discussion on developer quality of life. This afternoon we will also be looking at PRs to either close or merge. I'm open to hearing arguments about which ones are ready for merge. I'll put it in my TODO list. But it's just PRs... if we have time, we should look at issues, but I doubt we will have time.

# Closing your own PRs

I would like to encourage people to close their own PRs if it looks like they aren't going anywhere. If you don't care that much about it, and everyone else doesn't seem to care that much about it either, closing it helps make the list of things that we're looking at smaller.

Maintainers could also provide a framework for what's required to get something merged. I think that's reasonable to ask for, just some sort of guidelines. It's a little bit unreasonable, though, when there are 400 PRs open; it's not reasonable to expect maintainers to keep track of all of those. So yes, I think we should be closing more. If we close a PR, then you can't reopen it. So if you close your own PR, that's better, because then you can reopen it in the future.