I have been finding it challenging to put my thoughts about code reviews into a linear format. I've realized that it is because code reviews are a part of the broader and context-specific work of engineering a productive software development process.
Rather than talk about code reviews on their own, today I'm going to try something different: I'm going to talk about a specific heuristic you can look at. Heuristics like this aren't metrics you can optimize, but they can offer insight into the health of your software development process and suggest tools you might try in response.
The first thing I look for:
How often does the team discard a set of changes and go on to solve the problem a different way because of the discussion in the code review?
The answer should be more often than "never" and less often than "usually".
If The Answer Is "Usually"
In cases where most approaches to changing the system don't work the first time, the team is either missing norms, skills, communication or an architecture that supports their current work.
Code reviews can be a tool to address all of those issues, but if you see lots of PR churn, they aren't enough on their own.
Too Many Silos
If information about a change is surfacing for the first time during code review, it is often a signal that there aren't enough conversations happening.
I have never seen this problem occur when the team is pairing or doing ensemble sessions with any regularity, but those techniques aren't the only ways to boost communication. The first thing to try is resolving whatever is keeping people from talking. The technical review process may be too heavyweight for people to be willing to go through it. Maybe tickets are handed down so tightly scoped that developers are simply executing on pseudo-code, or the team isn't talking through user stories together to identify technical requirements. Perhaps a scarcity mindset has taken root and people are worried about taking the time for casual discussions, or worse, about being seen to take the time for casual discussions. Cutting scope, adding slack, and creating specific spaces to talk can all speed up development by reducing rework.
Whatever the reason, when code reviews are the only venue where architectural input and technical requirements come to light, it is not surprising that most PRs don't happen to satisfy them purely by luck. The solution is to seek out times and spaces for widespread conversations, in whatever formats work for your team.
Unresolved Disagreements
The other root cause I've seen is a lack of convergence.
If the team doesn't agree on what properties a good architecture has, it creates churn. It is also frustrating when different reviewers provide dramatically different feedback: developers give up on finding a good solution and just try to satisfy whoever is keeping them from merging today. If reviewers expect people to write code in a specific way, or demand changes without explaining what purpose those changes serve, it creates a sense of learned helplessness, where good code and actual agreement feel impossible.
Even when everyone is trying to be collaborative, you can still end up in a situation where there is no common agreement. There may be too many private discussions, or discussions within separate cliques who come to very different conclusions. If a team isn't confident in its ability to engage in productive conflict, people may avoid public discussion specifically to avoid stepping on an undetected third rail.
Regardless of the reason, techniques like event storming, ensemble programming, or question-only reviews can help a team converge on a shared culture of practice. If those would be new to your organization, even a quick five-whys review at the team meeting of whatever came up in that week's code reviews can be a good start. It gives the team a facilitated place to practice conflict resolution, and it spreads whatever was learned to the whole team.
If The Answer Is "Never"
This is worse. When they don't include architectural feedback, reviews aren't able to build quality into your code. There are always things that only become obvious when you take a step back from the painting and look at the bigger picture. If those either aren't being remarked on or aren't being fixed when they are noticed, the code that ships will be worse than it could be.
There are a lot of reasons those opportunities for improvement might be missed, but I'm going to talk about three here: pressure to merge, a lack of safety, and missing aesthetic feedback.
Pressure To Merge
If developers either don't want to take the time to make code better, or don't feel like they have permission to make the code better, the software will suck.
This is distinct from the situation where there are real reasons for delivering shitty software: in those cases the whole team should be aligned on that vision, on the trade-offs made in the particular PR, and on a plan for how those problems will be addressed once the short-term value opportunity has been realized. If you are going from short-term value opportunity to short-term value opportunity, that is still a problem, but those situations do sometimes come up.
But usually the reasons businesses push developers to write shitty software are fake. Invented deadlines, arbitrary OKR review cycles, misuse of velocity metrics, low-trust relationships between product and development, promotion systems that incentivize only short-term feature delivery, and so on and so forth. The list of ways businesses manage to shoot themselves in the foot when it comes to their investment in tech is long and varied, each one making future development more expensive than it needed to be, increasing the fixed cost of the system, and hurting developers' health and well-being.
As a business leader, if you discover that people feel pressured to merge PRs as soon as possible, that is within your sphere of control. Dig in on why, and figure out how you can set up the incentives so developers believe in their soul that you want them to build software in the way that will be most valuable to the business. Enough companies prefer legibility to business value that it's a non-trivial task, but if you get it right the competitive advantage is enormous.
Hidden Ignorance
Sometimes we are missing a community where it is safe to express ignorance or confusion. If people feel like they have to come to the table with declarative statements and concrete answers, we miss the opportunities to clarify the semantics of our system and create graceful architectures.
There is no one right way to write a piece of software, and a good-enough-to-function way may still be awkward enough that it's a missed opportunity. Ironically, building a culture where it is safe to ask questions or express opinions even when they are unlikely to change the code under review is what opens the door to discovering the long tail of major changes with the greatest payoff to the business.
We spend a lot of time and money explaining code, reading code, and communicating about code. Any change we want to make starts with understanding the system as it is now. We make that work visible, and correctly incentivize creating easy-to-explain architectures, by having people ask questions in code reviews rather than struggling alone to find the answers. When people believe it is possible for the software to make sense, and feel entitled to architectures that can be explained to them, we end up with software that is reliable, extensible, and maintainable.
Unfortunately, this useful work is socially risky.
To build the most business value, we are asking developers to set aside some of their most basic social instincts. There are some tricks that make that easier, like responding to any question with "thank you for asking!" (and meaning it). Engineering leaders can model admitting ignorance whenever they spot an opportunity. We can also make an effort to respond to questions we don't know the answer to with enthusiasm, curiosity, and collaborative learning, so that asking a question doesn't put anyone on the spot.
It can help to just explain these dynamics. "If you have a question about the code, it is just as useful for the author of the code to know that you had that question as it is for you to get it answered" is one of my go-to phrases. I also like telling the story of how, as a first-time manager, I had a report with 30 years' experience, and in his first week he asked me more questions than any intern has ever been able to come up with. I tell that story and challenge new interns to prove me wrong. So far none of them have, and in the process their code reviews have been amazing feedback on where my code can be more accessible and exemplary.
Sometimes those social fears are also well-founded. Facilitated ensemble sessions are particularly helpful as a place to debug any interpersonal dynamics that are contributing to our lack of useful ignorance, and as opportunities to improve the situation through public modeling and correction.
Missing Aesthetic Feedback
I have a separate post exploring why aesthetic feedback is valuable. Here it is enough to say that without taking aesthetic feedback into account, your architecture is unlikely to randomly evolve into something suited to your program's particular context. We are trying to optimize subjective properties of the system: no linear measure can provide more than the roughest of estimates.
There are a few common reasons for aesthetic feedback to be missing: programmers might not have learned to articulate their aesthetic reactions, they might fear the social consequences of sharing them, or they might have been explicitly instructed to keep them to themselves.
They Were Told To
Some time back Microsoft published a series of blog posts confidently declaring incredibly misguided advice about how to provide "useful" code reviews. These posts were based on surveys that didn't look at any of the outcomes we would actually care about, like code quality, learning, or the value of the software produced. Instead, the stated goal was minimizing the time-to-commit of each PR. You could minimize that completely by simply eliminating code reviews, but instead they wanted to keep the reviews while discarding most of the value they can provide. Thus their advice was to never say anything positive or subjective, or indeed to comment on anything other than a specific error in the code.
This advice does successfully minimize conflict. It just also minimizes the usefulness of code reviews. It's basically like economists who write editorials telling you to give your family cash for the holidays: it makes sense only if you completely misunderstand the point and don't believe human relationships have value.
Unsurprisingly, when you forbid people from talking about architecture in code reviews, code reviews don't help you build good architecture. The up-front design process that approach substitutes in their place, in addition to involving significant duplicate work and relying on people guessing right, can't do the whole job. Up-front design decisions are made before the system itself has a chance to provide feedback on how well its architecture supports (or doesn't support) a particular change.
This advice has spread far enough that I believe it is usually worth addressing directly. You can't assume that the developers you hire will know all the roles code reviews play in building quality. Make sure your code review philosophy is highlighted when on-boarding new members of the team, and incorporate it explicitly into your leveling guide. Have people in positions of power and leadership state frequently that the goal of code reviews is to cultivate practices that build good software, not to find bugs. And then provide feedback and support to engineers who are treating it as a bug hunt instead.
If you are working with someone who believes in this advice, you can try pointing them to these blog posts. Alternatively, you can arrange to do some synchronous code reviews. This allows you to observe their non-verbal communication as they read the code, and by asking questions of them you can get at the information they were taught to withhold. It also helps build a relationship of trust: when you embody curiosity about their opinion and gratitude when they share it, it tells them that you understand you are all in this software project together.
They Don't Know How
Unlike most professions that involve design, software development rarely teaches aesthetic critique: the majority of computer science programs don't cover it at all. Software development management is often focused on what is measurable, like velocity, rather than on engineering a context where it is possible to write quality software, and managers often aren't reading enough code on a day-to-day basis to have a honed aesthetic sense of their own. Our profession has grown so fast that even in communities where aesthetics are prioritized, we haven't aligned on vocabulary and practices.
The way we get better at talking about aesthetics is to do it, so the solution here is creating opportunities for people to practice. One technique is to read a book like Implementation Patterns as a group: it highlights the choices we make as programmers and the design principles we can use to guide them.
Another fun exercise is reading pieces of open source code and talking about them like they are poetry: what was the author trying to achieve? What techniques did they use to communicate that to the reader? How successful were those techniques? How else could they have gone about it, and how would that have changed the code?
They Don't Want To
The thing about subjective feedback is that it is subjective. It is entirely possible for someone else to disagree with our aesthetic judgement and neither one of us to be wrong.
That is hard for some people to deal with, especially people used to being able to get an A+ by finding the right answer. Ideally, in those situations we become curious about our differences. What context do I have that you don't? What do you value that I wasn't taking into account? But we have probably all seen situations where the resulting engagement was nowhere near that productive. One or two bad experiences can scare people off from sharing their judgement.
Leaving positive comments on reviews can help break through that impasse and start building trust. Realizing that positive reactions are just as much feedback on our architecture as squick reactions are makes talking about aesthetics less socially risky. Once people are in the habit, they can start pointing out when they notice an opportunity for more symmetry, or following up when the semantics of an object aren't clear to them.
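To make that concrete, here is a small, hypothetical sketch of what that kind of comment can look like. The class and method names below are invented purely for illustration, not taken from any real review:

    # Hypothetical code under review; all names are invented for illustration.
    class SessionStore:
        def open_session(self, user_id: str) -> str:
            """Create a session for the user and return its id."""
            ...

        def destroy(self, session_id: str) -> None:
            """End a session."""
            ...

    # An aesthetic review comment on this code might read:
    #   "I like how open_session says exactly what it gives back. Would pairing
    #    it with close_session instead of destroy make the API easier to guess
    #    without reading the docs? The asymmetry tripped me up for a second."

Nothing in that comment demands a change: it names a positive reaction, a reaction to an asymmetry, and the property behind both, which is exactly the kind of feedback that becomes cheap to give once positive comments are already normal.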
It is also useful to note that receiving critique is a skill to be practiced, just like giving it is. If people on your team struggle with being enthusiastic about feedback, consider highlighting the advantages of developing these skills. To progress in technical leadership, being able to write accessible, inclusive code that makes junior developers productive is vital. It is a lot easier to develop that skill if you can incorporate vague feedback delivered by someone new to offering it.
If The Answer Is "Rarely"
It might mean everything is great! Then again, we might be in a mix of dynamics from both failure states at once.
This is why these are heuristics, and not rules or laws or metrics to be measured. There isn't a specific number of PRs I aim for my team to completely throw out. Instead, the most informative signals are qualitative. I keep an eye out for situations where someone would like to rework something but doesn't feel like they can, to see if I can address the blockers and pressures they are feeling. I also keep an eye out for people reworking things purely out of people-pleasing instincts, without personally agreeing with the changes, and encourage deeper engagement in those cases.
Regardless
Don't give up on stepping back from the painting and taking the whole picture in.
We can spend all the time in the world trying to guess the right architecture before we start coding, but until bytes meet silicon it is all just a guess. Unless a product is so stable and slowly evolving that you reliably guess correctly, it is better for both the software and the company to make architecture changes cheaper once you know what will be asked of the code, rather than cutting off optionality from the start.
Process can make it impossible to write quality code. Tools like code reviews can help us pursue that quality. But in the end only developers can build reliable, maintainable, valuable software. The company benefits when we advocate for the conditions that let us do our jobs well.