Peer reviews seem to be a source of seething hatred for many developers. I can sort of understand why (no one likes rocks being pitched at something they created), but they can actually be very beneficial if done correctly. This post will lay out the case for doing peer reviews as well as an approach to performing reviews that I happen to like (stolen from my friend, 'Poprythm'). If you have any suggestions for ways to improve this approach, please share in the comments! Also, if you're not doing peer reviews, I'd like to hear why.
Peer reviews may seem like a hassle, but they actually serve several purposes. First, they help identify and address deficiencies in the code that will become maintenance issues later. By finding them early (or at least earlier), the cost of fixing them is somewhat reduced. Identifying some deficiencies, such as code that is completely unreadable or that is unnecessarily complex, will help save time and money (and also sanity) down the line.
Another benefit of code reviews is that they can help spread knowledge. Both the person doing the review and the original author can learn new things via the process. The original author gets feedback on ways to improve their code, such as alternative algorithms, syntax shortcuts, and design improvements. The person doing the review also gets several of those same benefits (the author may have solved a problem elegantly in a way that the reviewer had never thought of), but they also gain experience in reading code. The ability to read and understand code, especially someone else's, is an underrated skill. Becoming proficient at it may help you to analyze and troubleshoot problems more easily in the future.
Performing the review
When performing the review, follow a few simple rules. First, do not make any code changes at all. When you spot something that needs to be addressed, place a comment in the code prefixed with "PEER:", and explain the issue with the code that follows it. For example:
//PEER: This will cause a run-time exception!
if (value == null)
    logger.Log("Found a null value: " + value.Name);
(Sadly yes, that is a real example I ran into from many years ago…)
Second, while no one likes a critic, it is important to identify any and all issues. If the author used a variable name that doesn't make sense (such as using 'l' for a logger instead of a more descriptive name), go ahead and note it. It may seem trivial on its own, but trivial issues add up.
Third, (obviously) try to be constructive with any criticism. If you identify a problem, try to explain why it is a problem and offer a couple of ways to address it.
Peer reviews are somewhat subjective, and everyone tends to look for slightly different things. However, here are the things you should absolutely be looking for:
- Failure to adhere to coding guidelines or best practices.
- Untested code. TestDriven.NET includes a code coverage analysis tool, so use it to examine the code in question. Note any areas that are not tested. While high test coverage doesn't necessarily equate to high-quality code, a lack of coverage is a strong warning sign of low-quality code.
- Tests that don't actually test anything (better known as smoke tests). Tests should not just make sure that something runs without throwing an exception, they should also validate the state of things after the method completes. This means they should be testing the return value (and its properties!) and verifying any other state changes the method made.
- Poorly designed code. This is somewhat subjective, but there are a few easy things to watch for, such as duplicated code (copy-and-paste is BAD), things that are difficult to test, and things that are coupled to a large number of other classes.
- Obvious inefficiencies. Are they using a List where a Dictionary or HashSet would be better?
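To illustrate the point about tests that don't actually test anything, here is a hedged sketch in NUnit style (the `Account` class and test names are hypothetical, and the NUnit package is assumed to be available) contrasting a smoke test with one that verifies state:

```csharp
using NUnit.Framework;

// Hypothetical class under test, for illustration only.
public class Account
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount) { Balance += amount; }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void Deposit_SmokeTestOnly()
    {
        // Weak: this only proves Deposit doesn't throw an exception.
        var account = new Account();
        account.Deposit(100m);
    }

    [Test]
    public void Deposit_IncreasesBalance()
    {
        // Better: this verifies the state change the method was supposed to make.
        var account = new Account();
        account.Deposit(100m);
        Assert.AreEqual(100m, account.Balance);
    }
}
```

The first test would pass even if Deposit silently did nothing, which is exactly the kind of false confidence a reviewer should flag.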
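As a sketch of the last point (hypothetical data, C# top-level statements assumed): a List scans every element on each Contains call, while a HashSet hashes directly to the answer, which matters once the collection grows or the lookup sits inside a loop.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical scenario: checking whether an ID has already been processed.
var processedList = new List<int> { 1, 2, 3 };
bool seenSlow = processedList.Contains(2);     // O(n): scans the list each call

var processedSet = new HashSet<int> { 1, 2, 3 };
bool seenFast = processedSet.Contains(2);      // O(1) on average: hashed lookup

Console.WriteLine(seenSlow && seenFast);
```

For a handful of items the difference is negligible, so this is worth flagging mainly when the collection is large or the lookup is repeated.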
Aside from that, suggest changes anywhere that you see improvements. Doing so will improve the code base (making life for everyone easier) and will help the original author to become a better developer.
Once you are finished with the review, go ahead and commit the changes to source control. The author can then review your comments by searching for "PEER:" in their code. As each issue is addressed, the author should remove the tag from the code. If there is a question about a comment, the author should ask the reviewer for clarification. If there is a disagreement (which will happen from time to time), a senior developer should be consulted.
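Because the tag is plain text, the author can find every outstanding comment with an ordinary recursive search. A minimal sketch, assuming a Unix-like shell and a hypothetical scratch directory (the file name and comment text are made up for illustration):

```shell
# Set up a hypothetical source file containing one review comment.
mkdir -p /tmp/peer_demo
printf '//PEER: rename this variable\nvar l = GetLogger();\n' > /tmp/peer_demo/Example.cs

# List every outstanding review comment with its file and line number.
grep -rn "PEER:" /tmp/peer_demo
```

Most IDEs (including Visual Studio's Find in Files) can run the same search, so the author can work through the list in whatever tool they prefer.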
If applied correctly, peer reviews can help you become a better developer, help save your company money, and make life easier for everyone in the future. Don't forget, if you're not doing peer reviews, let me know why not in the comments.
[quote]Peer reviews seem to be a source of seething hatred for many developers. I can sort of understand why […][/quote]
I can’t. If you don’t like peer reviews, you don’t want anyone to see your code. If you don’t want your peers to see your code, that can only mean one thing – you are not proud of your code. In that case, you should do your own ‘peer’ review of the code, and find all (or most of) the problems yourself first. Make sure your code is good enough for the review, and you will take pride in your code during reviews.
Also, we do our reviews quite differently: the reviewer reviews all code changes while the original developer is right there. Usually no problems are found. If they find a minor problem (such as changing a string for display or formatting the code a bit differently, removing unreachable code, etc.), that file is not committed and is fixed immediately after the review. If a more serious problem is found, nothing (or at least nothing relevant) is committed and a review is done again after it is fixed. Otherwise, we commit our changes to source control. Are there any benefits to doing it your way?
Our team has a couple of things working against it: we’re distributed (one guy is in a completely different region of the country), and everyone works crazy schedules, so there’s not really a lot of overlap when everyone is in the office at the same time. That is a problem in and of itself, but we’re a small team with probably 75% part-time staff and little funding, so we make do the best we can.
Given that constraint, doing the review asynchronously lets us get things reviewed quickly without trying to sync schedules up so that the author and reviewer can be sitting together/IMing together during the review. Personally, I also tend to be a bit more thorough if I have time to digest something on my own. If someone is sitting there with me, I tend to try to hurry. The only other advantage that I can think of is that, by committing the peer review to source control, you have a record of issues that were identified. This may be important if you ever end up with a chronic offender/problem developer.
Aside from that, I think doing the reviews as you described is probably a better way to go about it.