Opinionated code reviews
02 May

Code reviews are a good idea for many reasons. They encourage people to share their opinions on how code should look at its prettiest, and also what they like to see when they're called in to look at the ashes of a dead production app.
These are my opinions on reviewing code with a view to swearing as little as possible if I have to see it again when stressed.
I try to be quite relaxed about code reviews, but there are a few basic things I check before even looking at the code. If any of these fail, we're not even at code review stage, we're starting a pair programming session. Hopefully these are all uncontroversial.
If you're using a code review tool that hooks into a build system and source control, it's likely that these are all covered by the time you can even create the code review in the first place. If you aren't using a code review tool - consider doing so. When I was introduced to Crucible, there was a learning curve, but once you've started climbing it, the view is quite impressive.
Just to note, I'm talking here about the kind of code reviews that happen at the end of a development cycle, at some point in the process where a new release is being prepped. I'm not talking about the kind of reviews done on small git pull requests, or after a short bout of particularly intense code cranking.
Can I check it out? Which version should I get? Is that all of it? Can I see the commit comments? This is very much the lowest possible gate to stumble at. If it isn't in source control, there really is nothing to review.
Code is only code if it can be translated into instructions that make a computer do something. If the compiler/interpreter can't even parse what you're saying, or the build system can't turn that code into a package that can be executed, then you don't have any code to be reviewed - you just have a bundle of text files that neither human nor computer can understand.
If it was in source control, and could be checked out and built into an actual app, the next step is to see that app do something. Once I've seen it checked out, built, and running, then we know we're looking at actual code for an app that at least does something. I'm not talking about a full demo or acceptance test run, just a glance at it in action.
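If I had to script that gate, it would look something like this - a rough Python sketch where the repo URL, tag name, build command and app name are all made-up placeholders for whatever your project actually uses:

```python
import subprocess

def run(cmd):
    """Run a command, failing loudly if it fails - just like the review gate should."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Gate 1: is it in source control, and can I get the version under review?
run(["git", "clone", "https://example.com/our-app.git", "review-checkout"])
run(["git", "-C", "review-checkout", "checkout", "release-2.3"])
run(["git", "-C", "review-checkout", "log", "--oneline", "-n", "10"])  # the commit comments

# Gate 2: does it build into something a computer can actually execute?
run(["make", "-C", "review-checkout"])

# Gate 3: does the built app at least start and do something?
run(["./review-checkout/our-app", "--version"])
```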
So you have code, it builds and runs. Does it do what you think it should? How can you tell? By showing me that the tests you wrote for it all pass. Of course you did write tests, and actually run them against the code that we're looking at running here, so you'll have some test code and results available if we want to review them too.
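Nothing elaborate is needed here. Even a token test file like this (the function and the numbers are invented for illustration) proves something was actually run against the code under review:

```python
import unittest

def apply_discount(price, percent):
    """Stand-in for the code under review."""
    return price * (100 - percent) / 100

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200, 10), 180)

    def test_zero_discount_changes_nothing(self):
        self.assertEqual(apply_discount(99, 0), 99)

if __name__ == "__main__":
    unittest.main()
```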
Once we're out of the starting blocks, reviews can be lengthy tedious jobs. So it's nice to have some help along the way. I appreciate anything the authors or co-reviewers can do to make it easier and faster to give a thumbs up and go home.
No review comment I make is an automatic fail - I try to avoid saying any review ends in a fail, because the code already does what the authors wanted by the time we get to a review. This is a chance for them to explain the what and why of the code and for the reviewers to say what they would prefer to see and why. Obviously there could be blocker problems noticed/raised - but the authors should have a chance to correct and resubmit, rather than have the review ended with a "FAIL!" stamped over it.
If there's a really big blocker problem, then it may be best for the project if the review is failed, allowing the authors to take things back a stage and making the whole project team aware of the problem. However, if that's the case, there will need to be some project management discussions too and that's well out of the realm of code reviewing.
In a perfect world, if the code hasn't changed since the last review, it doesn't need reviewing again. I'd prefer to focus on the changed code, rather than have the review consist of the full codebase for a given revision tag. With tools like Crucible, you can add/omit files from a review easily, and it's also fairly easy to diff the current version with any older revision to see what has changed.
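Even without a review tool, plain git gives you the same thing - assuming the last reviewed version was tagged, something like this (with made-up tag names) lists just the files that changed:

```python
import subprocess

# Hypothetical tag names: list only what changed since the last reviewed release.
subprocess.run(
    ["git", "diff", "--name-status", "release-2.2..release-2.3"],
    check=True,
)
```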
If we're doing a full review then by all means add the whole codebase to it, but at least warn all the reviewers first - because this is probably going to take hours.
The authors know the code better than the reviewers, so it's helpful if they can act as guides. They know where the knotty, ugly parts are - and (as long as they know code reviews won't be used to beat them up) they should be keen to point them out. One of the ways to do this with a review tool is for the authors to also add comments to the review. Alternatively, they could give a quick "map" for the review, pointing out the files/directories where the gremlins are likely to hide, and which parts are just dull grunt work.
That bit in brackets deserves a bit more emphasis. Code reviews can get pretty ugly if either the authors or the reviewers believe the review or their comments will get thrown back at them.
If people come to treat code reviews as an ordeal or in any way competitive, the reviews just become a negative aspect of the workplace. The authors become defensive, comments become point-scoring and the whole process gets gamed.
The previous point was about the codebase as a whole - this one says a similar thing about individual code files. I'm not a stickler for coding style guides, apart from one rule:
Be consistent! Both within each file and across all files in the same language.
Other than that - if I can read it easily, I'm happy. If I have to work at reading it, I'll add a comment to whinge a bit, then pretty-print it either with my editor/IDE or by hand, just for local review. I have my own preferences for block layout, whitespace, naming, etc. - but I try to be as lenient as I hope others would be when reading my code.
Having said that, my experience has been that a lot of review comments and discussions are triggered by styling preferences. Personally I think that's not necessarily a bad thing - a team can end up with a shared set of accepted styling (and variations) that means they know what to expect in each other's code and won't reel back in horror if they see the code lying in those ashes I mentioned at the beginning.
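To make that one rule concrete, here's an invented Python fragment showing the kind of inconsistency that earns a comment, and the version that wouldn't:

```python
# Inconsistent: three naming conventions in one file, and nobody knows
# which one to follow when they add the next function.
def fetchUser(UserID): ...
def get_account(account_id): ...
MAXRETRIES = 3
default_timeout = 30

# Consistent: pick one convention and stick to it.
def fetch_user(user_id): ...
def get_account(account_id): ...
MAX_RETRIES = 3
DEFAULT_TIMEOUT = 30
```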
This relates to my earlier comment about not wanting to fail any code under review. If there is a problem that really needs fixing before the code can go any further, I'll often accept a "gaffer tape" fix for immediate release, as long as a matching bug report is also raised to do a proper fix in a future release. Ideally, comments should also be added to the review, the source control commit and the code itself, referencing the bug number and noting that this is an accepted temporary fix.
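The tape itself should be clearly labelled in the code. A hypothetical example (the bug number and the upstream quirk are both invented):

```python
from datetime import datetime

def parse_order_date(raw):
    # GAFFER TAPE: temporary fix for BUG-1234 (invented number, for illustration).
    # The upstream feed sometimes sends dates with trailing whitespace; strip it
    # here until the feed itself is fixed properly in the next release.
    return datetime.strptime(raw.strip(), "%Y-%m-%d")
```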
However, at the next review, if there's evidence from the comments that the gaffer tape is piling up, that's a sign there's a larger problem going on here. Technical debt is building up, the code is not being maintained and review comments are not being acted upon. So new gaffer tape might be ok, but if I see old gaffer tape, I get nervous.
The comments that open up design discussions are the really interesting and valuable parts of a review. This is where we get to share the knowledge, learn new things and understand how the rest of our team thinks and works. At the very best, someone is about to either have their horizons broadened a bit, or their opinions changed a bit.
When this kind of comment appears, it may be worth taking the discussion outside of the formal review, because this is likely to be a bigger conversation than you want in a bunch of text boxes. This is also the part where it's really important to remember you're all on the same team and that people are allowed to do things different ways.
These are the bits where reviews give the most long-term value. Discuss the code, the design and any changes to be made. Get some agreement. As a reviewer, be lenient. As an author, be open to suggested changes.
Depending on the purpose of the review, "done" could mean various things, and it's up to the reviewers and authors to agree on what that means for each review.
For an initial review of a new app, there could be an awful lot of code, all new, that no one deeply understands yet, because it hasn't seen live action. But also, this might be the best shape the code ever sees, so maybe it's worth spending the time making it as pretty as can be.
For a minor incremental release, the authors and reviewers may be old friends with the code, in which case the review might be a scan and a nod, with everyone knowing what they're looking at.
For any reviews I see, the most fundamental question I'm answering is:
How much would I swear at this code and its authors if I had to find out why it's smouldering in a heap in Live on Monday morning?
and the second, bonus question is:
Having looked at that code, how likely do I reckon it is that I'll find out the answer to the first question?
If the answers are both "meh, not very" - that's a big thumbs up. What we're all trying to avoid is the answer "HAH! No way should that ever see Live!"