Round Robin Review
Training Plan
This document describes a plan for a training session designed to teach people the "round-robin" technical review process. It describes the topics to be covered by the trainer, and points that need to be presented for each of those topics.
Introduction
Why review? Everyone makes mistakes. Even when we try to catch all our mistakes, we miss some. Having other people look for mistakes dramatically increases the chance that the mistake will be caught.
The whole point of reviews is to produce a better product. Check those egos at the door. It's not your code, it's the product's code.
Why This Process?
Why not just hand it to someone to look over? Well, an informal private review like that is often a good idea. It's not the same sort of thing, though. The formal review produces a reliable piece of information.
What about a less formal but still public process? Personal experience with one such process shows that the round-robin review is about as effective, and twice as fast. And, like any good process, this one includes measurements so that you can compare it to any other you want to try. If you think you've got a better one, measure it and see.
What about a more formal one, like Fagan's inspections? Well, if you can get the training to do them right, try it out. The important thing is to measure the results and know which one(s) are more effective.
Expected Effects
The whole purpose of a technical review is to reliably answer the question: "Will this product do the job it's supposed to do?"
Reviews are very helpful for management. They provide a reliable look at the quality of the product. Schedules and progress reports alone aren't enough (Brooks: "Coding is '90 percent finished' for half of the total coding time" [Brooks 1999]). Reviews make it much easier to track the actual rate of progress. Also, should a producer leave, the rest of the team will have a good deal of knowledge about that person's work, making it somewhat easier to cope.
Reviews are helpful for the product. They catch defects more effectively than any other activity that can be done after the producer thinks they are done (much better than testing). They also reduce rework, since the defects are caught sooner. Note, however, that reviews really shouldn't replace testing, since testing catches some types of errors more effectively than reviews.
Reviews are helpful for the producers. They provide a very effective means of communication about the product. They provide a forum to spread the best practices. They provide timely feedback on the quality of the work, so that the producer can correct any bad habits before doing too much work.
Pitfalls
Why doesn't everyone do them? What can make them work badly?
Lack of Management Support
What happens: Managers don't allow reviews to succeed.
Why it happens: Who knows? Reviews appear to cause the product to take longer to produce (they don't, but they make the early part of the production take longer, in return for a larger time savings later). Managers aren't invited, and may resent not being allowed to watch their subordinates in action. Managers may not believe the reviews are worthwhile.
How to avoid the pitfall: Don't even attempt reviews if the management doesn't support them, or won't allow the reviews to be private. Convince the management that reviews are effective and worthwhile using the immense amount of data showing them to be so.
The Identification Effect
What happens: Producers think of the product as "mine", and don't want to let anyone look at their work.
How to avoid the pitfall: Don't let producers hide their work. Remind them that the product is public and the reputation of the team as a whole rides on the product as a whole. Make it clear from the beginning that producers will be expected to submit work for review, and that they're expected to do so fairly often. Count work as completely unfinished until it's been reviewed.
Killing the Goose
What happens: People use the technical review for rating the producers.
Why it's tempting: The data is excellent. If reviews are working well, they're the best window a manager has into the quality of the product. If the manager can correlate the results with an individual, the manager knows what quality of work that person is producing.
Why it fails: As soon as people think they're being rated on the results of the review, the review results will stop correlating with the quality of the product, and will become political. There's just too much incentive to say "I like Joe, so I'll make sure he gets that raise he wants," or "Karen gave me a bad review result last time, so I'll make sure she gets one this time."
How to avoid the pitfall: Make it impossible to use reviews to rate people. Don't publish the results of a single review (always aggregate reviews to include enough different producers, and only publish the aggregate results). Don't let managers sit in on a review that involves any of their subordinates. Make sure managers take this seriously; even jokes about using reviews improperly have been known to seriously impact the process.
Professional Reviewers
A review can be difficult to take. Reviewers who never have to submit work for review themselves have a strong tendency to misunderstand their role and even take on an air of superiority. Make sure all reviewers either have to submit work for review, or participate as an expert in something other than the technical merit of the work (e.g., a user might participate in a review, as an expert in what users want from the product).
Use experts. Use trained review leaders. Use gurus. Just don't create full-time reviewers who are "above" review.
Process Description
This describes a type of technical review that is sometimes called a "round-robin review" (though other, very different review processes also go by the same name).
When and How to Call a Review
When someone decides that they've finished creating a piece of work, it's time to call a review. The product should pass all tests that the team expects of that kind of product (e.g., passes the daily build and smoke test, or passes the spelling checker). The producer should be unable to efficiently find more defects.
The work under review should be of a size that makes the review meeting last between one and two hours. If the work is too small, it may be combined with another small work; if it is too large, it should be broken into more than one part for review. The division may be by portion of product (e.g., top half, bottom half), or may be by type of review (e.g., review conformance with standards, review conformance with requirements).
The producer announces that there will be a review, usually two business days from the time of the announcement. The announcement is sent to all people who are expected to attend. The producer also chooses a review coordinator from among those expected to attend. The announcement gives the time and place of the meeting, and either contains the material to be reviewed and any supporting documents, or gives an easily accessible location for them.
Preparation
Each reviewer takes the appropriate amount of time and examines the work. Reviewers should make notes of any defects, from the most minor details all the way up to major conceptual problems.
If the reviewer is unable to properly review the product, the reviewer should drop out of the review. The reviewer's attendance is considered to mean that the reviewer has examined the product, and has formed an opinion of the technical merit of it.
If the producer notices a defect after calling the review meeting, he can do one of two things: either raise the issue himself at the review meeting, or call off the review (with an explanation that the defect was serious enough to require full rework). Don't waste the reviewers' time; pick the option that wastes the least time.
Review Meeting
The meeting should be held in a private location, free from interruptions. Reviewers can attend telephonically without much difficulty, once they understand the process well enough. Remember to take time zone differences into account when scheduling the meeting.
For the meeting, these roles should be filled:
Review Coordinator: Keeps things moving, reminds people of the rules. May be a reviewer, a producer, or a person with no other duties.
Producer: The person who made the product being reviewed.
Reviewer: Someone who's commenting on the product.
Scribe: Records the issues raised (often the producer and scribe are the same person).
There should also be a historian, who does not need to be present at the meeting. The historian records the results of the reviews and aggregates them so that they can be presented to the rest of the organization. The historian must be a person who will refuse to release results that can be traced to a single producer.
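To make the historian's rule concrete, here is a minimal Python sketch of the aggregation gate; the record layout and the five-producer threshold are illustrative assumptions, not part of the process.

    # Sketch of the historian's aggregation gate (illustrative only; the
    # record layout and the five-producer threshold are assumptions).
    def aggregate(reports, min_producers=5):
        """Return aggregate results, or None if the sample is small
        enough that results could be traced to a single producer."""
        producers = {report["producer"] for report in reports}
        if len(producers) < min_producers:
            return None  # refuse to release traceable results
        return {
            "reviews": len(reports),
            "defects_found": sum(r["defects"] for r in reports),
        }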
Presentation
The producer tells the reviewers anything that he feels is important to understanding the product, but that isn't covered in any of the other material available. Since nearly every product should be understandable in context, the presentation isn't usually necessary.
Round Robin Commentary
Each reviewer gets to present their comments on the product. The reviewers present one comment at a time. The person to the left of the review coordinator starts, the reviewer to their left goes next, and so on, around the table (telephonic attendees are assigned an arbitrary order).
Each reviewer must present a positive comment first. The positive comment must be true, but it doesn't have to be substantive. It's OK to say "The font used to present the code helped make it easy to read" or "English was a good choice of language for the US version of the user manual." It's better to say something that matters, but it's not required. Any reviewer who can't say something nice about the product is excused from the rest of the review.
Reviewers then present their issues, one at a time. An issue should be a fairly simple statement of a defect in the product. The issue is assumed to point out a real problem, even if the actual problem isn't the one the reviewer described. The producer isn't allowed to argue about whether it's a defect, whether it's the right kind of issue, or anything else. At the very least, if the product caused someone to think there's a defect, there's a problem with the expectations that have been raised for the product. The producer (or any reviewer) is allowed to ask for clarification.
Reviewers shouldn't mention duplicate issues. "Me too!" isn't particularly helpful; an issue is valid whether one or five people spotted it. Reviewers are allowed to present related issues, or raise a different view of the same defect as an issue. It's worth clarifying why the reviewer thought the issue wasn't quite the same as the one presented earlier.
An issue may not mention a solution to the problem, not even if it's obvious. This is hard for technical people, but it's important. The review coordinator must be especially vigilant for this when reviews are new; it becomes self-enforcing after a while.
If a reviewer has no issue to raise, the reviewer may pass when their turn comes around. Reviewers may pass one round and raise an issue on the next. The round robin commentary continues until all reviewers have no more comments to make.
Reviewers may make a positive comment in their turn, as well. The positive comments are often used to point out something that might otherwise be lost in correcting an issue.
The scribe records the comments, in enough detail that he can produce the necessary report. If possible, the scribe should use a flip chart and post each page on the wall as the review progresses.
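The turn-taking rules above amount to a simple loop. Here is a minimal Python sketch of the ordering and termination rules only; the data layout is an assumption for illustration.

    from collections import deque

    # Each reviewer is modeled as a name plus a queue of prepared issues,
    # seated in turn order. One issue is raised per turn; a reviewer with
    # nothing to say passes, and may still raise issues on a later round.
    def round_robin(reviewers):
        raised = []
        while any(issues for _, issues in reviewers):
            for name, issues in reviewers:  # one full round of the table
                if issues:
                    raised.append((name, issues.popleft()))
        return raised

    seating = [("Ana", deque(["error message is unclear"])),
               ("Ben", deque([])),
               ("Cho", deque(["missing bounds check", "typo in comment"]))]
    for name, issue in round_robin(seating):
        print(f"{name}: {issue}")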
Recommendation
After the commentary is over, the reviewers discuss and agree on a recommendation for the product. The recommendation can be:
- Accept: to accept the product as is with no changes.
- Re-Review: to reject the product and require another review before acceptance.
- Simple Check: to reject the product and require a simple conformance check against any issues raised before acceptance.
If the third option is chosen, one or two of the reviewers should be designated as the people to do the conformance check.
If the reviewers can't agree on a recommendation, then the review's recommendation is the least favorable recommendation from an individual reviewer or producer. If a simple check is recommended, but the producer or a reviewer finds that the rework is more extensive than anticipated, they may upgrade the check into a full review.
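The "least favorable wins" rule is easy to state precisely. Here is a minimal Python sketch, assuming the three recommendations above, ordered from most to least favorable:

    # Rank the outcomes from most to least favorable (illustrative only).
    FAVORABILITY = {"accept": 0, "simple check": 1, "re-review": 2}

    def combined_recommendation(individual_recommendations):
        """The review's recommendation is the least favorable one given."""
        return max(individual_recommendations, key=FAVORABILITY.__getitem__)

    # combined_recommendation(["accept", "simple check", "accept"])
    # -> "simple check"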
Post-Review Work
The scribe writes up the review results, and sends a copy of this report to the producer and the historian (who adds it to the results). If the reviewers aren't responsible for the product, then the reviewers sign the report. The report should explain the comments in enough detail that the producer can figure out their meaning a few weeks later.
The producer either celebrates, or makes the necessary changes (or both).
Metrics
Why Measure?
The review takes time away from "productive work." It's supposed to be helpful, in that it reduces the amount of work needed. How do we know it's working well enough to be worth it?
Also, there are many ways to do reviews. How do we know that one way is better than another?
What Should Be Measured?
We can measure the product. For programmers, one popular measure is lines of code, often written LOC (or KLOC, for thousand lines of code). It's probably worthwhile figuring out how to count without including blank lines, comments, etc. Another, perhaps better, way to count the size of a program is in function points.
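As a concrete example, here is a minimal Python sketch of such a counter; it is an illustration only, and handles line comments but not block comments or comment markers inside strings:

    # Count lines that are neither blank nor pure line comments
    # (illustrative only; block comments and strings are not handled).
    def count_loc(path, comment_prefixes=("#", "//")):
        loc = 0
        with open(path, encoding="utf-8") as src:
            for line in src:
                stripped = line.strip()
                if stripped and not stripped.startswith(comment_prefixes):
                    loc += 1
        return loc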
We can measure the amount of time spent on the review. Account for all time: preparation time for each reviewer, preparation time for the producer (but don't include time spent on things that should have been done anyway, like polishing the product), time spent in the review meeting, including time spent rescheduling if it's necessary, time spent preparing any reports.
We can measure the number of defects that are caught by the review. If possible, categorize the defects by how necessary it would be to correct them had they been caught later, and by how much effort it would be to correct them later.
We can estimate and later measure the number of defects that were not caught by the review. These defects should be categorized in the same way as the defects that were caught. They're a reasonable check on how well the reviews are working.
We could measure the defects correlated against individual producers, but, as discussed under "Killing the Goose" above, that would be a bad idea.
Individual producers can use the review process to measure the effectiveness of various individual work practices ("if I do it this way, I get about 10 defects per KLOC, and if I do it that way, I get about 5 defects per KLOC but spend an extra 10% of time").
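For concreteness, the rates discussed in this section reduce to two small formulas. A minimal Python sketch (the function names are mine, not part of the process):

    # Two rates from the measurements above (illustrative only).
    def defects_per_kloc(defects, loc):
        """Defect density: e.g., 12 defects in 2,400 LOC -> 5.0 per KLOC."""
        return defects / (loc / 1000.0)

    def review_effectiveness(caught, escaped):
        """Fraction of known defects the review caught; 'escaped' is
        estimated at first, then refined as defects turn up later."""
        return caught / (caught + escaped)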
Alternate Processes
Inspection
An inspection is a review in which the reviewers are checking for a specific set of potential defects. The reviewers are given a checklist of the defects to inspect for.
Inspections generally can cover a larger amount of material in the same amount of time, at the cost of missing the defects that don't fit on the checklist. Accordingly, having good checklists is critical for this type of review.
Unstructured Meeting
If the meeting is handled without the round-robin structure, some other way must be used to make sure everyone's issues are raised. The simple method I've tried tends to be about as effective, but twice as slow.
Walkthrough
A walkthrough is a review in which the meeting consists of a presentation by the producer. Issues are raised as the producer mentions the context of the issue in the presentation. Often, walkthroughs are conducted with minimal prior preparation on the part of the reviewers.
Walkthroughs are most useful when the producers are the only qualified people to make a technical evaluation of the product. The reviewers can force the producers to re-examine decisions, but aren't able to identify defects with authority.
Lecture
A lecture is a rather extreme case of a walkthrough: the lecturer is assumed to have the full knowledge of the defect-free product, and the audience is assumed to have nothing to contribute.
Lectures are useful for education, but don't provide much assurance of quality.
Dispersed
A dispersed review omits the meeting, instead having each reviewer make comments independently on the product. Often, the reviewers use a software tool to gather the comments and make them available to the producer. The tool may also prevent the product from being released (merged to the main code base, sent to the manufacturer or publisher, etc.) until enough reviewers have approved.
Depending on the company culture, this may work more or less effectively than a review with a meeting. Meetings take time, but also let the reviewers learn about defects they missed, and apply a subtle pressure on the reviewers to be diligent.
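A minimal Python sketch of the release gate such a tool might apply (an assumption for illustration; real tools differ in their rules):

    # Hold the product until enough reviewers approve and no rejection
    # is outstanding (illustrative only; real tools vary).
    def may_release(approvals, rejections, required_approvals=2):
        return not rejections and len(approvals) >= required_approvals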
Practice Review
Hand out an example product (watch instructions, C program, or Java program); conduct review. Allow five or ten minutes for the preparation phase.
The teacher plays the part of the review coordinator, with an imaginary producer. Select a scribe from the students.
Checklist:
- Declare product done
- Prepare
- Start meeting
- Presentation
- Positive comment from each reviewer
- Issues from each reviewer, until no more from anyone
- Recommendation
- Generate report
Conclusion
The round-robin review is a reasonably efficient and reliable way to ensure that a product has few defects. As with any human process, much of the benefit depends on who's doing the work, and how much care they take.
Remember, this whole process is supposed to make the product meet the quality target, in a shorter time than any alternate process. Measure the thing to make sure it does that, and change or discard it if it ever stops doing what it's supposed to do.
Copyright © 1995, 2000, 2002, 2019, Leif Bennett. All rights reserved.