Optimizing your code review process starts with looking at wait time, which in Lean terms represents waste. Wait time starts when processing time ends, and it approximates the span from the point when a developer asks for feedback until the PR has been merged. A few events factor into identifying the point when a developer asked for feedback, such as the request-review box being checked on the PR. Whichever of these events happens first, the clock for the wait time starts ticking.
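To make that definition concrete, here's a minimal sketch of the calculation. The field names (`review_requested_at`, `marked_ready_at`, `merged_at`) are hypothetical, chosen for illustration, and not the report tool's actual schema:

```python
from datetime import datetime

def wait_time_days(pr: dict) -> float:
    """Approximate wait time: from the earliest feedback-request
    event on the PR until the PR is merged."""
    # Collect whichever feedback-request events are present
    # (e.g. the request-review box being checked on the PR).
    feedback_events = [
        ts for ts in (pr.get("review_requested_at"),
                      pr.get("marked_ready_at"))
        if ts is not None
    ]
    clock_start = min(feedback_events)  # whichever event happens first
    return (pr["merged_at"] - clock_start).total_seconds() / 86400

pr = {
    "review_requested_at": datetime(2023, 5, 2, 9, 0),
    "marked_ready_at": datetime(2023, 5, 1, 17, 0),
    "merged_at": datetime(2023, 5, 5, 12, 0),
}
print(f"wait time: {wait_time_days(pr):.1f} days")  # 3.8 days
```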

In the overview section of the report, you'll notice Wait Time.
For a given PR data set, you can see the cumulative amount of wait time, expressed in days and months. Cumulative wait time is calculated as the sum of the wait times of all PRs. It can happen that it took you, say, six months to merge 100 PRs, while the cumulative wait time can be, and unfortunately most often is, a multiple of that. The reason is that with async code reviews, wait time tends to account for the majority of a PR's lead time. Since the changes introduced through PRs represent potential value for our customers and our business, that potential value is stuck in our system of work. Unclogging the system of work gets things out of the door sooner, and I'm yet to find a company that doesn't desire that.
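The arithmetic is a plain sum, which is exactly why the cumulative figure can dwarf the calendar time: PRs wait in parallel, so their individual waits stack. A quick sketch with made-up numbers:

```python
# Hypothetical per-PR wait times, in days (illustrative numbers only).
wait_times_days = [4.0, 11.5, 2.0, 30.0, 7.5]  # one entry per PR

cumulative_days = sum(wait_times_days)
cumulative_months = cumulative_days / 30  # rough month approximation

print(f"cumulative wait time: {cumulative_days:.0f} days "
      f"(~{cumulative_months:.1f} months)")
```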

Of all the metrics you see, wait time for PRs with no engagement tends to be the most obvious candidate for improvement. When a PR is raised and there's no engagement whatsoever while it sits waiting, it's pure waste.
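To see how much of the cumulative wait time is this kind of pure waste, you could filter for PRs that sat waiting yet received no engagement. A minimal sketch, reusing the hypothetical field names from above:

```python
# Hypothetical records combining the two signals; field names are
# assumptions for illustration.
prs = [
    {"id": 201, "wait_time_days": 9.0, "non_trivial_comments": 0},
    {"id": 202, "wait_time_days": 3.5, "non_trivial_comments": 2},
    {"id": 203, "wait_time_days": 14.0, "non_trivial_comments": 0},
]

# Pure waste: the PR sat waiting, and nobody engaged with it at all.
pure_waste_days = sum(
    pr["wait_time_days"] for pr in prs if pr["non_trivial_comments"] == 0
)
print(f"wait time with no engagement: {pure_waste_days:.1f} days")  # 23.0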

In the Engagement section of the overview, you can also see the share of PRs with no engagement. For this particular data set, 38% of the PRs had no non-trivial comments.
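If you wanted to reproduce that share from raw PR data, it's a straightforward ratio. A minimal sketch, again with hypothetical field names (here, `non_trivial_comments` counts review comments beyond nitpicks like "LGTM" or typo fixes):

```python
# Hypothetical PR records; the 38% above comes from the report's own
# data set, while this toy data is purely illustrative.
prs = [
    {"id": 101, "non_trivial_comments": 3},
    {"id": 102, "non_trivial_comments": 0},
    {"id": 103, "non_trivial_comments": 0},
    {"id": 104, "non_trivial_comments": 1},
]

no_engagement = [pr for pr in prs if pr["non_trivial_comments"] == 0]
share = len(no_engagement) / len(prs)

print(f"PRs with no engagement: {share:.0%}")  # 50% for this toy data
```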
