7 Continuous Delivery Metrics to Track
(Updated July 18, 2018)
How prepared is your organization for continuous delivery? Michael Bowler’s article on the Agile Advice blog lays out which continuous delivery metrics should be tracked as an organization begins the move toward continuous integration and delivery. These aren’t metrics to track once the organization is already practicing continuous delivery, but rather measurements to take beforehand, to gauge whether or not the organization is ready to begin such an ambitious initiative.
Here are the 7 continuous delivery metrics you should be tracking before you start:
1. Lead time to production
The clock starts ticking on this one at the point where enough is known about the work to begin, and it stops when the work is live in production. On this metric, Bowler points out that if lead time is “excessively long then we might want to track just cycle time.” This is because, “When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long.” Measuring cycle time in the interim, the time between when work starts on an item and when that work meets the team’s definition of ‘done,’ “can be a good intermediate measurement while we work on reducing lead time to production.”
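As a rough illustration (not from Bowler’s article), both numbers can be computed from timestamps a ticketing system already records; the field names and dates below are hypothetical:

```python
from datetime import datetime

# Hypothetical ticket timestamps; a real tracker (Jira, GitHub Issues, etc.)
# would expose these under its own field names.
ticket = {
    "ready_for_work": datetime(2018, 6, 1),   # enough is known to begin
    "work_started":   datetime(2018, 6, 11),
    "done":           datetime(2018, 6, 25),  # meets the definition of 'done'
    "in_production":  datetime(2018, 7, 9),   # live in production
}

# Lead time: from "enough is known to begin" to "live in production".
lead_time = ticket["in_production"] - ticket["ready_for_work"]

# Cycle time: from "work started" to "done" -- the intermediate
# measurement suggested while lead times are still months long.
cycle_time = ticket["done"] - ticket["work_started"]

print(f"lead time:  {lead_time.days} days")   # 38 days
print(f"cycle time: {cycle_time.days} days")  # 14 days
```

Tracked over successive releases, the gap between the two numbers shows how much of the delay sits outside the team’s hands-on work.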
2. Number of bugs
“Shipping buggy code is bad and this should be obvious,” writes Bowler. “Continuously delivering buggy code is worse.” If there are quality issues with the code, then continuous delivery is only going to get more bugs out into the world faster, causing more headaches later on.
3. Defect resolution time
Again, quality code is a prerequisite to successful continuous delivery. One way to measure quality, along with the team’s commitment to quality, is to track the life of bugs. “I’ve seen teams that had bug lists that went on for pages and where the oldest was measured in years,” Bowler observes, adding that, “Really successful teams fix bugs as fast as they appear.”
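One simple way to put numbers on this, sketched here with made-up data, is to track the age of every open bug and watch both the oldest and the average:

```python
from datetime import date

# Hypothetical open-bug list with the date each bug was reported.
open_bugs = {
    "BUG-101": date(2018, 1, 5),
    "BUG-214": date(2018, 6, 20),
    "BUG-230": date(2018, 7, 10),
}

today = date(2018, 7, 18)
ages = [(today - reported).days for reported in open_bugs.values()]

print(f"open bugs:   {len(ages)}")
print(f"oldest bug:  {max(ages)} days")          # 194 days
print(f"average age: {sum(ages) / len(ages):.1f} days")
```

A shrinking maximum age is the signal Bowler describes: bugs getting fixed as fast as they appear rather than accumulating for pages.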
4. Regression test duration
Tracking the time it takes to complete a full regression test “is important because we would like to do a full regression test prior to any production deploy.” For teams with manual testing practices, this metric will be measured in “weeks or months”; for teams who rely primarily on automated tests, in “minutes or hours.” The argument Bowler is putting forth here is that regression testing helps to reduce the risk of a failed deployment, and shortening this cycle as much as possible will help increase the probability that continuous delivery will be successful.
5. Broken build time
“We all make mistakes,” writes Bowler, but “the question is how important is it to the team to get that build fixed?” This is, again, a measure of the team’s commitment to quality as a prerequisite to continuous delivery. If a team lets a build stay broken “for days at a time,” continuous delivery won’t be an achievable goal.
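A hedged sketch of how this could be measured from a CI server’s build history (the event log here is invented): sum the time between each break and the fix that follows it.

```python
from datetime import datetime

# Hypothetical CI event log: (timestamp, status) pairs in chronological order.
events = [
    (datetime(2018, 7, 16, 9, 0),   "pass"),
    (datetime(2018, 7, 16, 11, 30), "fail"),  # build breaks
    (datetime(2018, 7, 16, 13, 0),  "pass"),  # fixed 1.5 hours later
    (datetime(2018, 7, 17, 16, 0),  "fail"),  # breaks again...
    (datetime(2018, 7, 18, 10, 0),  "pass"),  # ...and is left broken overnight
]

broken_seconds = 0.0
broke_at = None
for ts, status in events:
    if status == "fail" and broke_at is None:
        broke_at = ts                          # start of a broken interval
    elif status == "pass" and broke_at is not None:
        broken_seconds += (ts - broke_at).total_seconds()
        broke_at = None                        # interval closed by the fix

print(f"total broken-build time: {broken_seconds / 3600:.1f} hours")  # 19.5
```

An overnight break like the second interval is exactly the “days at a time” pattern Bowler warns about, and it dominates the total.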
6. Number of code branches
Continuous delivery advocates trunk-based development, so that everything flows through the same pipeline. Counting the number of branches in your code “now and tracking that over time will give you some indication of where you stand” in your preparedness to begin continuous delivery. Bowler adds that “if your code isn’t in version control at all then stop taking measurements and just fix that one right now.” No advanced practice like continuous integration or continuous delivery is possible without managing and tracking code in version control.
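As a minimal sketch, the count can be taken from the captured output of `git branch -r`; the branch names below are made up for illustration:

```python
# Captured output of `git branch -r` (hypothetical sample).
git_branch_output = """\
  origin/HEAD -> origin/master
  origin/master
  origin/feature/login-rework
  origin/hotfix/payment-timeout
"""

branches = [
    line.strip()
    for line in git_branch_output.splitlines()
    if line.strip() and "->" not in line  # skip the symbolic HEAD entry
]

print(f"remote branches: {len(branches)}")  # 3
```

Recorded weekly, this number trending toward one long-lived branch is the direction trunk-based development points to.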
7. Production downtime during deployment
“Some applications such as batch processes may never require zero-downtime deploys,” writes Bowler, stipulating that, “Interactive applications like web apps absolutely do.” There is a cost to downtime for customer-facing or revenue-producing applications, and for those, the goal should be zero downtime during deployments. Bowler writes that “If you achieve zero-downtime deploys then stop measuring this one.”