What impact can metrics have on your team?
In the months I've spent at Haystack, I've been lucky enough to help a variety of engineering leaders prevent burnout in their teams, enable their product managers to test new ideas in production more quickly, and help their engineering teams ship more reliably.
Haystack's client base is certainly diverse; one of our clients uses software to fight cancer, and another distributes an online newspaper with a circulation of almost a million readers. Yet even among clients in the same space, it's clear that the challenges each company faces are different.
In this article, I want to discuss how you can systematically identify the bottlenecks of a developer team and find solutions to the challenges your team is actually facing, instead of relying on gut feel alone.
Engineering north star metrics
To improve, it’s important to measure. Measurements fundamentally provide an empirical assessment of your current performance, and by drilling in, you can identify areas of further improvement.
However, poor metrics can do more harm than good.
Optimizing against local metrics (like build time or test coverage) in isolation can lead you to over-optimize a single number while losing sight of the global picture. Instead, you need to view the entire system.
It is important, though, that these metrics remain under the sole control of the engineering team, measuring engineering performance rather than being tied to product indicators the engineering team itself might not control.
Using engineering metrics as a tool for micromanagement can be particularly harmful; indeed, some of our users have come to us after this became a concern: for example, product managers using Git commit metrics to micromanage teams, or engineering managers using metrics to compare engineers on their team against one another. The harms of micromanagement and the benefits of psychological safety have been studied empirically both by Google and in the 2019 State of DevOps Report (backed by roughly 31,000 data points from different organizations).
Proven north star metrics
As documented in both the book Accelerate and in the annual State of DevOps Reports, rigorous research by the DORA research program (led by Dr. Nicole Forsgren) has found that four key metrics are predictive of the effectiveness of technology teams.
- Cycle Time/Change Lead Time. Time to implement, test, and deliver code for a feature (measured from first commit to deployment).
- Mean Time To Recovery (MTTR). Mean time it takes to restore service after production failure.
- Deployment Frequency. Number of deployments in a given duration of time.
- Change Failure Rate (CFR). Percentage of deployments that caused a failure in production.
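As a rough illustration of how these four metrics follow from the definitions above, here is a minimal sketch that computes them from a list of deployment records. The `Deployment` structure and its field names are assumptions for illustration, not the schema of any particular tool:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean
from typing import Optional

@dataclass
class Deployment:
    first_commit_at: datetime              # first commit of the change
    deployed_at: datetime                  # when it reached production
    caused_failure: bool                   # did it break production?
    recovered_at: Optional[datetime] = None  # when service was restored

def dora_metrics(deploys: list, window_days: int) -> dict:
    failures = [d for d in deploys if d.caused_failure]
    return {
        # Cycle Time / Change Lead Time: first commit -> deployment (seconds)
        "cycle_time": mean(
            (d.deployed_at - d.first_commit_at).total_seconds()
            for d in deploys),
        # Deployment Frequency: deployments per day over the window
        "deploys_per_day": len(deploys) / window_days,
        # Change Failure Rate: share of deployments that caused a failure
        "change_failure_rate": len(failures) / len(deploys),
        # MTTR: mean time from failing deployment to recovery (seconds)
        "mttr": mean(
            (d.recovered_at - d.deployed_at).total_seconds()
            for d in failures) if failures else 0.0,
    }
```

In practice these values would be derived from your Git history, deployment pipeline, and incident tracker rather than entered by hand.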
High performers are twice as likely to achieve their commercial and non-commercial goals. Indeed, companies that do well on these metrics see 50% higher market-cap growth over three years.
Cycle Time is particularly important: it measures not only whether the engineering team can work iteratively, but also whether product managers can get experimental ideas into production quickly and react to customer feedback. Elite-performing teams typically see Cycle Times of less than one day.
At Haystack, we’ve seen that by providing visibility into north star metrics at a team level, organizations have seen over 70% improvement in Cycle Time on average.
It isn't enough just to understand where you stand from a north star perspective, though; it's important to diagnose where the bottlenecks actually are. Once you've identified the north star metric that needs improvement, you can look at the leading indicators that influence it.
For example, suppose you want to cut Cycle Time. You can divide this metric into Development Time (the time spent coding) and Review Time (the time the pull request spends in review). If you then identify that Review Time is the problem, you can subdivide it into First Response Time, Rework Time, and Idle Completion Time.
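To make the first level of that decomposition concrete, here is a minimal sketch that splits a pull request's Cycle Time into Development Time and Review Time from three timestamps. The timestamp names are assumptions for illustration; real tooling would pull these from your Git and code-review history:

```python
from datetime import datetime

def split_cycle_time(first_commit: datetime,
                     review_opened: datetime,
                     merged: datetime) -> dict:
    """Decompose Cycle Time: first commit -> review opened -> merged."""
    dev = (review_opened - first_commit).total_seconds()
    review = (merged - review_opened).total_seconds()
    return {
        "development_time": dev,     # time spent coding, in seconds
        "review_time": review,       # time the PR sat in review
        "cycle_time": dev + review,  # the two components sum to the whole
    }
```

Review Time could be subdivided the same way, using the timestamps of the first review comment and each round of rework.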
To dig down further, you can then look at other indicators and qualitative data sources. For example, if First Response Time is particularly slow, you can look at your CI pipeline, review the slowest pull requests, and talk to your team about it in more detail. In one such situation, I was able to identify that the problem was slow CI build times.
This then allows you to identify where the bottlenecks are and remove them for your team in a systematic way. Critically, you’re able to experiment with interventions to deliver greater Developer Experience, without purely relying on gut feel.
Monitor risks early
When experimenting with new approaches to deliver faster, it's important to be mindful of risks that may emerge on your team. For example, if your team is pushing more work than it can manage, it may be about to burn out. If pull requests are getting stuck in review, it's important to address them before they turn into statistics.
In most instances, the team should be empowered to self-resolve these issues in real-time without manager intervention, simply by using email or Slack alerts. In a small minority of cases, they may need to be resolved by the engineering manager (for example, controversial pull requests stuck in back-and-forth discussion).
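A real-time alert for stuck pull requests can be as simple as a periodic check like the following sketch. The `PullRequest` fields and the idle threshold are assumptions for illustration, and the delivery mechanism (a Slack or email notification, as described above) is left as a stub:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequest:
    title: str
    opened_at: datetime
    last_activity_at: datetime  # last review comment, push, etc.

def stuck_prs(prs, now, idle_threshold=timedelta(hours=24)):
    """Return PRs with no review activity for longer than the threshold."""
    return [pr for pr in prs if now - pr.last_activity_at > idle_threshold]

def alert(prs, now):
    # Stub: in practice, post to the team's Slack channel or send an email
    # so the team can self-resolve without manager intervention.
    for pr in stuck_prs(prs, now):
        print(f"PR stuck in review: {pr.title}")
```

Run on a schedule (say, hourly), this surfaces stalled reviews while they are still easy to unblock.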
Leaving these issues to fester until your next team retrospective can draw out problems unnecessarily and delay course corrections that could easily have been made. In the worst instances, burying such problems can lead to ineffective and psychologically unsafe teams.
Trust more, deliver faster
A common trap organizations fall into when they seek to become more agile is keeping their old waterfall processes and bolting on micromanagement. Surveillance tools will not address the real blockers, though.
To become truly agile at delivering software, you must continuously improve through learning and experimentation. The DevOps revolution has taught us that improving software delivery lets us experiment with product faster while delivering reliably, and also keeps our teams from burning out. Indeed, the 2018 State of DevOps Report found that companies that were elite performers against the four key metrics ‘are 1.8 times more likely to recommend their team as a great place to work’.
To be able to continuously improve, you first must be able to measure right.