April 8, 2021

Accelerating DevOps With DORA Metrics


Peter Drucker once said, “If you can’t measure it, you can’t improve it.” The same is true for DevOps. To efficiently and effectively deliver better software, teams need visibility and data to make the decisions that drive DevOps capabilities.

This blog post explores the DevOps Research and Assessment (DORA) survey findings and shares what you need to know about achieving Continuous Delivery and the DevOps philosophy of speed and stability.

What is DORA?

The DevOps Research and Assessment (DORA) team is Google’s research group, best known for its six-year program to measure and understand DevOps practices and capabilities across the IT industry. DORA’s research was presented in the annual State of DevOps Report from 2014 to 2019. The group also produced an ROI whitepaper providing insights into DevOps transformations.

From the six years of study data, DORA’s research identified four key metrics that indicate software development and delivery performance. The DORA team’s lead, Nicole Forsgren, co-authored a book called Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. This book shares the DORA team’s findings and explains the science and research behind the capabilities that development and delivery teams should invest in to drive higher software delivery performance.

What Are the Four Key Metrics?

Measurements of developer productivity and performance like lines of code, velocity, and utilization focus on individual or siloed team outputs. In the spirit of cross-functional delivery teams, tracking team outcomes rather than individual outputs allows organizations to achieve their goals with more focus and speed.

Through their book Accelerate, Nicole Forsgren, Jez Humble, and Gene Kim identified key characteristics for building high-performance technology organizations. These top-performing organizations focused on engineering outcomes over outputs and teams over individuals by tracking four measures: Lead Time, Deployment Frequency, Mean Time to Restore (MTTR), and Change Failure Rate.

Let’s define each of these terms and discuss practical methods for measuring these metrics.

Lead Time

Delivery Lead Time is the total time from the initiation of a feature request to the delivery of that feature to a customer. In lean manufacturing and value stream mapping, it’s common to capture the Lead Time for a process like deploying a service. Capturing the total time it takes from source code commit to production release helps indicate the tempo of software delivery.
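
As an illustration, here is a minimal sketch of computing Lead Time, assuming you can export commit and production-deploy timestamps from your version control and delivery tooling (the sample data below is hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs exported from VCS and CD tooling.
changes = [
    (datetime(2021, 4, 1, 9, 0), datetime(2021, 4, 1, 15, 30)),
    (datetime(2021, 4, 2, 10, 0), datetime(2021, 4, 3, 11, 0)),
    (datetime(2021, 4, 5, 8, 0), datetime(2021, 4, 5, 9, 45)),
]

# Lead Time per change: elapsed time from source code commit to production release.
lead_times_hours = [(deploy - commit).total_seconds() / 3600 for commit, deploy in changes]

print(f"Median lead time: {median(lead_times_hours):.1f} hours")
```

The median (rather than the mean) is one common way to report Lead Time, since a few unusually slow changes can skew an average.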

Deployment Frequency

Deployment Frequency is how often an organization deploys code for a service or application. Frequency indicates the tempo of software delivery. The theory behind Deployment Frequency also borrows from lean manufacturing, where it translates to controlling the batch size of the inventory being delivered. High-performing organizations do smaller and more frequent deployments.
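
A minimal sketch of this calculation, assuming you have a list of production deployment dates for a service over a given period (the dates below are hypothetical):

```python
from datetime import date

# Hypothetical production deployment dates for a single service over one month.
deployments = [
    date(2021, 4, 1), date(2021, 4, 2), date(2021, 4, 2),
    date(2021, 4, 6), date(2021, 4, 8), date(2021, 4, 9),
]

days_in_period = 30
per_week = len(deployments) / days_in_period * 7  # normalize to deployments per week
print(f"{len(deployments)} deployments in {days_in_period} days (~{per_week:.1f} per week)")
```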

Mean Time to Restore (MTTR)

Mean Time to Restore, or MTTR, refers to incident resolution: it is the average time it takes to restore the service when an incident occurs. Like Lead Time, MTTR is a measure of time.
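
A minimal sketch, assuming your incident tracker can export when each incident started and when service was restored (the incidents below are hypothetical):

```python
from datetime import datetime

# Hypothetical (incident_start, service_restored) pairs from an incident tracker.
incidents = [
    (datetime(2021, 4, 3, 14, 0), datetime(2021, 4, 3, 14, 40)),
    (datetime(2021, 4, 10, 2, 15), datetime(2021, 4, 10, 4, 0)),
]

# MTTR: average elapsed time from incident start to service restoration.
restore_minutes = [(restored - start).total_seconds() / 60 for start, restored in incidents]
mttr = sum(restore_minutes) / len(restore_minutes)
print(f"MTTR: {mttr:.0f} minutes across {len(incidents)} incidents")
```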

Change Failure Rate

The Change Failure Rate is the percentage of changes made to a service that result in remediation, incidents, rollbacks, or failed deployments. Change Failure Rate is a measure of quality. Based on the DORA team’s research, high-performing teams fall somewhere in the 0-15% range.
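
Since this is a simple percentage, the calculation is straightforward once you have labeled each deployment as needing remediation or not (the labels below are hypothetical):

```python
# Hypothetical deployment records: True if the change later required a remedy
# (rollback, hotfix, incident), False if it succeeded without intervention.
deployment_failed = [False, False, True, False, False, False, False, True, False, False]

failure_rate = sum(deployment_failed) / len(deployment_failed) * 100
print(f"Change Failure Rate: {failure_rate:.0f}%")  # 2 of 10 changes -> 20%
```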

How Do You Use Accelerate Metrics?

When tracking these metrics, it is important to consider time, context, and resources. Data analysis requires consistent measurement over time. Different levels of leadership can then interpret the results in context. Was there a lack of tooling or automation to aid in deploying, triaging incidents, and testing services? Were there changes in architecture, planning, or goals during this time? Similarly, tracking these metrics per service and across various teams can provide additional insights into what’s going well and what is not.
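
To illustrate the per-service view, here is a minimal sketch that groups hypothetical per-deployment records by service and summarizes two of the metrics for comparison (the field names and data are assumptions, not a specific tool’s export format):

```python
from collections import defaultdict

# Hypothetical per-deployment records tagged by service.
records = [
    {"service": "payments", "lead_time_hours": 6.5, "failed": False},
    {"service": "payments", "lead_time_hours": 30.0, "failed": True},
    {"service": "search", "lead_time_hours": 2.0, "failed": False},
    {"service": "search", "lead_time_hours": 3.5, "failed": False},
]

by_service = defaultdict(list)
for record in records:
    by_service[record["service"]].append(record)

for service, rows in by_service.items():
    avg_lead = sum(r["lead_time_hours"] for r in rows) / len(rows)
    cfr = sum(r["failed"] for r in rows) / len(rows) * 100
    print(f"{service}: avg lead time {avg_lead:.1f}h, change failure rate {cfr:.0f}%")
```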

These metrics are meant to encourage improvement, discussion, and delivery among everyone with a stake in the software service or application. Accelerate metrics can be used to compare the performance of different teams. Comparing teams helps organizations identify which teams are performing well and which could use improvement, and that information can be used to surface best practices to share across teams.

Understanding Speed vs. Stability

It’s also worth considering when the speed or frequency of delivery comes at a cost to stability. For example, a higher Change Failure Rate may indicate poor quality control in a CI/CD pipeline (learn more about Continuous Integration, Continuous Delivery, and Continuous Deployment in our “What Is a CI/CD Platform and Why Should I Care?” article).

This could be especially true if deployments were daily or weekly. If deployments were infrequent but the Change Failure Rate was still high, this could indicate that the deployments were not well planned and contained overly large feature changes.
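
As a rough illustration of reading the two metrics together, here is a sketch of that reasoning in code. The 0-15% band comes from the DORA research cited above; the “five or more deploys per week” cutoff is an assumption chosen only for the example, not a DORA threshold:

```python
def interpret(change_failure_rate_pct: float, deploys_per_week: float) -> str:
    """Illustrative (not DORA-prescribed) read of failure rate alongside frequency."""
    if change_failure_rate_pct <= 15:
        return "Failure rate is in the range DORA associates with high performers."
    if deploys_per_week >= 5:  # assumed cutoff for "frequent" deployments
        return "Frequent deployments but a high failure rate: check quality gates in the CI/CD pipeline."
    return "Infrequent, failure-prone deployments: changes may be too large or under-planned."

print(interpret(change_failure_rate_pct=25, deploys_per_week=0.5))
```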

Context, timing, and resources matter in these conversations. But these four key metrics influence one another and often help unravel stories and insights that would otherwise be harder to understand. Looking at the duality of speed and stability is one method for analyzing your DevOps performance.

Beginning the Journey of Continuous Insights

Now that we understand the four key metrics shared by the DORA team, we can begin leveraging these metrics to gain deployment insights. Harness’ Continuous Insights allows teams to quickly and easily build custom dashboards that encourage continuous improvement and shared responsibility for the delivery and quality of your software.

Continuous Insights provides real-time delivery analytics, automatically giving DevOps and team leads insight across all applications, environments, versions, and deployments within the Harness platform. There are many ways to track these metrics. For example, the open-source project Four Keys gathers and displays your DevOps performance data from your GitHub or GitLab repos.

Improve Your DevOps Toolkit With the Harness Platform

This blog post shared how to measure and track DORA metrics. Like many other forms of manufacturing and production, software delivery can be tracked, understood, and improved. If you’d like to leverage these four key metrics for continuous insights for free, try Harness today.
