10 Signs You Don’t Do Continuous Delivery

Most organizations have DevOps or CI/CD initiatives, yet still struggle with Continuous Delivery.

By Steve Burton
April 18, 2018

1. “Agile” and “Release” Are Used in Meetings

Continuous Delivery doesn’t mean “release more than once a year or quarter”; it means deployments are continuous, never-ending, ongoing, nonstop – and so on.

Agile was something dev teams aspired to achieve 10 years ago when weekly or monthly release cycles were considered valuable.

Today, daily production deployments have become the norm. All the planning, meetings, approvals, tickets, politics, and general BS associated with managing “releases” will actually slow you down or kill your business in today’s world. In the year 2018, you don’t want to be riding a horse when all of your competitors are driving cars.

Once code is committed, it’s down to your deployment pipelines to tell you if it’s ready for production or not.

2. You Don’t Commit to Trunk/Master

Branch, commit, merge, and resolve conflicts – more tasks that slow you down. The whole point of Continuous Delivery is to deliver and deploy independent software components fast and frequently using dev teams that work in parallel.

If a build, test, or deployment pipeline fails, you should stop, understand what happened, fix it, and learn from it. This is what happens in production, so why not do it during development and test? Committing to Trunk/Master drives the right team behavior and the urgency that Continuous Delivery requires.

3. Fixing Builds/Deployments Takes 30+ Minutes

The whole point of a deployment pipeline is to kill your build before it kills customers in production. When a deployment pipeline or test fails, it should be treated as a “stop-the-world” event where everyone stops, focuses, and fixes the build so things can progress.

If your production canary or deployment fails, you should be able to roll back instantly or roll forward with a rapid fix.
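To make that concrete, here is a minimal sketch of what an automated canary gate with rollback could look like. Everything in it – the health endpoint, the deploy() helper, the version strings, the soak timings – is a hypothetical placeholder, not a real Harness API; your own pipeline tooling will have its own primitives for these steps.

```python
import sys
import time
import urllib.request


def health_check(url: str) -> bool:
    """Placeholder health probe: succeed only on an HTTP 200 response."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False


def deploy(version: str) -> None:
    # Placeholder for your real deployment step (kubectl, Helm, scripts, ...).
    print(f"deploying {version}")


def canary_deploy(new_version: str, last_good_version: str, canary_url: str) -> bool:
    """Deploy the new version to the canary, watch it, and roll back on failure."""
    deploy(new_version)
    for _ in range(3):                     # short soak: three checks, 30s apart
        time.sleep(30)
        if not health_check(canary_url):
            print("canary failed its health check -- rolling back")
            deploy(last_good_version)      # instant rollback to the last good version
            return False
    print("canary looks healthy -- safe to promote to the full fleet")
    return True


if __name__ == "__main__":
    ok = canary_deploy("v1.2.4", "v1.2.3", "https://canary.example.com/healthz")
    sys.exit(0 if ok else 1)
```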

I used to be skeptical of this myself, but it actually works. I’ve reported bugs several times at Harness over the past year and they’ve literally been fixed in minutes. Once, I reported a bug in our demo environment at 8:45am, and one engineer fixed it minutes later on her Caltrain journey into work. Shortly afterward, another developer pushed the fix while riding BART, using his smartphone. What’s really scary is that I didn’t just make this story up…this actually happened.

4. Deployment Pipelines Take Hours To Complete

Let’s imagine your new build or artifact is perfect. How long does it take for that artifact to be promoted through all your deployment pipeline stages (dev, QA, staging) and into production? One hour? Two hours? Six hours? Longer?

Feedback loops for deployment pipelines (and stages) need to happen in minutes, not hours. This may require more test/tool/feedback automation so you can eliminate manual tasks and approvals throughout your deployment pipelines. For example, if one pipeline stage succeeds, it should automatically move on to the next until a stage fails.
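As a rough illustration, auto-promotion can be as simple as the sketch below: run each stage as soon as the previous one succeeds, and stop at the first failure. The stage list and the deploy_and_verify.sh script are hypothetical stand-ins for whatever actually deploys and tests your artifact in each environment.

```python
import subprocess
import sys

STAGES = ["dev", "qa", "staging", "production"]


def run_stage(stage: str, artifact: str) -> bool:
    """Placeholder: run whatever deploys and verifies the artifact in this stage."""
    result = subprocess.run(["./deploy_and_verify.sh", stage, artifact])
    return result.returncode == 0


def promote(artifact: str) -> bool:
    for stage in STAGES:
        print(f"promoting {artifact} to {stage} ...")
        if not run_stage(stage, artifact):
            # Fail fast: stop the pipeline here and fix the problem before retrying.
            print(f"stage '{stage}' failed -- stopping the pipeline")
            return False
    print(f"{artifact} reached production")
    return True


if __name__ == "__main__":
    sys.exit(0 if promote(sys.argv[1]) else 1)
```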

5. Your Deployment Pipelines Rarely Fail

If 95% of your deployments succeed in dev/QA/staging, something is very wrong. Your deployment pipelines either lack basic test coverage or they lack integration with your existing toolsets. If you’ve already invested $$$ in tools, why aren’t you integrating and leveraging them to gain insight into your pipelines?

For example, many of our customers leverage APM (AppDynamics, New Relic, Dynatrace, …) and Log Analytics (Splunk, ELK, Sumo Logic, …) to help identify performance or quality regressions in deployments. We call this Continuous Verification at Harness.
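The core of that verification idea can be sketched in a few lines: compare an error-rate metric from your APM or log tool in a window before and after the deployment, and fail the pipeline if it regresses beyond a tolerance. The query_error_rate() function below is a placeholder for whatever API your tooling actually exposes, and the window and tolerance values are illustrative; a real implementation would use a smarter baseline than a single before/after comparison.

```python
from datetime import datetime, timedelta


def query_error_rate(service: str, start: datetime, end: datetime) -> float:
    """Placeholder: return errors per minute for the service over the time window."""
    return 0.0  # replace with a call to your APM or log analytics API


def verify_deployment(service: str, deployed_at: datetime,
                      window_minutes: int = 30, tolerance: float = 1.2) -> bool:
    """Pass the deployment only if the error rate did not regress past the tolerance."""
    window = timedelta(minutes=window_minutes)
    baseline = query_error_rate(service, deployed_at - window, deployed_at)
    current = query_error_rate(service, deployed_at, deployed_at + window)
    if current > baseline * tolerance:
        print(f"{service}: error rate regressed ({baseline:.2f} -> {current:.2f} errors/min)")
        return False
    return True
```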

At Harness, over the past 30 days, we did 878 deployments (~30/day), and our deployment pipeline success rate was 76% (see below). Roughly 1 in 4 deployments results in a failure being detected without customers being impacted. This is a good thing.

[Figure: Harness deployment pipeline failures over the past 30 days]

Test automation, an integrated toolchain, and the ability to verify deployments are key components of Continuous Delivery.

6. It Takes a Village to Deploy & Debug Deployments

It’s entirely possible you’ve automated your deployment process. By automated, I mean you’ve stitched together tens of deployment scripts written by 20 different people who, at some stage, tweaked and refactored a few hundred lines here and there.

So, when something breaks, it literally takes a village to debug and troubleshoot a deployment. We had one customer who described their previous deployment process as a “village exercise” that took 15 people 6 hours to debug.

Next time you deploy, count the number of people involved and multiply that number by how long the deployment process lasts. In the example above, that’s 15 people × 6 hours = 90 person-hours burned on a single deployment. Village deployments are expensive.

7. You Have a Dedicated Deployment/Release Team

If you have multiple devs or product teams, the last thing you need is another team that bottlenecks all your deployments. Tickets, change control, approvals, handoff, documentation, and so on are more activities that slow down your deployment timelines.

If a deployment/release team does multiple deployments a day and the first deployment goes to s***, then they’ll probably spend the next 6 hours debugging instead of deploying, bottlenecking every other deployment pipeline behind them. This is bad.

8. Developers Don’t Deploy/Debug their Own Code

Most well-oiled CI/CD organizations let DevOps teams build deployment pipelines, and let their developers deploy using these pipelines. Let me say that again: DevOps manages the tooling/automation/framework and developers use this platform-as-a-service so they can deploy their own code. We call this CD-as-a-Service at Harness.

When you let developers deploy their own code, something natural happens – they end up debugging their own code should something fail. This is a very good thing.

I hear too often from customers that deployment/release teams drag DevOps or SRE teams into firefights when deployments go south. If code fails in production, then you need the right set of eyes, context, and knowledge to rapidly troubleshoot. You also need the ability to automatically roll back if you can’t roll forward.

9. You Don’t Know The Business Impact of Deployments

The whole reason for doing CI/CD isn’t so you can spend a tonne of money on people, technology, and tools. You do it to grow your business and make it more competitive.

Organizations do CI/CD so their applications and services deliver a better experience than their competitors do. End of story.

Let’s imagine you spent $1m/month on a new service for your business and your team did daily production deployments. Do you know what positive impact each of those deployments had on your business? More importantly, do you know whether any deployments had a negative impact? If so, by how much? As Winston Churchill said, “No matter the strategy, it’s always good to occasionally look at the results.”
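If you’re not measuring this today, even a crude before/after comparison of one business KPI around each deployment is a start. The sketch below assumes a hypothetical get_orders_per_hour() query against your analytics store and illustrative timestamps; it ignores confounding factors, but it at least puts a number next to every deployment.

```python
from datetime import datetime, timedelta


def get_orders_per_hour(start: datetime, end: datetime) -> float:
    """Placeholder: pull the business KPI from your analytics store for the window."""
    return 0.0  # replace with a real query


def deployment_impact(deployed_at: datetime, hours: int = 4) -> float:
    """Delta in the KPI over equal windows before and after the deployment."""
    window = timedelta(hours=hours)
    before = get_orders_per_hour(deployed_at - window, deployed_at)
    after = get_orders_per_hour(deployed_at, deployed_at + window)
    return after - before  # positive: the deployment helped; negative: it hurt


# Example: score two recent deployments (timestamps are illustrative).
for deployed_at in [datetime(2018, 4, 16, 9, 30), datetime(2018, 4, 17, 14, 0)]:
    print(deployed_at, deployment_impact(deployed_at))
```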

10. You Think DevOps Is Continuous Delivery

It’s not. DevOps is a culture or mindset, whereas Continuous Delivery is a practice or set of principles that teams follow to deliver software safely, quickly, and in a sustainable manner.

A DevOps culture makes Continuous Delivery principles easier to implement, but it’s not going to magically re-architect your application so more components can be deployed more frequently in the cloud.

The vast majority of Harness customers are migrating from “vintage” monolithic applications to cloud-native microservices. CI/CD is a core initiative that is helping them make that transition.

DevOps is a popular initiative, but it is more about people, culture, transparency, and collaboration across teams. It’s not about tools or technology.

What other signs or mistakes do you see other teams and organizations making?

Cheers,
Steve.

@BurtonSays
