Product | Cloud costs | Released May 22, 2018 | 3 min read

Deployment Scripting is not Automation


Over the past year, I've talked about Continuous Delivery with hundreds of teams around the world. I've also observed 50+ different deployment processes when customers have been kind enough to show me their current methodology and tooling.

Now comes my controversial bombshell: deployment scripting isn't automation.
Let's first remind ourselves of what Automation actually means:

Automation can be defined as the technology by which a process or procedure is performed without human assistance.[Wikipedia]

And remember, Wikipedia is NEVER wrong.

The Golden Deployment Days

Deployment scripting generally works with simple, small, and static applications.

Back in 2005, it was a relatively trivial exercise to model, map, and manage all the different components and dependencies of your apps/services using shell scripts. You could hardcode those hostnames, environment variables, and package dependencies, and life was generally good... ./startWeblogic.sh.
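For illustration, a 2005-era deploy script might have looked something like this. Every name and path here is hypothetical; the point is that everything is hardcoded, which was fine when the app and its environment never changed:

```shell
#!/bin/sh
# Hypothetical 2005-era deploy script: every value is hardcoded,
# which worked when the app and its environment were static.
APP_HOST="web01.prod.internal"        # single, static hostname
JAVA_HOME="/opt/jdk1.5.0"             # fixed JDK install path
EAR_FILE="/builds/myapp-1.0.ear"      # fixed artifact location

echo "Deploying $EAR_FILE to $APP_HOST"
# In a real script these lines would copy the artifact and restart WebLogic:
# scp "$EAR_FILE" "$APP_HOST:/opt/weblogic/deployments/"
# ssh "$APP_HOST" "/opt/weblogic/bin/startWebLogic.sh"
```

The moment a hostname changes, a new service appears, or infrastructure becomes immutable, every one of those hardcoded lines becomes a maintenance liability.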

In 2018, things are a little different. Cloud-native apps are not static, small, and simple. You have multiple technology stacks, hundreds of service dependencies, and immutable infrastructure. Change is constant, which means your deployment scripts require more attention than a 2-year-old.

Bottom line: There is nothing automated about writing or maintaining deployment scripts in 2018.

A Few Real-Life Examples

Below are 3 simple examples of what life was like for customers who lived (and died) by deployment scripts:

Customer #1 - 23 deployment scripts maintained by 15 different engineers over many years. Some engineers were no longer at the company, and some scripts contained zero comments or docs. A typical deployment required 10 people split across DevOps, deployment, dev team leads, and engineers. When deployments failed (and they frequently did), it would take 2 hours to roll back and 2-3 days to debug and understand what happened. As a result, deployments happened once every six weeks.

Customer #2 - A DevOps team of 4 dedicated engineers maintained deployment scripts and plugins on top of their existing CI platform (Jenkins). Development teams scripted their own deployment pipelines to orchestrate the library of available deployment scripts/plugins. When one deployment script or plugin failed, it had an impact across all dev team pipelines. Health checks and rollbacks were manual and took 30 minutes to 1 hour, and up to 5 engineers, to troubleshoot.

Customer #3 - A centralized deployment team maintained 10+ deployment scripts, which were executed serially, by hand. Many of them required manual inputs like the major and minor build versions of various service artifacts, environment parameters, and flags. 3-4 engineers watched every deployment by tailing logs and context switching between production consoles. Health checks were often "gut feel," and rollback was a last resort that took several hours to complete and validate.

The three examples above don't exactly paint a picture of automation -- more like chaos and carnage.
It's not just the deployment process itself; it's the constant tweaking, tinkering, and maintenance of the underlying deployment scripts. The majority of these scripts are not dynamic or change-tolerant to what is happening within the application code or infrastructure in 2018.

Deployment Scripting != Continuous Delivery

It's true you can build anything with shell scripts, in the same way you can build anything with wood, nails, and a hammer. But if the goal of your business is to compete and reduce its time-to-market, then deployment scripting may not be the answer.

A few rules of Continuous Delivery:

  • Anyone can deploy
  • Fast & frequent deployments
  • Consistent, repeatable & safe deployments
  • Develop New Innovation vs. Maintain Old Innovation

With deployment scripting:

  • Not everyone can deploy. If your scripts fail, then you need the author(s) to debug what happened.
  • Deployments are orchestrated manually and have manual inputs, which increases deployment time and reduces frequency.
  • Your deployments are often ad-hoc across apps/services and unpredictable as those entities evolve and change, which leads to failure.
  • You spend almost as much time updating deployment scripts as your apps/services themselves.

I got off a call last week with a major cloud storage vendor that had 4 full-time engineers dedicated to deployment scripting. They now have an active project to automate this team's work entirely and fully embrace Continuous Delivery.

Fortunately, today's APIs across clouds, technology stacks, and tooling are relatively mature. It's possible to parameterize the inputs (e.g. artifacts) of your deployment pipelines and make the actual deployment, verification, and rollback tasks fully automated and dynamic -- dynamic meaning the deployment adapts to the underlying changes of your apps and infrastructure.
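As a rough sketch of what that parameterization can look like (the function names, defaults, and structure here are hypothetical, not any vendor's actual API), a deploy step takes its artifact and target environment as inputs and wires verification and rollback into the flow itself:

```shell
#!/bin/sh
# Hypothetical parameterized deploy step: the artifact and target
# environment are pipeline inputs rather than hardcoded values, and
# health checks and rollback run automatically instead of by hand.
ARTIFACT="${ARTIFACT:-myapp-2.3.1.jar}"   # supplied by the pipeline
TARGET_ENV="${TARGET_ENV:-staging}"       # supplied by the pipeline

deploy_service() { echo "deploying $ARTIFACT to $TARGET_ENV"; }
verify_health()  { echo "verifying $TARGET_ENV"; }   # would poll a health endpoint
rollback()       { echo "rolling back $TARGET_ENV"; }

deploy_service
if verify_health; then
    echo "deploy succeeded"
else
    rollback    # automated, not a multi-hour manual last resort
    exit 1
fi
```

The same step handles any artifact in any environment, so there is no per-service script to hand-edit when versions or hosts change.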

At Harness we call this "Smart Automation" -- you can sign up for your free trial.

Cheers,
Steve.
@BurtonSays
