In the DevOps world, there seems to be no shortage of “Continuous” terms. Like saying Kleenex for tissue or Coke for soda, “CI/CD” has become a catch-all term for the entire software delivery pipeline. However, each part of CI/CD (Continuous Integration and Continuous Delivery) represents a distinct goal and discipline. To further muddy the waters, there are both Continuous Delivery and Continuous Deployment, which again have two different goals.
Quarterly releases, once the norm, now seem glacial; with the advent of Agile development, the goal is to get incremental features out as fast as possible. Infrastructure and practices need to be in place to allow changes to propagate quickly. As with any technology, don’t let semantics get in the way of your goals, but be aware that the tooling and practices for each pillar are geared differently. No matter which Continuous journey you are on, automation is key to fulfilling the goals of each Continuous pillar while lessening human intervention.
What is the Difference Between Continuous Integration, Continuous Deployment, and Continuous Delivery?
The entire goal of the software delivery pipeline is to get your ideas into production. As simple as that sounds, there are certainly lots of steps involved. For a software engineer, your external customers don’t care how you delivered a feature once they have access to it. For a DevOps engineer, by contrast, your internal customers (the development teams) do care how something is delivered, because that process directly impacts them and their feature delivery.
Starting with a code change or new code, the journey to production can wind through many different environments and confidence-building exercises before being signed off into production. The only constant in technology is change, so as soon as one release lands, the entire process starts again with the next new feature.
What is Continuous Integration?
Continuous Integration is the automation of builds. Depending on your source control/version control strategy, code changes for a bug fix or new feature need to be merged/committed into a branch in the source code repository. No matter which side of the mono-repo vs. multi-repo argument you are on, a build, and eventually release artifacts, will be created as part of the Continuous Integration process.
Why it’s Important
Rarely do you work alone as a software engineer. Integrating your features/bug fixes into the application is a common task, and for the newly minted software engineer whose only experience is in group projects, it can take a little getting used to. The ability to merge ideas quickly is the big allure of Continuous Integration. With modern systems, the build and packaging steps can be separate: in Java development, for example, the build produces a JAR file, and that JAR file is then packaged into a Docker image for deployment.
Continuous Integration solves for three pillars: builds that are repeatable, consistent, and available. In software, we strive for repeatable practices; an externalized build allows for this, which in turn bubbles up into consistency. Modern Continuous Integration platforms also allow builds to scale (having builds available when needed versus pegging your local machine).
With Continuous Integration, keeping the automated builds fast is key. Because this process runs multiple times a day, triggered by each commit or merge, time spent waiting for results can snowball. A common challenge with Continuous Integration is overstepping into the other Continuous pillars, for example overburdening the Continuous Integration platform with Continuous Delivery responsibilities.
Confidence in the build and deployable packaging is different than confidence in the deployment and subsequent release. Being strategic about where to apply parts of your test suite avoids overburdening the Continuous Integration process. A good line in the sand: Continuous Integration tackles artifact-centric confidence-building (for example, unit tests and artifact scans), while tests that take into account other artifacts, dependencies, and/or application infrastructure are better suited to the Continuous Delivery process. After the build is checked into a central repository, the next step toward getting your idea into production is the promotion/deployment process.
What is Continuous Deployment?
Continuous Deployment focuses, just like the name implies, on the deployment: the actual installation and distribution of the bits. During a deployment, the application binary/packaging traverses the topology of wherever the application or application infrastructure needs to serve traffic. In the traditional sense, Continuous Deployment focuses on the automation to deploy across environments or clusters. As you traverse environments from non-prod to staging and eventually to production, the number of endpoints you deploy to increases. Continuous Deployment focuses on the path of least resistance to get the software into the needed environment(s).
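As a rough sketch of that fan-out, assuming hypothetical environment names and endpoint counts, a Continuous Deployment step pushes the same artifact to progressively larger topologies:

```python
# Hedged sketch: fanning one artifact out across environments.
# Environment names, endpoint counts, and the artifact tag are
# illustrative assumptions, not a prescribed setup.

ENVIRONMENTS = [
    ("dev", ["dev-1"]),                                        # a single endpoint
    ("staging", ["stg-1", "stg-2"]),                           # a few endpoints
    ("production", ["prod-1", "prod-2", "prod-3", "prod-4"]),  # the most endpoints
]

def deploy(artifact: str) -> list[str]:
    """Install the same artifact on every endpoint, environment by environment."""
    log = []
    for env, endpoints in ENVIRONMENTS:
        for endpoint in endpoints:
            log.append(f"installed {artifact} on {env}/{endpoint}")
    return log

log = deploy("app:1.4.2")
print(len(log))  # 7 endpoints touched in total
```

The point of the sketch is the shape, not the tooling: the same immutable artifact moves outward, and each environment multiplies the number of endpoints the deployment automation must reach.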
With modern platforms such as Kubernetes, the separation of environments might not be physical the way it was on legacy or traditional machine-based platforms. A namespace (a software-level separation) might be all that separates development from production, though good distributed-systems principles still apply regardless of which platform you choose. With distributed systems, the topologies that changes need to propagate to can be large, even in pre-production environments.
Why it’s Important
The days of SCPing into boxes and dropping off binary distributions have all but faded away. As Continuous Integration provides a deployable artifact, Continuous Deployment can take that artifact forward into additional environments. Low-hanging fruit might be to deploy each new artifact, as soon as it is created, into a dev-integration and/or quality assurance environment to start integration testing. Especially with modern paradigms such as immutable applications and infrastructure, where any change triggers a rebuild, the number of deployments will increase significantly.
Deployments encompass two pairs of actions: installation/activation and uninstallation/deactivation. From a pure deployment standpoint, the rolling deployment is the de facto standard. A rolling deployment replaces old application nodes at an incremental interval, typically one by one, until all nodes run the new version. The application instance/node being upgraded is taken out of the load balancer pool; when the installation is complete, it is reconstituted back into the pool.
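The rolling pattern described above can be sketched in a few lines of Python. The load-balancer pool is simulated as an in-memory set, and the node names, version strings, and in-place "install" are illustrative assumptions:

```python
# Hedged sketch of a rolling deployment: replace nodes one by one,
# draining each from the load-balancer pool during its upgrade.

def rolling_deploy(nodes: dict[str, str], new_version: str) -> list[str]:
    """Upgrade each node in turn; `nodes` maps node name -> running version."""
    pool = set(nodes)  # nodes currently receiving traffic
    events = []
    for node in sorted(nodes):
        pool.remove(node)          # take the node out of the load balancer pool
        nodes[node] = new_version  # install/activate the new version
        pool.add(node)             # reconstitute the node into the pool
        events.append(f"{node} -> {new_version}")
    return events

fleet = {"node-a": "1.0", "node-b": "1.0", "node-c": "1.0"}
events = rolling_deploy(fleet, "1.1")
print(events)               # one upgrade event per node, in order
print(set(fleet.values()))  # the whole fleet now runs the new version
```

At every step of the loop, all but one node stays in the pool serving traffic, which is what lets a rolling deployment avoid downtime.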
Having a clear map of the topology, especially if the infrastructure is elastic or on-demand, is key to understanding where your artifacts are going. As with Continuous Integration, keeping the deployment fast is a good goal to have. The appearance of speed can also be gained by running certain tasks in parallel (e.g., spinning up the infrastructure the artifacts will be deployed onto).
Understand that Continuous Deployment and Continuous Delivery have slightly different goals. Continuous Delivery overlaps with Continuous Deployment on the deployment front, but be thoughtful not to overburden deployment systems with the confidence-building and safety steps that are crucial to Continuous Delivery.
What is Continuous Delivery?
Technology is fallible because humans make technology. Confidence-building steps are crucial for any engineering team making changes. Continuous Delivery is the automation of steps to safely get changes into production. Where Continuous Deployment focuses on the actual deployment, Continuous Delivery focuses on the release and release strategy. An elusive goal is the “push of a button” that gets changes into production; that push of a button is Continuous Delivery.
Why it’s Important
The path to a release can be a snowflake. Because software is the culmination of choices from several teams, the route through all the steps and modifications needed to propagate and support changes or new features can be a winding one. Continuous Delivery brings together all of the testing steps, incremental/environmental deployments, and validation steps needed to safely get changes into production.
Simply put, good pipelines are fast, safe, and repeatable. With Continuous Delivery, though, no one size fits all; having a pragmatic Continuous Delivery approach is crucial for adoption. In any enterprise, there is likely a wide swath of technology choices and platforms that predate the Continuous Delivery journey.
Recently, Harness took a census of the patterns organizations are using in our Pipeline Patterns eBook. The structure of a pipeline tends to follow what drives it: environments, tests, services, outcomes, or people. The form of the pipeline follows its function.
The one anti-pattern to watch out for is a pipeline driven by people. Without automation, processes are people-centric. As the automation gap is bridged, manual steps in a Continuous Delivery pipeline fade away. The benefit of capturing all the steps in a Continuous Delivery pipeline is that you can easily identify where bottlenecks exist and start to invest in automating them.
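Once every step is captured in the pipeline, finding the bottleneck is mechanical. A minimal sketch, with step names and durations (in minutes) as made-up assumptions:

```python
# Hedged sketch: identify the slowest step in a captured pipeline.
# Step names and durations (minutes) are illustrative assumptions.

pipeline_steps = {
    "build": 4,
    "unit tests": 6,
    "deploy to QA": 3,
    "manual sign-off": 120,  # a people-driven step: a prime automation target
    "deploy to prod": 5,
}

# The step with the longest duration dominates the pipeline's lead time.
bottleneck = max(pipeline_steps, key=pipeline_steps.get)
print(bottleneck)
```

In this toy data, the people-driven step dwarfs every automated one, which is exactly the signal that tells you where to invest in automation first.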
One of the hardest items to automate is the validation/promotion of deployments. When running test suites, the results are binary: a test either passes or fails, and a failure means there is no need to progress the deployment further. Validating whether a deployed application is successful, however, brings in a host of concerns and areas to check. Especially when implementing a canary release strategy with several incremental deployments, this validation recurs, and should continue even post-deployment. Correlating data and key metrics from different monitoring, observability, and performance management solutions can be tedious, especially when trying to determine a baseline. The Harness Platform takes the guesswork out of that scenario.
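At its simplest, that validation compares a key metric from the canary against the baseline. A hedged sketch follows; the choice of metric, the tolerance, and the sample values are all assumptions (real verification correlates many metric series at once):

```python
# Hedged sketch: judge a canary by comparing its error rate to the baseline.
# The tolerance and sample values are illustrative assumptions.

def canary_ok(baseline_error_rate: float, canary_error_rate: float,
              tolerance: float = 0.01) -> bool:
    """Pass if the canary's error rate stays within `tolerance` of baseline."""
    return canary_error_rate <= baseline_error_rate + tolerance

print(canary_ok(0.02, 0.025))  # True: within tolerance, keep rolling out
print(canary_ok(0.02, 0.08))   # False: regressed, roll the canary back
```

The hard part in practice is not the comparison but establishing a trustworthy baseline across many metrics, which is why this is such a strong candidate for platform automation.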
How They Work Together in Software Delivery
There is a reason “CI/CD” is so often said in one breath. Having a pipeline of artifacts deployed in a safe manner is software delivery in a nutshell. If an organization had only a single artifact created once every few months, just a few nodes to deploy to, and could accept downtime during maintenance, there would be little need for this sort of investment. Unfortunately, that scenario is long gone.
Your Continuous Integration process will keep a ready supply of deployable artifacts. Your Continuous Delivery process will orchestrate all of the deployment and confidence-building/release activities needed, potentially calling your Continuous Deployment infrastructure as the artifact or new change(s) traverse environments/clusters. Being able to deploy a release quickly and safely allows organizations to become more agile with modern software delivery capabilities. With the paradigm shifting quickly, questions are sure to arise.
FAQ by the DevOps Community
Here are a few questions we found in different DevOps communities, which can help describe the similarities and differences further.
What Differentiates Deployment and Release in the Continuous Delivery Pipeline?
A deployment is the act of installing/activating the software/binaries; during a deployment, if an existing version is present, the previous version is uninstalled/deactivated. A release is the culmination of all the activities to get changes safely into production, of which deployment is one component. Different release strategies, such as a canary release, take advantage of incremental deployment strategies. Releases are usually signed off on, and a record is created of all the events that led up to the change or new version of the application (the new stable version).
What is the Checklist for Deployment Pipeline in Continuous Delivery?
Without getting into the semantics of the difference between a deployment and a software release, a checklist for a successful deployment validates that the features match expectations. Painting with a broad brush, the technical expectation is not to violate an SLA/SLI/SLO. From a functional perspective, success means the feature sees adoption with no drop-off in usage. Validation steps in the checklist can make sure that QA team goals (such as code/test coverage) are met and that changes are vetted during, for example, a soak test. Did the changes meet expectations, meet quality standards, and remain available to meet scaling requirements? Harness helps automate the entire checklist.
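A checklist like the one above can be encoded as automated gates that must all pass before sign-off. A minimal sketch, where the metric names and thresholds are illustrative assumptions rather than recommended values:

```python
# Hedged sketch: encode a post-deployment checklist as boolean gates.
# Metric names and thresholds are illustrative assumptions.

def checklist_passes(metrics: dict[str, float]) -> bool:
    """Every gate must hold for the deployment to be signed off."""
    gates = [
        metrics["error_rate"] <= 0.01,     # SLO: stay under 1% errors
        metrics["p99_latency_ms"] <= 500,  # SLA: p99 latency under 500 ms
        metrics["test_coverage"] >= 0.80,  # QA goal: 80% coverage met
    ]
    return all(gates)

healthy = {"error_rate": 0.005, "p99_latency_ms": 320, "test_coverage": 0.85}
print(checklist_passes(healthy))  # True: all gates hold, safe to sign off
```

Expressing the checklist as code is what turns a manual sign-off into something a pipeline can evaluate on every deployment.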
Automate the Build and Testing Process with Harness
The Harness Platform gives you an entire end-to-end CI/CD pipeline so your ideas can truly reach production in a safe manner. Whether you are looking to deploy to several disparate pieces of infrastructure or to orchestrate multiple confidence-building steps and testing automation, Harness can help your dev team achieve a push-button release.
If you have not already, feel free to sign up for the Harness Platform today.