Unfortunately, computers don’t understand words the way humans do. To get your ideas (i.e., source code) into the hands of the public, some sort of build or packaging needs to occur. During the build, source code is converted or compiled into a machine-usable format. Usually, the programming language has build-specific tools that aid in compiling and packaging the software.

In a local development environment, an engineer might produce a local build, possibly a subset of the total deployable unit, several dozen times before a feature is ready for quality assurance. Software, though, is not written in a vacuum. Eventually, the individual engineer’s work needs to be integrated with the rest of the team’s work. Thus, the dawn of Continuous Integration.

Continuous Integration is the practice of running automated builds, triggered by an event such as a code check-in or merge, or run on a regular schedule. The end goal of a build is to be deployed somewhere, and the main goal of Continuous Integration is to build and publish that deployable unit. Beyond Continuous Integration, other disciplines, such as Continuous Deployment and Continuous Delivery, focus on getting those changes safely into production.


What is the Difference Between Continuous Integration, Continuous Deployment, and Continuous Delivery?

When talking about your development pipeline, a common shorthand is to say “CI/CD Pipeline.” Decomposing CI/CD, there are multiple disciplines involved. The “continuous” portion of Continuous Integration, Continuous Deployment, and Continuous Delivery binds the terms together. That continuous aspect focuses on being ready as soon as the change is ready: basically, being on demand.

Continuous Integration

Simply put, Continuous Integration is build automation, though more than just compiled source code goes into a build. The end product of Continuous Integration is a release candidate: the final form of an artifact to be deployed. Quality steps may be taken to produce the artifact, such as finding bugs and identifying their fixes. Packaging, distribution, and configuration all go into a release candidate.

For example, a Java application is built into a JAR, which is then packaged into a Docker image along with all of the environmental configuration the image needs to run. Engineers use tools such as Jenkins, CircleCI, Travis CI, Bamboo, GitLab CI, and Harness – to name a few – to create their CI pipelines. You can find a comprehensive list of tools on our DevOps Tools comparison pages. Simply filter by CI tools on the left.
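As a rough sketch, a release candidate can be modeled as the artifact plus everything needed to run it. The class, fields, and values below are purely illustrative, not any particular tool's data model:

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseCandidate:
    """Everything needed to deploy: artifact, packaging, and configuration."""
    artifact: str                 # e.g. the compiled JAR (hypothetical path)
    image: str                    # the Docker image that packages the artifact
    config: dict = field(default_factory=dict)  # environment configuration

    def is_deployable(self) -> bool:
        # A candidate is only deployable once packaging and config travel with it.
        return bool(self.artifact and self.image and self.config)

rc = ReleaseCandidate(
    artifact="target/orders-service-1.4.2.jar",
    image="registry.example.com/orders-service:1.4.2",
    config={"DB_URL": "jdbc:postgresql://db:5432/orders"},
)
print(rc.is_deployable())  # True
```

The point is that the compiled binary alone is not a release candidate; packaging and configuration complete it.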

Continuous Deployment

The act of deploying software is the distribution and installation of the build/release candidate. In modern distributed applications, you rarely have only one destination or instance of an application node; services are typically installed in a cluster, or to multiple destinations, for performance or availability reasons. Continuous Deployment takes the path of least resistance, deploying your changes as quickly as possible to make them available. This is especially useful in lower environments, before a lot of rigor is applied.

Continuous Delivery

Delivering software can be seen as continuous decision making. Getting your ideas to production safely requires confidence-building exercises in the form of tests and approvals, as well as safe deployment mechanisms such as a canary deployment. Continuous Delivery is the ability to deliver changes to your users in an automated fashion. It is interdisciplinary, bringing in automation practices around monitoring, verification, change management, and notifications/ChatOps. Without an artifact to deploy, though, there would be no deployment; Continuous Integration provides that artifact.

Why Use Continuous Integration?

Having artifacts ready to deploy so they can continue to be vetted (for example, promoted from development to a quality assurance environment) is prudent in today’s software engineering organizations. A software engineer’s work is iterative by nature: several artifacts may be created before a viable release candidate emerges. The integration and quality journey starts with an on-demand build that can happen multiple times a day. According to Paul Duvall, co-author of Continuous Integration, in a nutshell, CI will improve quality and reduce risk.

Continuous Integration Pipeline Example

Benefits of Continuous Integration

Having a Continuous Integration approach frees teams from the burden of manual builds and also makes builds more repeatable, consistent, and available. Having the main work product of a software engineering team (the deployable unit) ready to be deployed regularly is beneficial to the entire software delivery lifecycle and allows for consistent collaboration between engineers by avoiding common bottlenecks. 

Repeatability

Externalizing the build, instead of locking it away on one developer’s machine, puts more eyes on the build steps. The Continuous Integration configuration becomes less of an individual snowflake and more of an asset the broader team uses. A build executed by a system is repeatable, and a march towards consistency.

Consistency 

The ability to build consistently is one of the major pillars of Continuous Integration. Once builds are repeatable, efficiency and consistency follow as a team’s Continuous Integration practices mature. With consistency also comes the ability to make builds more readily available.

Availability

Availability of the build means the ability to scale to match a team’s demand for concurrent builds, and the ability to recreate any build. Modern containerized builds require more horsepower than just compiling the application binary, and distributed build systems make those builds more available. Because builds are repeatable and consistent, and a core tenet of modern software development is repeatability at every step of the process, old builds and previous versions can be made available by simply calling up a recipe from the past. With the emphasis on having a build available at any time, challenges can arise in supporting a wide swath of technology.

Challenges of Continuous Integration

Because builds and release candidates closely follow advancements in development technology, such as new languages, packaging formats, and paradigms for testing the artifact, expanding the capabilities of Continuous Integration implementations can be challenging. With the introduction of containerization technology (learn more about containers/container orchestration in our Kubernetes six-part series – this link takes you to the first part), the firepower required to build has increased alongside the velocity.

Scaling Continuous Integration Platforms

As the velocity of builds increases to match the mantra “every commit should trigger a build,” development teams can generate several builds a day per team member, if not more. The firepower required to produce a modern containerized build has grown over the years compared to traditional application packaging.

The infrastructure required to run a distributed Continuous Integration platform can be as complex as the applications it builds, because of the heavy compute requirements. Distributed build runners are one area of complexity; managing when new build nodes are spun up and spun down can depend on the platform and its users.

Keeping Up With the Technology Velocity

The adage “the only constant in technology is change” holds true. New languages, platforms, and paradigms are to be expected as technology pushes forward. Including new technologies in a heterogeneous build, or accepting new testing paradigms, can be difficult for more rigid or legacy Continuous Integration platforms that were designed around a small subset of technologies. Homegrown and legacy platforms are especially prone to rigidity, having been designed for whatever was in the enterprise at the time the platform was built.

Overstretching CI Platforms Into CD

As some of the first systems to automate parts of the development pipeline, Continuous Integration platforms face a natural temptation to keep extending that automation all the way to production. Organizations quickly realize, though, that failing a build due to failing unit tests is very different from handling multiple deployments and release strategies; a failed deployment can leave a system in a non-running state.

The rigor needed to create and test the infrastructure and application together, all while following a safe release strategy such as a canary release, requires codifying tribal knowledge about applications to determine pass/fail scenarios. The burden of adding additional applications can be substantial and can work against Continuous Integration best practices, such as keeping the build fast.

CI/CD Pipeline

Continuous Integration Best Practices

As Continuous Integration continues to evolve, certain practices lend themselves to a more mature Continuous Integration approach. A mature practice should allow for speed, agility, and simplicity, and should disseminate results in an automated fashion.

Keep the Automated Build Fast

Since builds occur throughout the day, a speedy and automated build is core to engineering efficiency. Externalizing the build, rather than tying up an engineer’s machine or local environment, lets the engineer continue to make strides and adjustments while the build runs. Simply put, the quicker the build, the quicker feedback can be acted on or a release candidate created for deployment by a Continuous Delivery solution.

Every Commit Should Be Built Automatically

For a software engineer, a commit – or merge, for that matter – in a shared repository signals moving forward in the software development lifecycle. With a commit, you are committing to start trying out what you developed. Core to Continuous Integration is treating each commit as a potential release candidate and starting to build the artifact. This allows for less lead time when the decision is made to deploy.
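The trigger logic can be sketched in a few lines. The event shape here is invented for illustration, not any particular SCM's webhook format:

```python
def should_trigger_build(event: dict) -> bool:
    """Treat every push or merge as a potential release candidate."""
    return event.get("type") in {"push", "merge"}

# Simulated stream of SCM events; only code-changing events enqueue a build.
queue = []
for event in [
    {"type": "push", "commit": "a1b2c3"},
    {"type": "comment", "commit": "a1b2c3"},   # non-code events don't build
    {"type": "merge", "commit": "d4e5f6"},
]:
    if should_trigger_build(event):
        queue.append(event["commit"])

print(queue)  # ['a1b2c3', 'd4e5f6']
```

Each enqueued commit then flows through the same build steps, so any commit can become the one that ships.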

Small Pieces

In microservices and in Continuous Integration alike, smaller pieces help reduce complexity. With smaller, functionally independent pieces such as build, testing, packaging, and publishing, identifying problems and bottlenecks becomes much easier. If any one of the functional areas changes, it can be tweaked in isolation and the corresponding steps in the Continuous Integration platform updated. With smaller pieces, if certain steps need to run on other systems, finding the line in the sand to lift or migrate functionality is also easier.

Understanding CI is Not CD

Being aware that Continuous Integration and Continuous Delivery are two separate disciplines helps avoid anti-patterns when designing CI pipelines. In keeping with the small-pieces and fast-build goals, creating multiple flows and introducing deployment-confidence complexity in a CI pipeline can be taxing. Your CI process will typically run more often than your delivery process, especially during development, as multiple builds occur before a successful release candidate is created. Continuous Delivery, on the other hand, is designed to run as a workflow with potential manual approvals and decision steps, which would be counterproductive in CI.

Support Heterogeneous Technologies

Core to organizational adoption of Continuous Integration is having the coverage to support automated builds across the technologies teams actually use. New languages, packaging formats, and paradigms are a constant in technology. Being inclusive of them is not unique to Continuous Integration, but as a core piece of engineering efficiency and the SDLC, broad support is crucial to adoption.

Be Transparent With Results

Feedback is crucial in the software development lifecycle, and the first time changes leave an engineer’s local environment is most likely through a Continuous Integration process. Disseminating build and test results across teams in a clear, concise, and timely manner helps engineering teams adjust and march towards a successful release candidate. Initial builds are expected to run more than once as iteration occurs. Implementations vary by Continuous Integration platform, especially around sharing results.


How is Continuous Integration Implemented?

Continuous Integration, like most engineering efficiency movements, is part process and part platform. When looking to implement Continuous Integration, there are a few moving parts: enabling source code events to act as triggers, and having the appropriate infrastructure and test coverage in place.

Source Code Management Integration

Having the build triggered by a commit or merge requires some sort of integration with the source code management (SCM)/version control system; the ability to listen for SCM events is key. Modern solutions take this integration a step further and store the Continuous Integration configuration in a declarative format alongside the source code in SCM, such as Git.
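A minimal sketch of a declarative pipeline definition stored next to the code might look like the following. The schema is invented for illustration (real platforms typically use their own YAML formats):

```python
import json

# A hypothetical declarative pipeline file, versioned alongside the source in Git.
pipeline_file = """
{
  "trigger": ["push", "merge"],
  "steps": [
    {"name": "compile",   "run": "mvn -B package"},
    {"name": "unit-test", "run": "mvn -B test"},
    {"name": "image",     "run": "docker build -t app ."}
  ]
}
"""

pipeline = json.loads(pipeline_file)

# The CI platform reads the definition and executes the steps in order.
print([s["name"] for s in pipeline["steps"]])  # ['compile', 'unit-test', 'image']
```

Because the definition lives in the repository, pipeline changes are reviewed, versioned, and rolled back exactly like code changes.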

Distributed Build Infrastructure

Running a build on your local machine is a resource-intensive exercise, potentially consuming most of your machine’s resources for the duration. Some sort of distributed, elastic infrastructure is needed; imagine multiple engineers and teams hitting the build server at the same time. The resource crunch occurs only during the actual compilation, build, and packaging steps; afterwards, the infrastructure can be reconstituted for something else. Modern solutions have excellent support for distributed builds.

Appropriate Test Coverage

Quality exercises should run in your Continuous Integration pipeline, though there is a temptation to overburden the build process with test coverage that is more appropriate later in the pipeline (for example, once a deployment occurs to a QA environment) or with redundant tests.

What is Continuous Integration Testing?

Continuous Integration testing should focus on the artifact, not necessarily the environment. Testing the environment, such as a load test, requires a deployment and is not well suited to Continuous Integration. Tests where feedback is needed early in the development cycle, rather than after multiple deployments, are the prudent ones to run during automated builds.

Code Quality Tests

Prior to kicking off a build/packaging step, a prudent quality gate is a code quality test. Since all of the source code is gathered for the build, inspecting it at this step makes sense. Looking out for common design, security, and syntax improvements, code quality tests are commonly found in CI pipelines.

Unit/Functional Tests

Code coverage should expand in your Continuous Integration pipeline compared to your local development environment. Since the build might represent several workstreams, it makes sense for a CI pipeline to run the unit/functional tests from each individual engineer’s work together. Lifting and shifting local IDE-based tests so they are called by a Continuous Integration platform is a solid march towards automated testing.
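For illustration, here is the kind of combined check a CI test stage might run against two engineers' units built together. Both functions are hypothetical stand-ins for separately developed workstreams:

```python
# Engineer A's change: per-item discounting.
def apply_discount(price: float, pct: float) -> float:
    return round(price * (1 - pct), 2)

# Engineer B's change: order total, which depends on A's unit.
def total(prices: list, pct: float) -> float:
    return sum(apply_discount(p, pct) for p in prices)

# In CI, both units are exercised together, catching integration breakage
# that neither engineer's local test run would see.
assert apply_discount(100.0, 0.1) == 90.0
assert total([100.0, 50.0], 0.1) == 135.0
print("combined unit tests passed")
```
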

Open Source Dependency Scans

Not all dependencies and transitive dependencies may be known until a build process has started, so your Continuous Integration process is a very prudent time to scan the dependencies pulled in during build and packaging, allowing the binary distributions to be marked as hygienic. For example, scanning a Java JAR distribution and the Docker image that houses it makes sense in a CI pipeline. All of this, and more, can be automated by Harness.
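Conceptually, a dependency scan flattens direct and transitive dependencies, then checks them against a known-vulnerable list. The dependency graph and vulnerability entry below are made up for illustration:

```python
# Hypothetical dependency graph: service -> direct deps -> transitive deps.
DEPENDENCIES = {
    "orders-service": ["spring-core:5.3.0", "log-lib:2.14.0"],
    "log-lib:2.14.0": ["json-util:1.2"],
}
KNOWN_VULNERABLE = {"log-lib:2.14.0"}  # stand-in for a real advisory feed

def resolve(root: str) -> list:
    """Walk the graph to collect direct and transitive dependencies."""
    seen, stack = [], [root]
    while stack:
        dep = stack.pop()
        for child in DEPENDENCIES.get(dep, []):
            if child not in seen:
                seen.append(child)
                stack.append(child)
    return seen

flagged = [d for d in resolve("orders-service") if d in KNOWN_VULNERABLE]
print(flagged)  # ['log-lib:2.14.0']
```

A CI step would fail or quarantine the build when `flagged` is non-empty, so only hygienic artifacts are published.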

Automate the Build and Testing Process with Harness

No matter where you are in your Continuous Integration journey, Harness has you covered. As the premier platform to handle all of your software delivery needs, Harness can provide industry-leading Continuous Integration and Continuous Delivery capabilities to your organization.

Harness Continuous Integration

Learn more about the Continuous Integration capabilities Harness possesses, and learn about the entire end-to-end software delivery platform from your friends at Harness. 

Cheers!

-Ravi
