Continuous Integration Testing ensures software quality by running automated tests during the build, packaging, and publishing phases. It includes code quality, unit, integration, and security tests to catch issues early, enhancing confidence in software deployments.
Software is a journey of iteration, and innovation comes with a great amount of trial and error. As changes march towards production, confidence in them increases. Continuous Integration is a core enabler of that iteration, providing systematic builds and publishing deployment artifacts into artifact repositories. On traditional teams, the deployable artifact marks a significant milestone: the work product is complete.
The software delivery life cycle (SDLC) is an exercise in confidence-building. With increased ownership, software development teams want to produce high-quality features and artifacts, and quality engineering steps continue to shift left to increase confidence. A natural spot to run these confidence-building steps and ensure the quality and consistency of changes is your Continuous Integration (CI) pipeline.
There is a difference between Continuous Integration and Continuous Delivery. A good line in the sand for where to run test coverage is whether a deployment is required. Tests that take the application infrastructure/environment into account, such as a soak test, are better run in the Continuous Delivery or Continuous Deployment portion of the pipeline (i.e., the CD pipeline). For more information on CD tools, visit our DevOps Tools comparisons.
Repeatability is a crucial concept in creating and building software. Having the ability to create or recreate an artifact at any point in time adds to confidence. One of the big benefits of Continuous Integration is repeatability: builds are externalized rather than locked to a developer's local machine. Another is the ability to run builds or tests in parallel with other team members. Test coverage that is run locally is an excellent candidate to move into the Continuous Integration pipeline. Just as DevOps brings development and operations teams together, testing in the CI pipeline can help QA teams shift towards the development team in a systematic way.
Continuous Integration Testing is testing that is focused on and executed during the CI process, orchestrated by CI tools (such as CircleCI, Travis CI, and open-source tools like Drone or Jenkins) that handle the build, packaging, and publishing of artifacts. A certain level of quality is expected as artifacts are built and traverse the journey to a release candidate. Testing in the CI process allows for rapid feedback and, by design, stops the progression of the artifact if the minimum quality bar is not met. Usually, CI testing focuses on the artifact prior to deployment to the first integration environment.
Continuous Integration Testing serves as quality gates around each of the trifecta of CI pillars: building, packaging, and publishing artifacts. In a Java example, this could mean running unit tests before the JAR is built, then running some sort of conformance test against the Docker image produced from that JAR in the packaging phase, and finally performing pre-flight checks around licensing or vulnerabilities before the image is published to a Docker registry. All of these measures make for better quality software.
Continuous Integration Testing allows for iteration and feedback with another level of rigor outside of a local environment. On the journey to production, local development can incur dozens of local builds/cycles before a commit, but integration of newly written or revised features into the application/service starts with the build. Passing Continuous Integration Tests means not only that the artifact can be built consistently, but also that a level of quality around the artifact has been achieved before publishing.
A failing build is much less severe than a failing deployment. There is an expectation that a build with the inherited quality checks might not pass on the first go, because this is the first time factors external to an engineer's machine are brought together. Having bugs found and fixed in CI testing is perfectly normal. These tests are orchestrated in a pipeline and fall under two categories: on process and off process testing.
The Continuous Integration pipeline is the orchestrator of Continuous Integration Testing. The tasks accomplished in CI are the trifecta of building, packaging, and publishing the artifacts created during the CI process.
Testing in Continuous Integration pipelines can be split between processes that are directly invoked and controlled by the CI process and those that are external to it. This is similar to a process running entirely on a developer's local machine vs one that has to reach out to an external service. If you consider your Integrated Development Environment (IDE) the nexus of your local build, some items run inside your IDE, like unit tests, while others interact with third-party tools outside your IDE, like container scanners.
Tests that can be executed in or on the build agent/runner represent on process testing. On process testing is typically accomplished by executing language-specific test steps that the language-specific build process understands. The infrastructure needed to run the tests is included as dependencies; for example, JUnit dependencies included in the Maven POM. Though for sanity and security, those dependencies should be excluded from the final artifact before publishing (in Maven, for example, by declaring them with the test scope).
With the big push to shift items left towards the engineer, such as the DevSecOps movement, there has been a bloom in tools that disseminate and unlock crucial data and feedback for development teams. Off process testing focuses on non-functional requirements that unit/functional tests (which are run on process) would not cover. A hallmark of off process testing is submitting data, code, or an artifact to a third party outside the Continuous Integration process. For example, when scanning a container, the image is introspected by an external process (e.g. container scanning software).
Whether a test is on or off process, test coverage is designed to instill confidence in the artifacts produced. There are several testing methodologies worth implementing in a Continuous Integration pipeline.
Walking through the CI process from its inception (i.e., it is triggered by a code check-in) to a published artifact, there are several types of CI tests to run during the automated build.
Depending on where the official CI process starts, whether a quality gate is in place before code is checked in or merged or only after the fact, code quality tests can be part of the CI process. Code quality tools, such as SonarQube and Checkmarx, focus on static code analysis and are prudent during code changes. Static analysis does not run the code or take into account all of the infrastructure and environment variables that power running software, but it can infer quality from the code itself. Looking out for dead blocks (code that is never called or reached), syntax quality/standards, and potential security issues are all part of code quality tests. Code quality does not, however, cover the actual functionality of the code, which is where unit testing comes in.
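To make that concrete, here is a contrived Java sketch (the class and member names are hypothetical) showing the kinds of findings a static analysis tool such as SonarQube would typically surface: a hardcoded credential, a condition that can never be true, and a dead method that is never called.

```java
// Contrived example of issues a static analysis tool would typically flag.
public class DiscountCalculator {

    // Hardcoded credential: commonly reported as a potential security issue.
    private static final String DB_PASSWORD = "changeme123";

    public double applyDiscount(double price, boolean isMember) {
        if (isMember && !isMember) {
            // This condition can never be true, so the block is dead code.
            return 0;
        }
        return price * 0.9;
    }

    // Never called from anywhere: flagged as an unused ("dead") method.
    private double legacyDiscount(double price) {
        return price * 0.85;
    }
}
```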
The quintessential tests run when new features are written or updated are unit tests. Unit tests focus on the blocks/methods of code that changed; if building a new from-the-ground-up project, they would cover the functionality of the application. Unit tests are typically on process testing where mock objects are created and assertions are verified, e.g. JUnit in the Java world and Mocha in the NPM world. Unit tests are designed to be granular and run as a suite. For example, if you are writing a calculator application and adding a new feature for dividing whole numbers, a unit test might verify how dividing by zero is handled or the expected result of executing that division. If the use case requires calls to external methods/parties, unit test coverage would not suffice and integration testing would take over.
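Sticking with the calculator example, the JUnit 5 sketch below (the Calculator class is hypothetical) shows what that unit-level coverage might look like: one test for the expected result of whole-number division and one for the divide-by-zero edge case. Because JUnit runs inside the build tool (e.g. mvn test), these tests execute on process in the CI pipeline.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical calculator with the new whole-number division feature.
class Calculator {
    int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new ArithmeticException("Cannot divide by zero");
        }
        return dividend / divisor;
    }
}

// Unit tests covering the happy path and the divide-by-zero edge case.
class CalculatorDivisionTest {

    private final Calculator calculator = new Calculator();

    @Test
    void dividesWholeNumbers() {
        assertEquals(5, calculator.divide(10, 2));
    }

    @Test
    void divisionByZeroIsRejected() {
        assertThrows(ArithmeticException.class, () -> calculator.divide(10, 0));
    }
}
```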
As multiple pieces of functionality are built, rarely do they live in a vacuum. The calculator application has more than just division, and division can be complicated (e.g. decimals, precision, etc). Integration testing can take a wide brush depending on the boundaries set for the testing; in the context of Continuous Integration, it focuses on testing across modules of the application. Back to the calculator example, an integration test would be dividing and multiplying at the same time (remember your order of operations?). There is a lot of overlap in modern unit and integration testing tools; JUnit can be used for both, since it can follow/invoke method calls that are chained together. Integration testing continues the confidence continuum as the new features/code changes are seen working together.
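Continuing the calculator example, the JUnit sketch below (the module classes are hypothetical) shows an integration-style test that exercises the division and multiplication modules together, evaluating 8 / 4 * 2 left to right the way the calculator would.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical modules that would normally live in separate packages of the calculator.
class DivisionModule {
    double divide(double dividend, double divisor) {
        return dividend / divisor;
    }
}

class MultiplicationModule {
    double multiply(double a, double b) {
        return a * b;
    }
}

// Integration-style test: exercises both modules together, left to right,
// the way the calculator would evaluate 8 / 4 * 2.
class CalculatorIntegrationTest {

    @Test
    void divisionAndMultiplicationWorkTogether() {
        DivisionModule division = new DivisionModule();
        MultiplicationModule multiplication = new MultiplicationModule();

        double quotient = division.divide(8, 4);               // 8 / 4 = 2
        double result = multiplication.multiply(quotient, 2);  // 2 * 2 = 4

        assertEquals(4, result, 0.0001);
    }
}
```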
The adage that software ages more like milk than wine is true. Especially in modern software, with its reliance on third-party open source, the engineering team is rarely the author of 100% of the bits that go into the final distribution. With the “fog of development,” it would be unreasonable for an engineer to know all the transitive dependencies (dependencies that call other dependencies). The purpose of security/license testing is to find the exposure and risk of using certain packages. Security/license analysis tools, such as Blackduck, Snyk, and StackHawk, have different methods of introspection. Some tools require the finished artifact, e.g. a Docker image, to run; other tools integrate at the code level, introspecting build files.
An added benefit of running tests in a CI pipeline is that they run more frequently, enabling continuous testing. If you are starting from scratch or are early in your Continuous Integration journey and integrating test coverage, moving your first set of tests into a CI pipeline is a prudent move.
Ironically, part of the Continuous Integration learning curve is getting the required dependencies in place for builds, tests, and packaging to run. For example, if you are writing a Java application that will be packaged into a Docker image, make sure the CI platform can support the Java build (having a JDK and Maven/Gradle) and the Docker dependencies (a Docker runtime).
How does the testing influence the build? If one of the test suites does not pass or finds less-than-acceptable results, do the build, packaging, and publishing steps move forward? Convention says they should not. That said, on a first pass into CI testing, simply lifting what was created locally into a CI pipeline is a reasonable approach to build confidence in the CI process.
Sample Harness Continuous Integration Kubernetes Pipeline calling Maven Test.
Above: a Java-centric example where the tests are called by the build tool, in this case the Maven test goal.
Below: NodeJS example, running npm test.
Sample Harness Continuous Integration Kubernetes Pipeline calling npm test.
No matter if this is your first test headed to your CI platform or your thousandth, Harness has you covered.
The Harness Continuous Integration platform allows you to get up and running with test automation quickly and also helps you home in on, and refine, your testing strategy. The first hurdle to overcome is the “CI dependency” problem (common with Jenkins, a Continuous Integration server/tool). With a modern approach, dependencies are resolved in a Docker fashion: simply declare what is needed in a simple configuration and Harness Continuous Integration will resolve it.
A problem with running tests at scale is a loss of visibility into coverage. This drives up execution time and can create duplicate test coverage. Having an appropriate measure of test coverage and execution is critical at scale.
Visualizing pipeline execution times
Software development practices have increased in velocity. With microservices and agile development, CI has become even more important as smaller, more frequent pieces are built. No matter if this is your first time leveraging Continuous Integration to automate test execution or you have been doing this for decades, the Harness Platform can help you achieve your goals quickly and consistently.
As new technologies and testing approaches come about, having a robust and flexible platform that allows you to add these into a pipeline and measure the impact builds confidence and assists in agility. If you would like to learn more, check out our CI Webinar for Developers, and make sure to sign up for the Harness Platform today!