Jenkins is used by millions of developers and is widely regarded as the world’s most popular CI server. But what is Jenkins and where did it come from? Jenkins is an open-source automation server that allows companies to accelerate their software development cycle.
It was created in 2004, under the name Hudson, by a Java developer named Kohsuke Kawaguchi, who worked for Sun Microsystems. He was tired of his code breaking builds and wanted a way to test his code before committing it to a repository.
Hudson became a hit at Sun, so Kawaguchi open-sourced it for other developers to use. In 2011, after a dispute between Oracle and the open-source community of Hudson, it was forked with a new name: Jenkins.
With Jenkins and its focus on automation, developers turned to the “as code” movement to make their software development cycles easier and more efficient.
What is “As Code?”
Within DevOps, there has been a shift to “[insert software term] as code” — but what does that mean? The “as code” terminology simply means automation. Creating configuration code for [some software term] and putting it into a centralized source code repository (GitHub, GitLab, Bitbucket, etc.) allows users to meet automation goals, which include reducing manual effort, increasing efficiency, and enhancing software quality.
The “as code” movement is not new. However, it has seen a rise in popularity within the past couple of years because hot technologies are starting to embrace this methodology. For example, as companies focus on modernizing their applications, they turn to Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications.
Kubernetes uses configuration files and manages applications via code. Most “as code” technologies use YAML or JSON as their configuration language. This standardization makes it easier for users to use the same language and methodology for all of their tools. When users can manipulate their software delivery environment all through code, they get excited and the momentum for “as code” grows.
Infrastructure as Code
Infrastructure as code (IaC) is the managing and provisioning of infrastructure in configuration files rather than through a manual process. This ensures the same infrastructure is spun up every time this code is applied and codifies the process of creating infrastructure.
Putting infrastructure into source code, and in turn a source repository, allows for easy customization across different environments, as each configuration can be placed in a different branch. Also, IaC doesn’t demand a general-purpose programming language, as the main focus is on configuration.
Build as Code
Just like infrastructure as code, build as code is the approach to define builds through configuration files that are checked into some sort of source control like Git.
Pipelines as Code
Pipelines as code, following the structure of the other “[insert software term] as code” practices, is the practice of defining deployment pipelines through code. This allows users to create builds, run tests, and deploy code with an audit trail, because everything is stored in a central repository. To build these pipelines as code, users can take a declarative approach using YAML or a tool-specific DSL such as Jenkins’s Groovy-based syntax.
These pipeline-as-code files lay out the specific stages, and the order in which they run, for a pipeline to complete successfully. Changes to the pipeline can be tested in different branches because the file is versioned. This methodology, pipeline as code, is an industry best practice for creating Continuous Integration pipelines.
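As a sketch, a minimal declarative Jenkinsfile might lay out three stages that run in order (the stage names and shell commands here are placeholders, not from any particular project):

```groovy
// Minimal declarative Jenkinsfile sketch; commands are hypothetical placeholders.
pipeline {
    agent any          // run on any available agent
    stages {
        stage('Build') {
            steps { sh './build.sh' }
        }
        stage('Test') {
            steps { sh './test.sh' }
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }
        }
    }
}
```

Because this file is versioned alongside the application, a change to the stage order or steps can be reviewed and tested on a branch like any other code change.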
Where Did Pipeline as Code Come From?
Pipeline as code stems from the larger infrastructure as code movement but has been dominated by Jenkins. Jenkins describes pipeline as code as a set of features that allow Jenkins users to define pipelined job processes with code, stored and versioned in a source repository.
Benefits of Pipeline as Code
Because pipeline as code addresses so many pain points that developers face in the software development process, there are many benefits. Pipeline as code puts pipelines under version control, which makes changes trackable and rollbacks easy.
This provides an audit trail for developers, which is useful for debugging and transparency. If at any point the team decides to roll back to a previous release, or something breaks, having the pipeline in version control makes it easy to revert.
Also, since the source code for CI pipelines and the application code are stored in the same repository, it provides ease of access as all the code is in one spot. Having the pipeline code in a shared repository allows for collaboration as all team members have access to the code base.
Developers can make changes without additional permissions, and because there is a code review process before changes are merged back into the main branch, potential breaks in the pipeline are caught early.
Drawbacks of Pipeline as Code
Like all things, pipeline as code does have its cons. A major disadvantage to pipeline as code is the steep learning curve involved when writing a deployment pipeline. It takes time to learn how to properly write one with the correct syntax and modules.
To add another layer of complexity, if someone else had already written the code, it could take even more time to completely understand it. Thankfully, there are a lot of resources out there and it’s easy to reuse parts of a previously created pipeline.
Another drawback worth highlighting is that the code itself can be difficult to manage. These pipelines can grow to over 1,000 lines of code, and updating and maintaining them can become a headache.
How to Get Started Using Pipeline as Code
Many have relied on Jenkins to create pipelines as code. To create these Continuous Delivery pipelines, developers had to learn to write finicky Jenkinsfiles that could be checked into a source code repo and perform the steps to deploy an application.
As Jenkins shifted to the pipeline as code model, the creation of the Workflow plugin (later renamed the Pipeline plugin) was a major breakthrough. It introduced domain-specific language (DSL) steps that translated into simpler pipelines.
Scripted vs. Declarative Pipelines
A key distinction between scripted and declarative pipelines is how they look. Below is an example of each, to provide an overview before we dive into the specifics.
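First, a scripted pipeline. This is an illustrative sketch, not a production file: the Artifactory server ID, build commands, and upload spec are all hypothetical, and the `Artifactory.server` variable comes from the JFrog Artifactory plugin.

```groovy
// Scripted pipeline sketch; server ID and commands are hypothetical.
node {                                       // the first element is a node block
    def appVersion = '1.0.0'                 // variables are plain Groovy

    // Artifactory configuration is defined through a variable
    // (requires the JFrog Artifactory plugin and a server configured in Jenkins)
    def server = Artifactory.server 'my-artifactory'

    try {
        stage('Build') {
            sh './gradlew assemble'
        }
        stage('Test') {
            sh './gradlew test'
        }
        stage('Publish') {
            def uploadSpec = '''{
                "files": [{ "pattern": "build/libs/*.jar", "target": "libs-release-local/" }]
            }'''
            server.upload spec: uploadSpec
        }
    } catch (err) {                          // errors are managed with try/catch
        currentBuild.result = 'FAILURE'
        throw err
    }
}
```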
Some distinctive things to note in the scripted pipeline: the first element is node, variables are defined in the Groovy language, errors are managed through a try/catch clause, and the Artifactory configuration is defined through variables.
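A roughly equivalent declarative pipeline might look like the following. Again, this is a sketch with hypothetical values; the `rtUpload` step is provided by the JFrog Artifactory plugin (check the plugin documentation for the exact step names your version supports).

```groovy
// Declarative pipeline sketch; server ID and commands are hypothetical.
pipeline {                                   // the first element is pipeline
    agent any
    environment {
        APP_VERSION = '1.0.0'                // variables live in the environment section
    }
    stages {
        stage('Build') {
            steps { sh './gradlew assemble' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
        stage('Publish') {
            steps {
                // Artifactory is configured through plugin-provided steps,
                // not variables (assumes the JFrog Artifactory plugin)
                rtUpload serverId: 'my-artifactory', spec: '''{
                    "files": [{ "pattern": "build/libs/*.jar", "target": "libs-release-local/" }]
                }'''
            }
        }
    }
    post {
        failure {
            // No Groovy try/catch here; failures are handled in post blocks
            echo "Build ${env.APP_VERSION} failed"
        }
    }
}
```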
In the declarative pipeline, by contrast, the first element is pipeline and variables are defined in the environment section. Since arbitrary Groovy syntax is not allowed, the try/catch structure of error handling is not available; instead, plugins provide steps to configure tools like Artifactory.
Scripted pipelines are the traditional way of writing pipelines. They are written in full Groovy syntax, which helps developers create advanced and complex pipelines. To create a scripted pipeline, the user must start with a node block. A node block executes the core work of the pipeline, and within it, there are stages.
A stage defines a conceptually distinct subset of tasks that will be performed through the entire pipeline. For example, stages can include Build, Test, and Deploy. Although there are no limits as to what a scripted pipeline can do programmatically, there are some major disadvantages to using one. There is no formal structure or flow for a scripted pipeline, the only limitation is Groovy syntax.
This can lead to wonky pipeline code that is not easy to interpret or maintain. Also, because of this lack of structure and freedom to interpret Groovy, not everything done in a scripted pipeline will render well in the Jenkins UI.
Declarative pipelines, on the other hand, are a newer feature that makes the pipeline code easier to read and write. Declarative syntax is more limited and strict, as it doesn’t allow developers to inject code. To create a declarative pipeline, the user must begin with the word pipeline. Then, it will break down into individual stages that can contain multiple steps.
Even though declarative pipelines are less suitable for pipelines with complex logic, have restrictive syntax, and lack compatibility with old plugins, they are the modern way of developing pipelines. These pipelines are more structured, have simpler syntax, have an integration with the Blue Ocean interface, and allow users to restart from a specific stage.
In creating a pipeline as code model for Jenkins, a Jenkinsfile is key. A Jenkinsfile is a text file that holds the configurations for a Jenkins Pipeline. This file is checked into a source code manager and the relative path to the file should be defined within Jenkins itself.
Without this file, Jenkins doesn’t have the ability to automatically manage and execute jobs. Inside this file, there is a pipeline script that highlights the steps to execute jobs. Jenkinsfiles are usually written using the Groovy DSL and can be created through a text editor or through the configuration page on the Jenkins instance.
Managing Pipeline as Code with Harness
Pipelines as code simplify software development. With Jenkins, the process is automated, but developers still have to suffer through the arduous process of creating this pipeline – but what if they didn’t have to? What if there was a platform that required no scripting – a tool that made it easy to perform CI/CD?
Meet the Harness platform. Harness reduces the need for developers to create these pipelines – indeed, it creates the pipelines by itself! Check out the platform and request a demo today.