Over the past six months, we've added cloud provider support for Azure and Pivotal Cloud Foundry. OpenShift is the latest addition to the Harness stack, and now Red Hat customers can leverage Harness Continuous Delivery as-a-Service.
What is OpenShift?
Red Hat® OpenShift® is a comprehensive enterprise-grade application platform, built for containers with Kubernetes.
For customers who have invested in Red Hat Enterprise Linux, it’s an easy way to run and manage their container-based applications. You can think of Red Hat OpenShift as an alternative to Pivotal Cloud Foundry: an abstraction layer between the application and the underlying infrastructure or cloud provider.
The standard Red Hat OpenShift architecture looks like this:
Where Does Harness Continuous Delivery Fit?
Most enterprises run multi-cloud and multi-stack applications. Harness is to Continuous Delivery (CD) what Jenkins is to Continuous Integration (CI): we pick up artifacts from the CI process and help customers automate artifact deployment across their environments, all the way to production.
One of our large Fortune 500 customers requested OpenShift support so they could enable more development teams with Harness Continuous Delivery in addition to the teams that leverage Helm and pure Kubernetes orchestration. Harness also has out-of-the-box integrations with 38+ DevOps tools including CI, Artifact Repository, APM, Log and Secrets management.
I Thought OpenShift Already Had Pipelines…
Underneath the covers, OpenShift Pipelines is basically Jenkins Pipelines:
OpenShift Pipelines give you control over building, deploying, and promoting your applications on OpenShift. Using a combination of the Jenkins Pipeline Build Strategy, Jenkinsfiles, and the OpenShift Domain Specific Language (DSL) (provided by the OpenShift Jenkins Client Plug-in), you can create advanced build, test, deploy, and promote pipelines for any scenario.
Jenkins pipelines are typically constructed by customers writing their own deployment shell scripts, known as “Jobs,” which are then stitched together in sequence to represent the stages of a deployment pipeline.
Instead of customers writing deployment pipelines per app/service with shell scripts and jobs, Harness automates this process using its templates (aka Smart Automation).
Think of Jenkins pipelines as hard-coded, brittle pipelines that require hundreds of community plugins, whereas Harness provides dynamic, flexible pipelines with out-of-the-box supported integrations. For a more detailed Jenkins vs. Harness comparison, read here.
I Thought OpenShift Already Had Blue/Green/Canary Deployments…
Yes, OpenShift enables both Blue/Green and Canary deployments by directing traffic between Kubernetes pods. However, Canary deployments are treated as standard Rolling Deployments, so no validation or canary analysis actually governs each deployment phase.
Harness offers a new concept called Continuous Verification that automatically verifies deployments of all types (basic, multi-service, blue/green, canary, rolling, …) using AI and unsupervised machine learning.
Harness integrates with all your APM tools (AppDynamics, New Relic, Dynatrace, …) and Log Analytics tools (Splunk, Sumo Logic, ELK, …) and can automatically a) verify performance and quality, and b) roll back to the previous working version should anomalies or regressions be identified.
For example, you can build a multi-stage canary deployment in 4 minutes with Harness that integrates with your existing monitoring ecosystem.
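The core idea behind verifying a canary phase can be illustrated with a deliberately simplified sketch. This is not Harness’s actual unsupervised machine learning — the metric, the mean comparison, and the 2x tolerance are all assumptions made purely for illustration: compare the canary’s behavior against the baseline, and roll back if it regresses.

```python
# Simplified illustration of canary verification: compare a canary's
# error rate against the baseline and decide whether to promote or roll back.
# The 2x tolerance is an arbitrary assumption for this sketch, not Harness logic.

def verify_canary(baseline_error_rates, canary_error_rates, tolerance=2.0):
    """Return 'promote' if the canary's mean error rate stays within
    `tolerance` times the baseline's mean, otherwise 'rollback'."""
    baseline_mean = sum(baseline_error_rates) / len(baseline_error_rates)
    canary_mean = sum(canary_error_rates) / len(canary_error_rates)
    if baseline_mean == 0:
        return "promote" if canary_mean == 0 else "rollback"
    return "promote" if canary_mean <= tolerance * baseline_mean else "rollback"

# A healthy canary is promoted; a regressed one triggers rollback.
print(verify_canary([0.01, 0.02, 0.01], [0.015, 0.02, 0.01]))  # promote
print(verify_canary([0.01, 0.02, 0.01], [0.10, 0.12, 0.09]))   # rollback
```

In the real product this decision is driven by time-series and log anomaly detection across your APM and log tools rather than a single hand-picked metric and threshold.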
Deployment Pipelines In Minutes With OpenShift Support
To build an OpenShift deployment pipeline, simply follow these five steps:
1. Create a new Service
Setup > Application > Create Service
Simply add a Harness Connector for your Artifact Repository and link the artifact source to each service you create in Harness.
Harness will then automatically pick up every new build from your repo and version control it. Harness is able to automatically re-deploy the last working version should any deployments fail in the future.
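The version-tracking idea behind that automatic rollback can be sketched as follows. The `Build` structure and status values here are illustrative assumptions, not the Harness data model: keep every build that was picked up, and on a failed deployment fall back to the most recent successful one.

```python
# Sketch of build version tracking: record every build picked up from the
# artifact repo, and on a failed deployment, fall back to the most recent
# successfully deployed version. Structure and statuses are illustrative only.

from dataclasses import dataclass

@dataclass
class Build:
    version: str
    deploy_status: str  # "succeeded" or "failed" in this sketch

def last_working_version(builds):
    """Return the most recently deployed successful build's version, or None."""
    for build in reversed(builds):
        if build.deploy_status == "succeeded":
            return build.version
    return None

history = [
    Build("1.0.3", "succeeded"),
    Build("1.0.4", "succeeded"),
    Build("1.0.5", "failed"),
]
print(last_working_version(history))  # 1.0.4 -> candidate for auto-rollback
```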
2. Create a new Cloud Provider & Environment
Setup > Cloud Providers > Add
Next, we need to set up our OpenShift Cloud Provider by selecting “Kubernetes Cluster” as the type and entering the URL, credentials, and Kubernetes Service Account Token details (see below).
This allows Harness to query the OpenShift Kubernetes Engine and retrieve all cluster configuration required for deployment.
Once you have an OpenShift Cloud Provider configured, you can then create Environments in Harness based on the OpenShift Kubernetes clusters that already exist in your Cloud Provider account:
3. Create a new Deployment Workflow
Setup > Application > Create Workflow
To deploy Services to Environments, you need Workflows. Harness workflows come preconfigured with several release strategies (blue/green, canary, rolling, …) and can be dynamic, so you can parameterize all inputs for your deployment logic. It’s possible to have a simple deployment workflow template for many services and environments instead of needing a workflow for each service/environment combination.
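The templating idea can be sketched like this — one parameterized canary template reused across every service/environment pair. The field names below are illustrative assumptions, not the Harness workflow schema:

```python
# Sketch of workflow templating: a single parameterized canary template
# rendered for any service/environment combination, instead of hand-writing
# one workflow per pair. Field names are illustrative, not the Harness schema.

def render_canary_workflow(service, environment, canary_percent=25):
    return {
        "name": f"{service}-canary-{environment}",
        "strategy": "canary",
        "phases": [
            {"name": "canary", "traffic_percent": canary_percent, "verify": True},
            {"name": "primary", "traffic_percent": 100, "verify": True},
        ],
        "service": service,
        "environment": environment,
    }

# The same template covers any service/environment pair:
wf = render_canary_workflow("order-service", "production")
print(wf["name"])  # order-service-canary-production
```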
Below are a few screenshots that show how simple it can be to create a canary deployment from scratch.
After deploying your container app, you can pick your verification strategy in a Harness workflow. This is where our unsupervised machine learning analyzes the time-series metrics and unstructured data from your Application Performance Monitoring (APM) and Log Analytics tools.
4. Create a new Pipeline
Setup > Application > Create Pipeline
Lastly, you can attach your deployment workflows to a given Pipeline. Most Harness customers have a pipeline per application with several stages that represent the environments (dev, QA, staging, production, …) that their service artifacts must be promoted across.
Here is an example of a simple 4-stage pipeline that shows how Harness can promote code across Dev, QA, and Production environments with a manual approval in between:
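The promotion flow in that example can be sketched as a sequence of stages with approval gates. The stage names and the execution model below are a simplification for illustration, not Harness’s actual pipeline engine:

```python
# Sketch of pipeline promotion: stages run in order, and an approval stage
# pauses execution until a human approves. This is a simplified model,
# not how the Harness pipeline engine is implemented.

def run_pipeline(stages, approvals):
    """Run stages in order; pause at any unapproved approval gate."""
    completed = []
    for stage in stages:
        if stage.startswith("approve:") and not approvals.get(stage, False):
            return completed, f"paused at {stage}"
        completed.append(stage)
    return completed, "finished"

stages = ["deploy:dev", "deploy:qa", "approve:production", "deploy:production"]

# Without approval, the pipeline pauses before production...
print(run_pipeline(stages, {}))
# ...and proceeds once a human approves.
print(run_pipeline(stages, {"approve:production": True}))
```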
5. Create a new Trigger
Setup > Application > Create Trigger
Finally, to execute our OpenShift Pipeline, we can create a Trigger that will fire on a given condition or event (e.g. new build, time of day, webhook).
The following Webhook Trigger will execute my OpenShift Pipeline.
Details of the Webhook are automatically generated by Harness:
Harness also provides the curl command if you want to parameterize any of the trigger or pipeline inputs.
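To make the trigger mechanics concrete, here is a sketch of assembling such a webhook call with parameterized inputs. The URL, payload keys, and service name below are placeholders invented for this example — the real values are generated by Harness for your account, as shown in the trigger details above:

```python
# Sketch of a parameterized webhook trigger invocation. The URL, payload
# fields, and service name are placeholders for illustration; the actual
# webhook URL and payload format are generated by Harness.

import json

def build_webhook_call(url, application, artifact_version):
    """Build an example JSON payload and the equivalent curl command."""
    payload = {
        "application": application,
        "artifacts": [{"service": "openshift-app",  # placeholder service name
                       "buildNumber": artifact_version}],
    }
    curl = (f"curl -X POST -H 'Content-Type: application/json' "
            f"-d '{json.dumps(payload)}' {url}")
    return payload, curl

payload, curl_cmd = build_webhook_call(
    "https://app.example.com/api/webhooks/PLACEHOLDER",  # placeholder URL
    "openshift-demo", "1.0.4")
print(curl_cmd)
```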
That’s it! The above process should take you no more than 5-10 minutes.
Sign up for your free trial of Harness Continuous Delivery today.