Red Hat® OpenShift® is a comprehensive, enterprise-grade application platform built for containers with Kubernetes. For customers who have invested in Red Hat Enterprise Linux, it's an easy way to run and manage their container-based applications. You can think of Red Hat OpenShift as an alternative to Pivotal Cloud Foundry: an abstraction layer between the application and the underlying infrastructure or cloud provider.
The standard Red Hat OpenShift architecture looks like this:
Harness fits into the Application Lifecycle Management (CI/CD) green box above, focused specifically on Continuous Delivery.
One of our large Fortune 500 customers requested OpenShift support so they could enable more development teams with Harness Continuous Delivery in addition to the teams that leverage Helm and pure Kubernetes orchestration.
Under the covers, OpenShift Pipelines are basically Jenkins Pipelines. As the OpenShift documentation puts it:
"OpenShift Pipelines give you control over building, deploying, and promoting your applications on OpenShift. Using a combination of the Jenkins Pipeline Build Strategy, Jenkinsfiles, and the OpenShift Domain Specific Language (DSL) (provided by the OpenShift Jenkins Client Plug-in), you can create advanced build, test, deploy, and promote pipelines for any scenario."
Jenkins Pipelines are typically constructed by customers writing their own deployment shell scripts, known as "Jobs," which are then stitched together in sequence to form a deployment pipeline of stages.
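For illustration, here is a minimal sketch of that model on OpenShift: a BuildConfig using the Jenkins Pipeline build strategy, with an inline Jenkinsfile whose stages shell out to scripts. The names, scripts, and stages below are placeholders, not a reference implementation:

```sh
# Minimal sketch (placeholder names/scripts): an OpenShift BuildConfig
# using the Jenkins Pipeline build strategy. The inline Jenkinsfile
# stitches shell-script "jobs" into sequential stages.
oc apply -f - <<'EOF'
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build')  { steps { sh './scripts/build.sh' } }
            stage('Test')   { steps { sh './scripts/test.sh' } }
            stage('Deploy') { steps { sh './scripts/deploy.sh dev' } }
          }
        }
EOF

# Kick off a run of the pipeline
oc start-build myapp-pipeline
```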
Instead of customers writing deployment pipelines per app/service with shell scripts and jobs, Harness automates this process using its templates (aka Smart Automation).
Think of Jenkins Pipelines as hard-coded, brittle pipelines that require hundreds of community plugins, whereas Harness pipelines are dynamic and flexible, with supported integrations out of the box.
Click here for a more detailed Jenkins vs. Harness comparison.
Yes, OpenShift enables both Blue-Green and Canary deployments by directing traffic between Kubernetes pods. However, canary deployments are treated as standard rolling deployments, so no verification or actual canary analysis governs each phase of the deployment.
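To make the mechanism concrete, here is a rough sketch using the oc CLI; the route and service names below are hypothetical. OpenShift shifts traffic by weighting the backends of a single route:

```sh
# Hypothetical route ("myapp") and services ("blue"/"green"). Traffic is
# split by backend weights on the route, which is the basis for
# blue/green and canary-style rollouts.
oc set route-backends myapp blue=90 green=10   # send 10% of traffic to the new version
oc set route-backends myapp blue=0 green=100   # promote green once it looks healthy
```

The catch, as noted above, is that nothing verifies the canary before you shift more traffic; that judgment is left to the operator.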
Harness offers a new concept called Continuous Verification that automatically verifies all deployment types (basic, multi-service, blue/green, canary, rolling, …) using AI and unsupervised machine learning.
Harness integrates with all your APM tools (AppDynamics, New Relic, Dynatrace, …) and Log Analytics tools (Splunk, Sumo Logic, ELK, …) and can automatically (a) verify performance and quality, and (b) roll back to the previous working version should anomalies or regressions be identified.
To build an OpenShift deployment pipeline, simply follow these five steps:
Step 1: Setup > Application > Create Service
Simply add a Harness Connector for your Artifact Repository and link the artifact source to each service you create in Harness.
Step 2: Setup > Cloud Providers > Add
Next, we need to set up our OpenShift Cloud Provider by selecting "Kubernetes Cluster" as the type and entering the URL, credentials, and Kubernetes Service Account Token details (see below).
This allows Harness to query the OpenShift Kubernetes Engine and retrieve all cluster configuration required for deployment.
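If you don't yet have a Service Account Token, a minimal sketch of preparing one with the oc CLI looks like this. The service account name and namespace are placeholders, and cluster-admin is shown only for brevity; use the narrowest role your security team allows:

```sh
# Placeholder names: create a service account for Harness and grant it
# rights to deploy (a narrower role than cluster-admin is preferable).
oc create serviceaccount harness-sa -n harness
oc adm policy add-cluster-role-to-user cluster-admin -z harness-sa -n harness

# The API server URL and this token go into the Harness Cloud Provider form
oc whoami --show-server
oc sa get-token harness-sa -n harness
```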
Once you have an OpenShift Cloud Provider configured, you can then create Environments in Harness based on the OpenShift Kubernetes clusters that already exist in your Cloud Provider account:
Step 3: Setup > Application > Create Workflow
To deploy Services to Environments, you need Workflows. Harness Workflows come preconfigured with several release strategies (blue/green, canary, rolling, …) and can be dynamic, so you can parameterize all of the inputs to your deployment logic. That makes it possible to have one simple workflow template for many services and environments instead of needing a workflow for each service/environment combination, as the sketch below illustrates.
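As a plain-shell analogy of that templating (the service, environment, registry, and build number below are all made up; in Harness these would be workflow variables rather than shell variables):

```sh
# One templated deployment, parameterized by service and environment,
# instead of a hand-written script per service/environment combination.
SERVICE=orders ENVIRONMENT=qa BUILD_NUMBER=123

oc project "${ENVIRONMENT}"
oc set image "deployment/${SERVICE}" "${SERVICE}=registry.example.com/${SERVICE}:${BUILD_NUMBER}"
oc rollout status "deployment/${SERVICE}"
```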
Below are a few screenshots that show how simple it can be to create a canary deployment from scratch.
After deploying your container app, you can pick your verification strategy in a Harness workflow. This is where our unsupervised machine learning analyzes the time-series metrics and unstructured data from your Application Performance Monitoring (APM) and Log Analytics tools.
Step 4: Setup > Application > Create Pipeline
Next, you can attach your deployment workflows to a given Pipeline. Most Harness customers have a pipeline per application, with several stages representing the environments (dev, QA, staging, production, etc.) that their service artifacts must be promoted across.
Here is an example of a simple 4-stage pipeline that shows how Harness can promote code across Dev, QA, and Production environments with a manual approval in between:
Step 5: Setup > Application > Create Trigger
Finally, to execute our OpenShift Pipeline, we can create a Trigger that will fire on a given condition or event (e.g. new build, time of day, webhook).
The following Webhook Trigger will execute my OpenShift Pipeline.
Details of the Webhook are automatically generated by Harness:
Harness also provides the curl command if you want to parameterize any of the trigger or pipeline inputs.
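The command looks roughly like this; the URL, token, and payload fields are placeholders, so copy the real values Harness generates on the Trigger details screen:

```sh
# Placeholder URL and payload: Harness generates the real values for you.
curl -X POST 'https://app.harness.io/api/webhooks/<webhook-token>?accountId=<account-id>' \
  -H 'Content-Type: application/json' \
  -d '{
        "application": "<app-id>",
        "artifacts": [
          { "service": "myservice", "buildNumber": "123" }
        ]
      }'
```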
That's it! The above process should take you no more than 5-10 minutes.
Sign up for your free trial of Harness Continuous Delivery today.
Cheers,
Steve.