May 2, 2019

Helm Support for Harness Continuous Delivery


Helm has become a widely popular way to package, distribute, and manage Kubernetes applications. At Harness, we see an increasing number of our customers already using Helm, and their uses are varied. In this post, we talk about how Harness provides a first-class Continuous Delivery solution for microservices and applications packaged as Helm charts, and we go over the enhancements we've made to provide best-of-breed Helm support.

Canary and Blue-Green Deployments for Helm Charts

While Helm is good for packaging multiple Kubernetes resources into an application, it provides limited functionality for deployment and rollback. The Helm project does not aim to solve Continuous Delivery, and implementing advanced deployment strategies like canary and blue-green is not straightforward. In many cases, we see people using clever templating tricks to achieve them, which typically entails manual work running the helm CLI with different values files.

In the Harness platform, we have built first-class support for canary and blue-green deployment strategies. We also do a deployment status check and, if needed, an automatic rollback. Our implementation is very flexible: any set of Kubernetes resources can be deployed as part of a service. With our Helm integration, we aimed to bring the same feature set to customers who already use Helm charts today.

The following figure illustrates the canary deployment strategy. The Harness workflow creates a parallel canary deployment. When the canary passes all verifications, the primary deployment is upgraded. Achieving this requires no changes to the chart specs.
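Conceptually, the canary is the same rendered workload run a second time under a distinguishing label, with a small replica count while verifications run. The sketch below is illustrative only; the resource names and the track label are hypothetical, not the exact manifests Harness generates.

```yaml
# Hypothetical sketch: the rendered Deployment, duplicated as a canary.
# The primary would be named "my-service" and labeled track: stable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-canary
  labels:
    track: canary            # illustrative label separating canary from primary
spec:
  replicas: 1                # small footprint while verifications run
  selector:
    matchLabels:
      app: my-service
      track: canary
  template:
    metadata:
      labels:
        app: my-service
        track: canary
    spec:
      containers:
        - name: my-service
          image: example.com/my-service:v2   # new version under test
```

Because the canary is just another set of rendered resources, the chart itself needs no canary-specific templating.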

Figure 1: Harness Orchestrated Canary Deployment
Figure 2: Example Run of Canary Workflow

Similarly, the following figure illustrates a blue-green strategy. The Harness workflow creates two parallel deployments (blue and green slots). A Service object (the primary Service) tracks which deployment serves production traffic. The new version is deployed into the other slot. Once all the tests pass, production traffic is routed to the new Pods by updating the Service. At any time, the Service can be updated to point back to the older version for an instant rollback.
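The traffic switch comes down to the primary Service's label selector. A minimal sketch, with hypothetical names and a hypothetical `slot` label:

```yaml
# Hypothetical sketch: the primary Service selects the slot serving production.
# Changing "slot: blue" to "slot: green" routes traffic to the new Pods;
# changing it back is an instant rollback.
apiVersion: v1
kind: Service
metadata:
  name: my-service-primary
spec:
  selector:
    app: my-service
    slot: blue
  ports:
    - port: 80
      targetPort: 8080
```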

Figure 3: Harness Orchestrated Blue-Green Deployment

In our approach, we use Helm to fetch the chart from the repository and render its templates. Once the Helm chart is rendered into Kubernetes resources, we can orchestrate a canary or blue-green deployment as described above, and roll back automatically in case of failure. None of this requires changes to the microservice's chart spec.
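The render-then-orchestrate flow can be sketched with the Helm 2 client and kubectl. The chart path, values file, and deployment name below are assumptions for illustration:

```shell
# Render the chart locally into plain Kubernetes manifests (no Tiller involved).
helm template ./my-chart -f values.yaml > rendered.yaml

# From here, any deployment strategy can be orchestrated on the rendered
# resources; a plain apply is the simplest case.
kubectl apply -f rendered.yaml

# Watch the rollout; a failure here is the trigger for automated rollback.
kubectl rollout status deployment/my-service --timeout=120s
```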

Deploying Services Across Environments Using Helm

With Helm templating, it's easy to have one chart deployed to multiple environments, with the environment-specific configuration kept in values override files. Still, there is a need to manage environment-specific secrets and cluster configuration, e.g. kubeconfig. Keeping track of which overrides go to which cluster becomes intractable as you scale to more environments.
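With the raw Helm 2 client, that bookkeeping looks roughly like the following; the chart layout, values files, and kube-context names are all hypothetical:

```shell
# Assumed layout: my-chart/  values/staging.yaml  values/prod.yaml
# Each environment needs the right values file AND the right cluster context.
helm upgrade --install my-service ./my-chart \
  -f values/staging.yaml \
  --kube-context staging-cluster

helm upgrade --install my-service ./my-chart \
  -f values/prod.yaml \
  --kube-context prod-cluster
```

Every new environment adds another pairing of values file and cluster to keep straight by hand.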

Harness provides an environment abstraction where all of the above can be organized in an Environment resource. When services are deployed to a particular environment, the right cluster is targeted with the right configuration and values overrides.

Harness Environments
Figure 4: Harness Environment Encapsulates Cluster Details & Configuration Specific to an Environment

Automated Rollback of Helm Deployments

The helm install and upgrade commands do not track the status of the deployment rollout. After a helm upgrade completes, the rollout status still has to be tracked manually, and rolling back is also a manual operation. Harness tracks the rollout status of the deployment and can roll back automatically if needed.
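A sketch of the manual steps this automates, using the Helm 2 client; the release name, deployment name, and revision number are illustrative:

```shell
helm upgrade my-service ./my-chart -f values.yaml

# helm returns before the rollout finishes; track it separately...
if ! kubectl rollout status deployment/my-service --timeout=180s; then
  # ...and roll back by hand if the new Pods never become ready.
  helm rollback my-service 1   # 1 = a known-good prior revision
fi
```

Helm 2 does offer a `--wait` flag on upgrade, but it only blocks until resources are ready (or times out); the rollback still has to be triggered explicitly.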

Other Helm Enhancements

Some customers maintain their Helm charts in Git repositories but find it a hassle to maintain a separate Helm repository. Many people use Amazon S3 and GCS buckets to store chart packages.

Many use Helm only for packaging and templating, while others leverage the Helm client to manage the install/upgrade/rollback of releases. We also heard from a few customers about the challenges they run into with Tiller; they prefer to use client-only mode.

Based on these learnings, we have improved the following aspects of our Helm integration:

Helm Repository Connectors

We have added a connector for Helm Repository. HTTP server, Amazon S3, and Google Cloud Storage-based repositories are supported out-of-the-box.


Helm Charts from Source Repository

Helm charts can be fetched directly from the chart source in a Git repository. This avoids the overhead of maintaining a separate Helm repository server.


Remote Values Overrides

Values overrides at the service and environment level can now be stored in remote Git repositories. This enables a GitOps flow, and environment-level overrides can be kept in the same repository as the charts.


Support of Client-Only Helm Usage

We have added support for Helm's client-only mode. Many prefer client-only mode because Tiller creates challenges with setup, RBAC, and availability. In this mode, the Helm client is used to fetch charts from the repository and render templates, and deployment is done through the standard kubectl mechanism. This avoids the dependency on Tiller.
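An end-to-end client-only flow with the Helm 2 CLI might look like this; the repository URL, chart name, and version are hypothetical:

```shell
# Initialize the local client without installing Tiller into the cluster.
helm init --client-only

# Add the chart repository and fetch the packaged chart locally.
helm repo add my-repo https://charts.example.com
helm fetch my-repo/my-service --version 1.2.0 --untar

# Render and deploy with kubectl; Tiller never enters the picture.
helm template ./my-service -f values.yaml | kubectl apply -f -
```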

We hope you find these enhancements useful.

Regards,
Anshul, Vaibhav, Ishant, Venkatesh & Puneet
