Is Kubernetes All You Need For Continuous Delivery?

Kubernetes (K8s) has won the container orchestration war, but is that enough to enable true Continuous Delivery? Let's discuss.

By Steve Burton
March 1, 2018

Kubernetes describes itself as “an open-source system for automating deployment, scaling, and management of containerized applications.”

At face value, those descriptors are pretty broad and badass.

You could be forgiven for thinking all you need these days for DevOps and Continuous Delivery (CD) is Docker, K8s, MongoDB and a Cloud Provider. Your apps will magically deploy/run on any Cloud, and will self-scale/heal without any outages or customer complaints, right?

Kind of, but not quite. The good news is that K8s makes CD much easier to achieve because many of the building blocks already exist for you to reuse rather than build yourself.

Where Does Kubernetes Fit in Continuous Delivery?

Below is a brief breakdown of what CD looks like in most organizations.

The large K8s logo shows the primary use case of Kubernetes within the context of CD, and the smaller K8s logos show where Kubernetes can enable and simplify your current CD initiative.

[Diagram: where Kubernetes fits within the Continuous Delivery pipeline]

To help illustrate the role of K8s and its APIs, let’s build a simple deployment pipeline from scratch using Harness (a Continuous Delivery-as-a-Service platform). You can watch the four-minute video or keep reading below.

Step 1: Create A Microservice

Below I’ve created a new microservice called ‘MyMicroservice’ and attached a Docker image from our ‘docker-local’ repo in JFrog Artifactory as the artifact source. Whenever a new build or version becomes available, Harness will automatically pick it up and version-control it.

[Screenshot: the MyMicroservice service setup in Harness]

Next, we can leverage Kubernetes templates to set the container specification (CPU, memory, ports, storage) for our microservice.

If we need more control over the container spec – e.g. replica (pod) count, labels, args – we can edit it directly in the Kubernetes controller YAML for the microservice. This configuration will also be version controlled by Harness.

Let’s give our microservice 6 replicas/pods. You can also see that the name of our Docker image from the JFrog repo is automatically inserted as part of the container definition.

[Screenshot: the Kubernetes controller YAML for MyMicroservice in Harness]
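For reference, here’s a minimal sketch of what that controller YAML might look like – the image tag, port, and resource values are illustrative, and Harness generates the real spec:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: retail-app.mymicroservice.development.1
      labels:
        app: mymicroservice
        revision: "1"                # distinguishes controller versions
    spec:
      replicas: 6                    # the 6 pods from our example
      selector:
        app: mymicroservice
        revision: "1"                # each version keeps its own pods
      template:
        metadata:
          labels:
            app: mymicroservice
            revision: "1"
        spec:
          containers:
          - name: mymicroservice
            image: docker-local/mymicroservice:1.0.0   # injected from the JFrog repo
            ports:
            - containerPort: 8080
            resources:
              requests:
                cpu: 250m
                memory: 256Mi
              limits:
                cpu: 500m
                memory: 512Mi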

Kubernetes Benefits:

  • Configuration-as-code for containers

Step 2: Create Environments

To deploy and test our microservice across the development lifecycle we need three environments – dev, QA and production.

Let’s create the first environment ‘Development’ in Harness by selecting our microservice, ‘Kubernetes’ as the deployment type and then a cloud provider so we can pick a cluster to represent our development environment:

[Screenshot: creating the ‘Development’ environment in Harness]

This is probably where the power of K8s shines the most. Using the ‘Kubernetes’ deployment type means we can select any cloud provider that supports K8s, or we can select ‘Direct Kubernetes’ and point it at a K8s master node in our own private cloud. Harness will then populate all available clusters, which we can use for our new environment.

With K8s it’s possible to deploy our microservice to any Cloud platform without having to worry about underlying infrastructure configuration or dependencies. This abstraction is pretty awesome.
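To make this concrete, the same manifest applies unchanged to a cluster on any provider. Here’s a hedged sketch using kubectl – the context names are hypothetical:

    kubectl config use-context gke-dev          # Dev & QA cluster on GCP
    kubectl apply -f mymicroservice-controller.yaml

    kubectl config use-context eks-production   # Production cluster on AWS
    kubectl apply -f mymicroservice-controller.yaml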

Kubernetes Benefits:

  • Fully portable microservices
  • Multi-Cloud environments for deployment pipelines (e.g. use GCP for Dev & QA, AWS for Production)
  • No need to write a unique set of deployment scripts for each cloud platform

Step 3: Create A Deployment Strategy

To deploy our microservice we need to define a deployment strategy for each environment in our dev lifecycle.

Canary deployments are highly fashionable right now, so let’s see how Harness and K8s can enable this type of deployment strategy. Read this blog post if you want a primer on canary deployments.

Below I’ve created an empty canary deployment workflow in Harness:

[Screenshot: an empty canary deployment workflow in Harness]

Next, we need to define our canary deployment phases:

  • Phase 1 – upgrade/verify 33% of the environment (2 pods)
  • Phase 2 – upgrade/verify 50% of the environment (3 pods)
  • Phase 3 – upgrade/verify 100% of the environment (6 pods)

If we click ‘Add Phase’ in Harness, we get options that let us build canary phases using the Kubernetes Service Setup.

Three steps are required to configure each canary phase for our microservice:

  1. Set up and prepare the containers
  2. Deploy and upgrade the containers
  3. Verify the service/deployment running inside the new containers

To set up and prepare the containers, Harness creates a new controller for every new version of our microservice.

The first controller for our microservice would be called:

retail-app.mymicroservice.development.1

and, as the name suggests, this would be deployed to our development environment (Kubernetes Cluster).

The next time we deploy a new version of the microservice, a new controller would be created:

retail-app.mymicroservice.development.2

This results in two controllers (and microservice versions) being active within our development environment. This is exactly what happens during a canary deployment – two versions of the same service run in parallel so you can verify how the new version compares to the current one.

Next, we need to deploy the containers, which we do by leveraging the resize function on each Kubernetes controller.

Phase 1 of our canary will resize controller retail-app.mymicroservice.development.2 to 33% of the environment (2 pods) and then resize controller retail-app.mymicroservice.development.1 to 66% of the environment (4 pods).
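Conceptually, phase 1 is equivalent to resizing the two controllers by hand with kubectl (a sketch – Harness drives this through the Kubernetes API rather than the CLI):

    kubectl scale rc retail-app.mymicroservice.development.2 --replicas=2   # new version: 33%
    kubectl scale rc retail-app.mymicroservice.development.1 --replicas=4   # old version: 66%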

For reference, you can view all deployment controllers (workloads) in the Kubernetes Engine Console.

Lastly, we need to verify each canary phase.

Using Harness, we can specify any APM solution (AppDynamics, New Relic, Dynatrace) to verify performance, or any log solution (Splunk, ELK, Sumo Logic) to verify quality (errors/exceptions). Learn more about how this verification works.

If these verifications succeed, our deployment workflow will move to canary phase 2 and the controllers will be resized to 50% each (3 pods each). If phase 2 succeeds, retail-app.mymicroservice.development.2 will resize to 100% of the environment (6 pods) and retail-app.mymicroservice.development.1 will resize to 0% (0 pods). Our canary deployment workflow is now complete.

Harness can also leverage ingress controllers within K8s and Istio route rules if you want more precise traffic splitting/routing for canary phases. You can also use this for Blue/Green deployment workflows and A/B testing.
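As a rough illustration, an Istio route rule (in the pre-1.0 RouteRule syntax current at the time of writing) could split traffic between the two versions like this – the service name, labels, and weights are hypothetical:

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: mymicroservice-canary
    spec:
      destination:
        name: mymicroservice        # the Kubernetes service fronting both versions
      precedence: 1
      route:
      - labels:
          revision: "2"             # canary version
        weight: 10
      - labels:
          revision: "1"             # current version
        weight: 90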

Kubernetes Benefits:

  • Keep multiple versions of services (controllers) active within the same cluster (environment)
  • Resize the % or count for any service (controller)
  • Horizontal Pod Autoscaler lets you set a policy so that canary pods can grow as your service grows (see the sketch after this list)
  • Ingress Controller & Istio rules let you split/route traffic as you wish
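
For the autoscaling point above, here’s a minimal Horizontal Pod Autoscaler sketch – the target, bounds, and CPU threshold are illustrative:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: mymicroservice-hpa
    spec:
      scaleTargetRef:
        apiVersion: v1
        kind: ReplicationController
        name: retail-app.mymicroservice.development.2
      minReplicas: 2                        # never drop below the canary’s 2 pods
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70    # scale out above 70% average CPU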

Step 4: Create a Failure/Rollback Strategy

Let’s suppose some of our microservice deployments or canary verifications fail. This is perfectly normal for Continuous Delivery. You want your deployment pipelines to kill your release candidates before they reach your customers in production.

Fortunately, rollback is super easy with Harness and K8s. You can keep a few old deployment controllers active with zero pods for each app/service/environment and then simply resize them back up when you need them. Harness does exactly this by defining a ‘failure strategy’ for each deployment workflow so it can perform smart automatic rollbacks whenever deployments or verifications fail. Read how Build.com rolled back production in 32 seconds.
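In kubectl terms, the rollback amounts to swapping the replica counts back again (a sketch using our example controllers):

    kubectl scale rc retail-app.mymicroservice.development.1 --replicas=6   # restore the old version
    kubectl scale rc retail-app.mymicroservice.development.2 --replicas=0   # park the failed version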

With Harness, you can also roll back environment variables and service configuration as part of the controller rollback.

Kubernetes Benefits:

  • Older service versions can remain passive/active in clusters and be resized instantly for rollback
  • The Kubernetes CLI (kubectl) has rollout history and undo functions so you can roll back to the previous deployment manually – see the sketch after this list. Note: this only rolls back container images, not environment variables, etc.
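
A sketch of that manual path – note it applies to Deployment objects rather than the controllers above, and the deployment name is hypothetical:

    kubectl rollout history deployment/mymicroservice                # list previous revisions
    kubectl rollout undo deployment/mymicroservice                   # roll back to the previous revision
    kubectl rollout undo deployment/mymicroservice --to-revision=2   # or target a specific revision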

Kubernetes & Continuous Delivery Go Hand In Hand

It’s pretty obvious why Kubernetes is ruling the roost right now for container orchestration. It’s a truly portable system with powerful capabilities for deploying, scaling, and managing containerized applications.

In addition, it provides many building blocks and APIs for creating complex deployment pipelines. Configuration-as-code, controller-based deployments, autoscaling, and rollback make it very attractive for any Continuous Delivery platform.

Cheers,
Steve.

@BurtonSays

