Kubernetes Series 2/6 – Container Sprawl is the New VM Sprawl – Hello Kubernetes!

Container sprawl is the new VM sprawl. With all of these containers, we need a solution to help run containers at scale. Welcome to Kubernetes!

By Ravi Lachhman
July 18, 2019

The cliché that container sprawl is the new virtual machine sprawl has proven very true. When folks started migrating from physical servers to VMs, spinning up a VM was much quicker than procuring a physical server, so folks provisioned more frequently and soon there was a plethora of VMs. Fast forward to the container revolution: spinning up a container for an application process typically takes seconds. The sprawl has been amplified well beyond the VM days.

The number of images and running containers is exploding, and as we learned in part one of this series, containers are immutable: when you make a change, a new image and new containers are created. We now have a load of containers not only to cope with scale and reliability, but also to deal with change.

Happy 5th Birthday Kubernetes!

A solution to this that is shaping our distributed systems today is none other than Kubernetes, aka K8s. Released on GitHub in June 2014, Kubernetes has been one of the most popular open-source projects in terms of contributions, only lately surpassed by machine learning packages such as TensorFlow.

The lineage of Kubernetes can be traced to two large Google platforms that were and still are highly critical to Google achieving the web scale it has today. Google Borg in the 2000s and later Google Omega were groundbreaking cluster management solutions solving scheduling and resource management problems, and both were influences on the Kubernetes project.

Scheduler + Resource Manager = Kubernetes

Kubernetes provides a prescription to help solve two fundamental distributed system problems. 

By definition, a scheduler has to answer “how many to run” and “what happens when we don’t have that many running”. Along the same vein, a resource manager has to answer “where to run”. Combine a scheduler with a resource manager and you have a modern-day container orchestrator.

Let’s say you went on your container journey without using an orchestrator. Since containers are made to die, making sure that minimums and maximums of running containers are maintained can be a hard scheduling problem to solve. Also, as our infrastructure becomes more distributed, finding the right place to run your workload can be challenging.

Luckily Kubernetes is here, and we can interact with a Kubernetes Cluster with some YAML. Understanding how a Kubernetes Cluster is put together comes down to a few important pieces.
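As a taste of that YAML, here is a minimal sketch of a Deployment manifest. The names and image below are placeholders of my own, not from any particular application, and you can see both distributed system questions answered in one file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s            # hypothetical application name
spec:
  replicas: 3                # the scheduler's "how many to run"
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
        - name: hello
          image: nginx:1.17  # any container image works here
          resources:
            requests:        # helps answer "where to run": a node with this much free capacity
              cpu: 100m
              memory: 64Mi
```

If a container dies, Kubernetes notices the count has dropped below three and schedules a replacement on a node that can fit the requested resources.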

Kubernetes Building Blocks

The fundamentals of Kubernetes can be described as a master to worker model. A master controls n number of workers. As simple as that! Though you might be scratching your head with the 100’s of packages/platforms that can provide functionality with Kubernetes why did I just list two pieces? Kubernetes is designed to be pluggable so most pieces can be swapped out thus changing the opinion on how your Kubernetes Cluster solves a problem. The 100’s of packages/platforms typically get deployed/plugged into the master or worker. 

There are primarily two ways of managing your cluster: the Web UI (Dashboard) and a command-line interface [CLI] called kubectl. These interfaces typically talk to the Master.
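For a flavor of the CLI, here are a few common kubectl commands. These assume kubectl is installed and pointed at a running cluster; the manifest filename is hypothetical:

```
# List the nodes in the cluster
kubectl get nodes

# Apply a manifest, e.g. a Deployment written in YAML
kubectl apply -f deployment.yaml

# See the Pods that the Deployment created
kubectl get pods

# Show where the Master's API server is listening
kubectl cluster-info
```

Under the hood, each of these is just an authenticated call to the API server on the Master.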

The Kubernetes Master can be seen as the brains of the operation; it is home to many important services such as the scheduler and the API server. The Master is also home to etcd, which stores all of your configuration and important state information about your Kubernetes Cluster.

You don’t have a Kubernetes Cluster yet without a few Worker Nodes. The Worker Nodes are exactly what the name sounds like: where your work is going to be performed. Your applications live on these nodes, and the number of nodes is made to scale. The Worker Nodes also contain pieces that help your containers run, such as a Docker Engine.

The concept of a Pod in the Worker Nodes was a sticking point for me when I was learning about Kubernetes a few years ago, as I did not understand its purpose. The only pods I knew about before were coffee pods, which I would fight with my Keurig over. In the non-coffee sense, Pods inside Kubernetes are designed to be a logical grouping of containers that reside on the Worker Nodes. This is because applications can typically have more than one container [going back to part one of our series again], and each Pod can be scaled with a Replica Set. Not too bad!
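To make the grouping concrete, here is a sketch of a Pod spec with two containers; the names and images are hypothetical. Both containers share the Pod’s network and lifecycle, a common pattern for pairing an application with a helper:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # hypothetical Pod name
spec:
  containers:
    - name: web              # the main application container
      image: nginx:1.17
      ports:
        - containerPort: 80
    - name: log-shipper      # a sidecar container grouped in the same Pod
      image: busybox:1.31
      command: ["sh", "-c", "tail -f /dev/null"]
```

In practice you rarely create bare Pods; a Replica Set (usually managed for you by a Deployment) keeps the desired number of Pod replicas running.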

As you will find out Kubernetes is pretty configurable. There are tools that can help us keep track and make our Kubernetes experience more repeatable.

What the Hell is Helm?

One of the first tools that folks typically install on their Kubernetes Clusters is a package manager called Helm, which was the first package manager designed for Kubernetes. Helm works on the concept of Charts. A Helm Chart describes the set of Kubernetes resources you need to deploy an application; in more simplistic terms, a template.
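As a sketch, a minimal chart is just a directory with a conventional layout; the chart name below is hypothetical:

```
mychart/
  Chart.yaml        # name, version, and description of the chart
  values.yaml       # default configuration values
  templates/        # templated Kubernetes manifests
    deployment.yaml
    service.yaml
```

Running `helm install ./mychart --name my-release` [Helm 2 syntax, current as of this writing] renders the templates with the values and applies the resulting resources to your cluster as a named release.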

Since the October 2015 release of Helm [a year and change into the Kubernetes project], there certainly have been a lot of different approaches to configuration management inside Kubernetes. Honorable mentions today would be Kustomize and Kapitan.

Kubernetes Everywhere

As the Kubernetes ecosystem expands and the number of providers offering Kubernetes services increases, K8s as a runtime is becoming more ubiquitous. From running locally on your desktop with Minikube [sneak peek at part three] to the host of cloud and platform-as-a-service providers that will run Kubernetes, you have lots of choices for where to run.

Up for debate, Kubernetes could be one of the great enablers of the hybrid cloud, though good distributed systems principles still apply, especially around latency between Workers and potential Masters. There are a lot of opinions out there on how to scale and chain disparate clusters together, which could be a lengthy series in itself.

Harness and Kubernetes – Better Together

Harness provides a powerful orchestration layer working with your Kubernetes investment. Harness is built in the age of Kubernetes and provides substantial coverage for Kubernetes Deployments. Our documentation has an excellent Helm example for your Kubernetes journey. Stay tuned for part three, where we crack open K8s and run some deployments.

-Ravi
