Kubernetes Series Part 6/6 – The Future – Modify the Kubernetes?

Looking to the future, Kubernetes is evolving into our own, non-vanilla flavor of the platform.

By Ravi Lachhman
August 13, 2019

Finally, the culmination of the blog series! After part five, we have an understanding of the challenges in getting our workloads into Kubernetes. Since Kubernetes is a fast-evolving platform, what does the future of K8s have in store for us?

The big push in the ecosystem is capturing workloads that have traditionally been difficult to run in a container and/or orchestrate with Kubernetes. We are hitting critical mass now that application and infrastructure providers have been modernizing their stacks to play better with the container boom.

Kubernetes has been going through an evolution from a platform that organizations just deploy on to a platform that organizations build to. With more being built on Kubernetes, even the cadence of the project is shifting.

Slowing the roll

As of this writing, the most recent release is Kubernetes 1.15, and the mantra of that release is stability, both in the platform and in the release cadence. In the early days of Kubernetes, a minor or patch release seemed to happen every few months. This is to be expected in a fast-moving open source project.

Between the minor and patch releases was an almost constant barrage of APIs changing and APIs being deprecated or even promoted. As a personal anecdote, I took the Certified Kubernetes Administrator exam between versions 1.9 and 1.10, and about 5% of the APIs differed between the training material and the exam. That meant when I ran kubectl apply against my trusty YAMLs, I would get validation errors back and have to make structural changes to the YAMLs.
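To make that concrete, one well-known example of this kind of shift (not necessarily the exact one from my exam) is the Deployment object graduating from extensions/v1beta1 to apps/v1 in Kubernetes 1.9. Older manifests fail validation until they are restructured:

```yaml
# Pre-1.9 manifests commonly declared:
#   apiVersion: extensions/v1beta1
# After the API graduated, the same Deployment must use apps/v1,
# and spec.selector (previously defaulted from template labels)
# becomes required and immutable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17
```

Names like my-app are placeholders; the structural point is that the same workload needed a different shape on either side of the version boundary.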

As the platform reaches the first phases of maturity and workloads increase, vulnerabilities are certainly going to creep in as the platform is exposed more. One of the most severe was disclosed in 2018: a privilege escalation in the API server (CVE-2018-1002105) that wasn't terribly difficult to pull off. The community and vendors acted quickly to patch.

One of the biggest changes in the Kubernetes ecosystem is making Kubernetes less generic. Ironically, in June 2019 a vulnerability was found in one of the primary mechanisms that allows exactly that.

Your modified cluster is not vanilla anymore

Back in part five, one critique of the Kubernetes platform was that Kubernetes is generic. One of the biggest, if not the biggest, changes to the Kubernetes platform is the ability to customize the platform to your applications.

There has been a rise in container orchestrator software development kits (SDKs). Kubernetes and other orchestrators like Apache Mesos are providing SDKs to build your applications to. Before this shift, a majority of folks (and probably still a majority) were building their applications on Kubernetes, which means using Kubernetes as-is. As the sands start to shift, building applications to Kubernetes, which means modifying Kubernetes to fit your application, is picking up steam.

One of the best examples is all of the development going into Operators. Introduced by CoreOS a few years ago, Operators have really picked up steam. A Kubernetes Operator makes Kubernetes very application-aware and application-centric. Operators extend and leverage Controllers and Custom Resource Definitions (CRDs). With that combination, Kubernetes can react in very specific ways to your application and application infrastructure.
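As a rough illustration, a CRD is itself just another Kubernetes object. The sketch below, using a hypothetical backups.example.com resource, registers a new Backup kind that a custom Controller could then watch and reconcile:

```yaml
# Hedged sketch: a hypothetical CRD an Operator might install.
# apiextensions.k8s.io/v1beta1 was the current API as of Kubernetes 1.15.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
```

Once applied, users can create Backup objects like any built-in resource, and the Operator's Controller encodes the application-specific logic for what a "backup" actually means.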

A stellar case for using an Operator comes from a blog post by Confluent (Apache Kafka) leadership, which spells out how operationalizing Kafka on Kubernetes becomes precise and prescriptive thanks to Operators.

Though by installing another Controller or CRD, your vanilla Kubernetes cluster is no longer vanilla. There are installation steps to get your new CRDs and Controller(s) humming in the cluster. Jogging your memory back to part two, package/configuration managers such as Helm, Kustomize, and Kapitan become even more important.

As our cluster footprint starts to expand and potentially even crisscross different infrastructure providers, aka "hybrid cloud," leveraging Kubernetes across disparate infrastructure is still subject to the Fallacies of Distributed Computing.

Hybrid Cloud

Kubernetes can still be viewed as one of the great equalizers in distributed computing. In theory, if you have one Kubernetes cluster running, you should be able to deploy your application to another cluster without too many hiccups.

Kubernetes and vendors in the Kubernetes space are working on ways to enable more of a "cluster of clusters" approach. Kubernetes Federation is one answer from the Kubernetes project for managing multiple clusters more centrally.
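For a flavor of what that looks like, KubeFed (Federation v2) wraps an ordinary object in a Federated type plus a placement section naming the member clusters. This is a sketch; the cluster and app names are made up:

```yaml
# Hedged sketch based on KubeFed (Federation v2); names are illustrative.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  template:
    # an ordinary Deployment spec goes here
    spec:
      replicas: 2
  placement:
    clusters:
    - name: cluster-us-east
    - name: cluster-eu-west
```

The control plane then propagates the wrapped Deployment to each cluster listed in the placement, giving you one object to manage instead of one per cluster.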

Though with disparate or remote clusters, especially across different infrastructure providers, Kubernetes will not auto-magically solve your problems. Good distributed system principles still apply. Latency is not reduced just because you are using Kubernetes. If cross-cluster communication is not strong, Kubernetes can start marking nodes as unhealthy when in fact those nodes are healthy.

As the industry catches up with the power of Kubernetes, the power of distributed and robust computing is becoming more attainable to the masses. Effectively, the tide for our applications is rising.

A rising tide raises all ships

Even workloads that were traditionally difficult to run in the early days of Kubernetes, such as databases, are becoming easier. Vendors are building products and platforms for this new design paradigm. CockroachDB has a good story around all the work they have done in the past few years to make SQL databases consumable on Kubernetes.

Taking another look at the CNCF Landscape after going through the blog series, you can start to see the amount of investment organizations are making to embrace the new design paradigms. The many cloud native projects out there serve as building blocks to modernize or re-create platforms that play well in a container orchestrator world. The Container Storage Interface (CSI) and Container Network Interface (CNI) are also instrumental in moving the container market forward.

The expectation of running workloads on a container orchestrator is becoming mainstream. As we reap the benefits of containers, the technology machine always moves forward. Containers are even starting to face competition now.

Are containers too heavy?

A year after Docker was introduced to the wild by dotCloud (remember part one?), Amazon Web Services introduced serverless computing to the masses with AWS Lambda. If a container takes seconds to spawn before the application starts running, Lambdas take sub-seconds before execution starts. Containers are short-lived, aka ephemeral, but serverless functions spin up and spin down with every request, using almost exactly what you need, exactly when you need the function.

Not to fret: if Kubernetes is an integral part of your application infrastructure, the Knative project exposes the pieces of a serverless implementation so you can have serverless infrastructure orchestrated by Kubernetes.
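As a quick sketch, a Knative Serving Service looks like a slimmed-down Deployment, while Knative takes care of request-driven scale-up and scale-to-zero. The API version shown was current around the time of writing, and the image is the sample from the Knative docs:

```yaml
# Hedged sketch using the Knative Serving API (v1beta1 circa mid-2019).
apiVersion: serving.knative.dev/v1beta1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Kubernetes"
```

Under the hood, Knative turns this into Revisions, routes traffic between them, and scales the pods with incoming requests, giving you Lambda-like behavior on your own cluster.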

Kubernetes continues to mature and the ecosystem continues to expand. There is certainly a need for workloads that live longer than a serverless invocation, so the Kubernetes ecosystem will be around for a while.

Looking ahead

The Kubernetes ecosystem continues to expand, and vendors and organizations are still on the journey of embracing this five-year-old technology. If you are looking for closure at the end of this blog series, unfortunately the book does not end. Luckily, Harness is there to help you weather the storm. As new design paradigms come online, Harness can make sure your pipelines are there to support them.

As an added treat, Harness will be hosting a webinar talking through all six parts of the blog series. We hope you can attend! 

Cheers,

-Ravi
