Helm is certainly one of the stalwarts of the Kubernetes ecosystem. For many, Helm is one of the first packages installed after getting a cluster up and running. In part one of our blog series, we took a look at Helm and ran through a basic example. Like anything that moves quickly, Helm is subject to the ebbs and flows of the Kubernetes ecosystem and to competition from differing opinions and platforms. Can a package manager also be good at configuration management? This is where opinions start to differ.
Helm or Highwater?
As we moved from singular Kubernetes deployments to applications that require multiple pieces, configurations, and potentially multiple clusters, Helm started being challenged. A package manager was desperately needed to move the blooming Kubernetes ecosystem forward, and Helm certainly addressed that gap.
A funny subreddit thread to read is “I am using Helm but I don’t know why!”. Sometimes technology becomes so ubiquitous that we don’t question its use. Helm, for me, is like Kleenex: I tend to call all package/configuration management solutions inside the Kubernetes ecosystem “Helm” [even though Helm does not claim to be a configuration management solution].
With the benefit of 20/20 hindsight, we can look at some contemporary arguments against Helm. The first is that it adds a layer of abstraction on top of an already complex technology. The computer science argument that abstraction just moves complexity around can ring true, especially for those not experienced in Kubernetes administration.
Like any package manager, contention can occur when charts collide; for example, two charts leveraging the same labels can clash. As we move towards templating, re-use of templates is bound to happen, and incorrectly creating a new chart can lead to deployment/rollback problems down the line.
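As an illustration of how chart authors typically sidestep those collisions, here is a minimal sketch of a Deployment template that namespaces resource names and labels with the release name, following the Kubernetes recommended label conventions; the chart name "mychart" is hypothetical:

```yaml
# templates/deployment.yaml : a sketch; "mychart" is an illustrative chart name
apiVersion: apps/v1
kind: Deployment
metadata:
  # Prefixing with .Release.Name keeps two installs of the same
  # chart from colliding on resource names
  name: {{ .Release.Name }}-mychart
  labels:
    # "instance" distinguishes releases that share the same chart name
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: {{ .Release.Name }}
```

Copying a template into a new chart without updating these names and labels is exactly how the deployment/rollback problems described above creep in.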
Purists holding to a stringent definition of GitOps would also argue over the point at which the templates should be added to source control. Historically, though, most of the drawbacks of Helm as a platform surround Tiller.
Introduced in Helm V2 as part of the project’s integration with Google’s Deployment Manager, Tiller ended up having more detractors than fans. Tiller is the in-cluster portion of Helm that runs the commands and charts on Helm’s behalf. Because of how Tiller was designed, it has broad access and can facilitate in-cluster attacks.
I can’t fault the Helm project; I would have created a job-runner in much the same way. I am a fan of Cloud Foundry’s BOSH, and Tiller certainly took some design cues from other platforms. Alas, the adage that “security takes a back seat” held true: the Kubernetes ecosystem was exploding, and the race for functionality left little focus on the attack surface and vectors that were there.
The good news is that with Helm V3’s release landing right around KubeCon NA, Tiller is no longer required. At Harness, we have been running deployments leveraging Helm without Tiller for some time. As the sands of time continue to move, additional platforms and opinions are up and coming.
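For the curious, a Tiller-less workflow with Helm V3 looks almost identical on the client side; only the server-side component is gone. The release name, namespace, and chart below are illustrative:

```shell
# Helm V3: no Tiller. Release state is stored in-cluster as Secrets
# in the release's namespace instead of being managed by Tiller.
helm repo add stable https://kubernetes-charts.storage.googleapis.com

# V3 requires an explicit release name (or --generate-name)
helm install my-release stable/nginx-ingress --namespace demo

# Releases are scoped per-namespace rather than tracked globally by Tiller
helm ls --namespace demo
```

Because these commands run with your own kubeconfig credentials, access is governed by your RBAC permissions rather than by Tiller’s broad in-cluster service account.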
Alternatives to Helm
Today, one of the biggest alternatives to Helm is Kustomize. Proponents of both projects would say they solve different problems, e.g. package vs. configuration management. Since Kubernetes 1.14, the almighty kubectl supports a kustomization file natively. Especially if you have more than one cluster to maintain, configuration management is key, and because Kustomize can generate resources, you can describe packages with it as well.
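To make the package vs. configuration distinction concrete, here is a minimal sketch of a kustomization file that overlays existing manifests for a staging cluster; the resource file names, namespace, and labels are assumptions for illustration:

```yaml
# kustomization.yaml : a minimal sketch; resource files and names are hypothetical
namespace: staging
namePrefix: staging-
commonLabels:
  env: staging
resources:
  - deployment.yaml
  - service.yaml
```

With kubectl 1.14 or later, `kubectl apply -k .` renders and applies the customized manifests: no templating language, just declarative patches over plain YAML.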
Another honorable mention is the up-and-coming KOTS, or Kubernetes Off-The-Shelf software. KOTS can help package both Helm and Kustomize configurations to install commercial off-the-shelf (COTS) applications inside Kubernetes. The goal of KOTS is aimed at vendors, though, and you most likely have to be a vendor to leverage KOTS to package your application. As the ecosystem continues to evolve, more players will certainly enter.
Harness at the Helm
At Harness, we had been solving the Tiller problem with a “Tiller-less” Helm deployment for some time ahead of the Helm V3 release. We returned from KubeCon NA last week excited to have seen firsthand all the growth in the Cloud Native ecosystem. As our applications continue to grow and become more distributed, package and configuration management are important pillars in a Cloud Native ecosystem. Stay tuned for part three of our series, where we will walk through a few Helm-based deployments leveraging Harness.