Harness Continuous Delivery allows an organization to respond more quickly to its market and to customers by deploying scalable infrastructure and solutions, both internal and external. To build these solutions, one of the first packages installed after your Kubernetes cluster is up and running is probably Helm.

What Is Helm?

Helm is the first application package manager to run atop Kubernetes. It allows users to describe an application's structure through convenient Helm Charts and manage it with simple commands. It's a huge shift in the way server-side applications are defined, stored, and managed.

Using a package manager like Helm reduces duplication and complexity by letting you orchestrate deployment steps through the package manager itself. Charts are the packaging format Helm operates on: a Helm Chart is a collection of files that describes a set of Kubernetes resources. Like other convention-based package formats, Helm Charts follow a defined directory structure. Charts can be archived and pushed to a Helm Chart Repository for easy storage and distribution. Helm itself is a client installed outside the Kubernetes cluster; it uses the same kubeconfig credentials as kubectl to connect to and interact with the cluster.
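To make that directory structure concrete, here is a sketch of what scaffolding a new Chart with the Helm CLI produces (the chart name `mychart` is just an example, and the listing is abridged; exact template files vary by Helm version):

```shell
$ helm create mychart
$ tree mychart
mychart/
├── Chart.yaml          # chart name, version, and description
├── values.yaml         # default configuration values
├── charts/             # dependent (sub)charts
└── templates/          # Kubernetes manifest templates
    ├── deployment.yaml
    ├── service.yaml
    └── _helpers.tpl

# Package the chart into an archive a Chart Repository can serve:
$ helm package mychart   # produces mychart-0.1.0.tgz
```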

A Chart Repository is an HTTP server that houses a file called index.yaml and, optionally, some packaged Charts. The repository index (index.yaml) is generated from the packages found in storage: each time a new Chart is uploaded, its metadata is written into the index. At deployment time, this file is what Helm reads to discover and fetch Charts, since it contains all the required information. If you store your own hand-written version of index.yaml, it will be completely ignored – so that's something to keep in mind!
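For reference, a minimal index.yaml looks roughly like the following (all values here are illustrative placeholders, not output from a real repository):

```yaml
apiVersion: v1
entries:
  mychart:
    - name: mychart
      version: 0.1.0
      appVersion: "1.0"
      description: A Helm chart for Kubernetes
      created: "2021-06-01T12:00:00Z"
      digest: 0123abcd...   # illustrative placeholder for the sha256 of the .tgz
      urls:
        - charts/mychart-0.1.0.tgz
generated: "2021-06-01T12:00:00Z"
```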

If you need to orchestrate more than one Kubernetes resource, or you have multiple clusters with different configurations, you have a strong use case for leveraging Helm. Software vendors and open-source projects alike can benefit from using Helm Charts as a way for customers to install applications into Kubernetes clusters.

What Is ChartMuseum?

Taking all of the above benefits into account, one could question what the need for ChartMuseum would be. Let’s start by understanding what ChartMuseum is.

ChartMuseum is an open-source Helm Chart Repository written in Go (Golang), with support for cloud storage backends including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, and OpenStack Object Storage. It is used to store and serve Helm Charts for deploying apps to a Kubernetes cluster. ChartMuseum acts as a Chart management layer: when Charts are uploaded to a cloud store, it is the binary that fetches and maintains the Chart information.
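As a sketch, starting ChartMuseum against an S3 backend looks like this (assumes the chartmuseum binary is installed and AWS credentials are available in the environment; the bucket name, region, and credentials are illustrative):

```shell
chartmuseum --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-helm-charts" \
  --storage-amazon-region="us-east-1" \
  --basic-auth-user="admin" \
  --basic-auth-pass="s3cret"
```

Once running, ChartMuseum serves the bucket's contents as an ordinary HTTP Chart Repository on port 8080.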

The Benefits of ChartMuseum

Helm has some notable limitations when it comes to setting up a Repository to store Charts: it only supports plain HTTP-based Repositories and has no notion of Secrets, so it cannot handle OAuth-style authentication against cloud storage. ChartMuseum, by contrast, starts a local Chart server on a port of your choice that can handle and store credentials, which allows authentication to take place. It also exposes a richer API than the handful of operations Helm supports on its own.
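To Helm, that local ChartMuseum server is just another HTTP repository. Registering it looks like this (repo name and credentials are illustrative, matching whatever basic-auth values ChartMuseum was started with):

```shell
helm repo add my-charts http://localhost:8080 \
  --username admin --password 's3cret'
helm repo update
helm search repo my-charts    # Helm 3 syntax; `helm search my-charts` on Helm 2
```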

Where ChartMuseum Comes In

When ChartMuseum starts, it exposes an API that you can use to manipulate and fetch Charts. This is not the case with Helm, which must fetch the index.yaml file to list the available Charts; if a user deletes index.yaml, they will be unable to fetch Charts on request. To guard against this, when index.yaml is deleted and regenerated, ChartMuseum saves a statefile in storage called index-cache.yaml, used for cache optimization. This file is meant only for internal use, but it may also be used when migrating to simple storage.
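The API ChartMuseum exposes goes beyond serving the index. Assuming a server running on localhost:8080, the main endpoints can be exercised with curl (chart and file names are illustrative):

```shell
# Fetch the repository index – the file Helm itself reads:
curl http://localhost:8080/index.yaml

# List all charts as JSON (a ChartMuseum-specific API with no plain-Helm equivalent):
curl http://localhost:8080/api/charts

# Upload a packaged chart:
curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts

# Delete a specific chart version:
curl -X DELETE http://localhost:8080/api/charts/mychart/0.1.0
```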

Something else ChartMuseum does really well is converting index files. To elaborate: the only file Helm understands for fetching the list of available Charts is index.yaml. With cloud backends used as Chart Repositories, such as an S3 bucket, uploading Charts produces an index-cache.yaml rather than the index.yaml that HTTP-based repositories serve. This file contains the Chart metadata, but Helm cannot read it, since Helm only understands index.yaml.

Here, ChartMuseum plays the mediator. When Helm asks for the index of the Chart Repository, the request hits ChartMuseum, since Helm cannot reach the S3 bucket directly (it can neither authenticate to it nor speak its protocols). ChartMuseum then calls the S3 bucket, using its stored Secrets to authenticate, and fetches the index-cache.yaml. Because Helm cannot understand index-cache.yaml, ChartMuseum converts it into a format Helm understands and returns all the details.

How We Use ChartMuseum in Harness

When we plan to deploy a Helm Chart which is stored in a cloud Repository like GCS or S3, we need the help of ChartMuseum. When we create an S3 connector, which is going to be used as a Chart Repository, and then add this connector in our service, the background process would look like this:

1. The Delegate capable of performing the task is selected first. It then downloads the ChartMuseum binary (similar to how the kubectl binary is downloaded) and runs ChartMuseum.

2. Once ChartMuseum is started, it is run on a port with an exposed API, which can be used to fetch Charts from a Repository.

3. Helm then communicates with ChartMuseum on this port. To Helm, ChartMuseum is just an HTTP server that helps it hit the cloud Repo end point and authenticate to it.

4. Once ChartMuseum is done authenticating to the cloud Repo, it then fetches the index file and sends it to Helm. This results in the list of Charts showing up on Harness’ end.

5. After the list of Charts is fetched, ChartMuseum is stopped on the Delegate, and the Repo is removed locally from the Delegate (similar to deleting the entry from the cache file).
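The steps above can be sketched as a sequence of CLI commands. This is a conceptual illustration of what the Delegate does internally, not the literal Harness implementation; bucket, repo, and chart names are illustrative:

```shell
# Steps 1–2: start ChartMuseum against the cloud bucket on a local port.
chartmuseum --port=8080 --storage="amazon" \
  --storage-amazon-bucket="my-helm-charts" \
  --storage-amazon-region="us-east-1" &
CM_PID=$!

# Steps 3–4: Helm treats ChartMuseum as a plain HTTP repo and fetches the index.
helm repo add delegate-local http://localhost:8080
helm pull delegate-local/mychart --version 0.1.0   # `helm fetch` on Helm 2

# Step 5: tear down – stop ChartMuseum and drop the local repo entry.
helm repo remove delegate-local
kill $CM_PID
```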

ChartMuseum supports a wide variety of backends, like GCS, S3, Microsoft Azure, and OpenStack, while Helm does not support the protocols required to use them as Chart Repositories. Cloud-provided backends also add an extra layer of security, requiring the endpoint to be authenticated to using Secrets and OAuth. The current Helm Repository standard is anonymous HTTP or HTTP basic authentication, so Helm does not understand what Secrets are or how they are used. Helm on its own is therefore not a viable option: it cannot speak the protocols the different cloud backends use, and it cannot authenticate to them. This is where ChartMuseum plays a key role.

ChartMuseum does have some internal caching in place that retains data after it is stopped. We do not have any caching on the Harness end related to ChartMuseum.

Common Issues

When using ChartMuseum, we do run into a few common issues related to the Charts not being fetched as expected. This results in errors in the UI. Let’s take a look at common errors that could occur.

ChartMuseum Not Starting

The most common issue we have noticed with ChartMuseum occurs when a Delegate fetches Charts from a cloud Repo: in the background, Helm needs to communicate with ChartMuseum to retrieve these details. The Delegate runs ChartMuseum in the background, and we have to wait for it to come up.

ChartMuseum runs as an HTTP server on a port that Helm uses to communicate with the Repo. There have been instances where, although the ChartMuseum process started, the port was never assigned to the server. This results in a "ChartMuseum not starting" error in the UI.

This error can be resolved by re-adding the connector to the service. This forces the Delegate to create a new task and spin up the ChartMuseum server again, at which point the port will be assigned.


Timeouts When Fetching Charts

Similar to the above issue, when we link a connector at the service level to fetch Charts from a cloud Repository, timeouts can occur. These timeouts can be related to the size of the Repo or of the index.yaml file on it.

A timeout is in place for ChartMuseum's communication with the cloud Repo; if no response arrives within the allocated time, the task fails with a timeout error.


As seen in this brief article, Helm offers multiple advantages in how we handle our applications in Kubernetes. With the addition of ChartMuseum, storing and versioning our application packages becomes a piece of cake. Deploying this tool is also simple, which is why we have a wide range of clients using the cloud backends of their choice to store Helm Charts.

For further reading on Helm, please check out our Helm vs. Kustomize blog, and our What Is Helm piece, which contains a tutorial on how to launch your first Helm deployment with Harness. If you have not signed up for Harness, feel free to sign up today – getting started is free. Come be a part of the rocketship!