Helm Series – Part 3/3 – Helm and Harness – Better Together

Have a Helm Chart? You can supercharge your Chart with Harness!

By Ravi Lachhman
December 2, 2019

We learned what on earth Helm is in part one and took a more critical look at Helm in part two of our series. Let’s go through the example we did in part one, but this time let’s include it as part of a Harness Deployment Pipeline.

As a heads up, I am using a more realistic topology with a remote Harness Shell Delegate [Amazon EC2] and a remote Kubernetes Cluster [Amazon EKS in this case]. In part one we kept to Minikube; let’s go for gold today.

Your First Helm Deployment with Harness

For leveraging Helm V2 with Harness, we will need a few pieces. I decided to use Amazon EKS for the Kubernetes Cluster here. I am using eksctl to spin up and spin down an EKS Cluster. I will also spin up an Amazon EC2 CentOS instance to use as our Harness Delegate. As always, you can watch the video and/or follow along with the blog.
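In case you are spinning up a cluster to follow along, a minimal eksctl sketch looks like the below. The cluster name and region match what I use later in this post; the node count is just an illustrative assumption.

# Spin up a small EKS cluster (node count is an illustrative assumption)
eksctl create cluster --name Helm-Cluster --region us-east-2 --nodes 2

# Tear the cluster down when finished to avoid surprise AWS charges
eksctl delete cluster --name Helm-Cluster --region us-east-2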

The Prep

Having just spun up a fresh CentOS EC2 instance, you first need to get the Harness Delegate installed. The easiest way is to log into a Harness Account, then go to Setup -> Harness Delegates -> Download Delegate. Click on the Copy icon for the Shell Delegate Download.

After copying that command (a curl), we can simply paste it into the awaiting CentOS instance and hit Enter.

You can untar the Harness Delegate download with tar -xvf harness-delegate.tar.gz.

Lastly, cd into the harness-delegate folder, run ./start.sh, and the Harness Delegate will be installed.
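Taken together, the shell steps look something like the below. The download URL is account-specific, so use the one you copied from the Harness UI; it is shown as a placeholder here.

# Download URL is a placeholder; use the curl command copied from your Harness account
curl -sLO "<delegate-download-url-from-harness>"
tar -xvf harness-delegate.tar.gz
cd harness-delegate
./start.sh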

After the install, you can validate in the Harness Web UI under Setup -> Harness Delegates that your Delegate is there. Sweet, success!

Next, validate that your EKS instance is up and running. I created one in us-east-2 called “Helm-Cluster”.

The next step is to get kubectl wired up from the Delegate. Helm typically gets its connection context from kubectl. Unfortunately, this is a chore with Amazon EKS and can be accomplished in a few ways.

The easiest is to get the AWS CLI onto your EC2 (CentOS) instance and then have the AWS CLI set the kubectl context. The AWS CLI requires Python 3. The below commands should get the needed items installed for the AWS CLI on CentOS.

sudo yum update
sudo yum install centos-release-scl
sudo yum install rh-python36
scl enable rh-python36 bash
pip3 install --upgrade --user awscli
export PATH=/home/ec2-user/.local/bin:$PATH
aws --version

Now you can run the aws configure command. This does require the access key and secret of the user that created the EKS Cluster, or one that has permission to run kubectl and get/update the aws-auth ConfigMap.
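For reference, aws configure walks you through four prompts; the values below are placeholders.

aws configure
# AWS Access Key ID [None]: <your access key>
# AWS Secret Access Key [None]: <your secret key>
# Default region name [None]: us-east-2
# Default output format [None]: json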

With the AWS CLI wired up, you can now install kubectl on your CentOS instance.

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version

Here is where some wiring magic happens. We can use the AWS CLI to inject the kubeconfig information. From your EC2 CentOS instance, run the following command, matching your EKS cluster’s region and name:

aws eks --region us-east-2 update-kubeconfig --name Helm-Cluster

Run kubectl version one more time to see the cluster [Server Version] connected.
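A couple of optional sanity checks confirm the new context took hold:

kubectl config current-context   # should print your EKS cluster's ARN
kubectl get nodes                # should list the EKS worker nodes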

Lastly, let’s set up the Kubernetes Web UI per Amazon’s docs. This will require a role binding on the Kubernetes cluster.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Per the Amazon documentation, create a cluster role binding [step 3]. Copy the YAML and apply it. 
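For convenience, here is a sketch of that manifest written out via a heredoc. It mirrors the eks-admin ServiceAccount and ClusterRoleBinding from the Amazon documentation at the time of writing, so double-check it against the current docs.

# Write out the eks-admin manifest per the Amazon docs [step 3]
cat <<'EOF' > eks-admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
EOF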

kubectl apply -f eks-admin-service-account.yaml

Lastly, keep the all-important token; it will be needed to wire the K8s cluster to Harness later.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

Run kubectl proxy to allow proxying to the Kubernetes Web UI.
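By default, kubectl proxy listens on 127.0.0.1:8001, which is why the dashboard URL below points at localhost:8001.

kubectl proxy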

Navigate your browser to: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login

Enter your token and sign in.

If you are thinking to yourself, “wow, that is a lot of work,” you are right: Kubernetes is not easy. But now you are all wired up to the EKS Cluster.

We now can install the Helm Client on our Harness Delegate. By leveraging a Delegate Profile, we can ensure that Helm will be installed in our Harness Delegate(s). 

Back in our Delegate Setup [Setup -> Harness Delegates], you can click Manage Delegate Profiles + Add Delegate Profile.

You can follow the Helm install steps [V2] from Helm’s script and place those script steps into a Delegate Profile. My CentOS Amazon AMI did not come with Git, so I added sudo yum install git to the Delegate Profile; the perfect use for a Delegate Profile! The Kubernetes role bindings can be found in the Harness Docs.

# Install Git and Helm V2, then wire up Tiller

echo "Installing Git"
sudo yum install -y git

echo "Grabbing Helm Script"
curl -LO https://git.io/get_helm.sh

echo "Modifying Get Helm"
chmod 700 get_helm.sh

echo "Get Helm"
./get_helm.sh

echo "Install Tiller and Role Bindings"
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

echo "Wait for Tiller Pod to come up"
sleep 15

# Check Version
helm version

Click Submit and now you can run the Profile by selecting the Profile name.

Once applied, you can take a look at the Delegate Profile log output right from the Harness UI, or in the Kubernetes Web UI under the kube-system Namespace.

Tiller will show up in the K8s Dashboard in the kube-system Namespace.

You can validate that Helm/Tiller V2 was installed with helm version from your CentOS EC2 instance.
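A quick optional check from that instance:

helm version                                   # should report a v2.x Client and Server (Tiller)
kubectl get pods -n kube-system | grep tiller  # the tiller-deploy Pod should be Running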

With all of that wiring, you are ready to create the Harness pieces to start to run a Helm Deploy in a Harness Workflow.

Helm o’clock

Let’s go through and create a Helm-powered deployment. First, let’s create a new Application by going to Setup -> Applications + Add Application. We can call the Application “Helm_or_Highwater”.

Once created, let’s go wire up the Helm Repository that we used in part one [https://charts.bitnami.com/bitnami] of the blog series. To add it, go to Setup -> Connectors -> Artifact Servers + Add Artifact Server. We will be adding the type Helm Repository.

Click Test at the bottom to validate, then click Submit.

Let’s now add a Service from the Bitnami repo. You can go to Setup -> Helm_or_Highwater -> Services + Add Service of type Helm. We can call the Service “Bitnami_Nginx”.

Once you hit Submit, you can wire up the Chart Specification by scrolling down to the middle of the UI and clicking on “Add Chart Specification”.

Let’s leverage the same details from part one of the blog. The Chart Name will be “bitnami/nginx” and the Chart Version will be “5.1.0”.

If the version were to change, and you wired up your local Helm Client back in part one, you can run helm search bitnami/nginx from your local machine or your newly minted EC2 instance.
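With the Helm V2 client, that looks like the below; if the Bitnami repo is not wired up locally yet, add it first.

helm repo add bitnami https://charts.bitnami.com/bitnami   # only needed if the repo is not added yet
helm search bitnami/nginx                                  # lists available chart versions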

Click Submit and your Chart Specification should be there.

Since you are using an EKS Cluster, it is time for you to wire the EKS Cluster to Harness.

Navigate back to your AWS EKS Web Console and grab the API server endpoint and Certificate authority.

You will be using the bearer/authorization token [remember that long token?] to wire the Kubernetes cluster to Harness. 

Just in case you need a quick refresh:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

You can add the Kubernetes cluster by going to Setup -> Cloud Providers + Add Cloud Provider. The type will be Kubernetes Cluster.

The Master URL is the EKS API server endpoint. The CA Certificate is the Certificate authority. The Kubernetes Service Account Token is the token from the kubectl command above.

 

Click Test to validate, then Submit.

Next we will add the Environment. You can add one by going to Setup -> Helm_or_Highwater -> Environments + Add Environment.

Once you have defined the Environment, you can define the Service Infrastructure via + Add Service Infrastructure.

Click Submit.

Next you will create a Workflow by going to Setup -> Helm_or_Highwater -> Workflows + Add Workflow.

Once you hit Submit, you can add a Workflow Phase [Canary Phase].

Once you hit Submit, in the Phase, add the recommended Release Name "${service.name}-${env.name}-${infra.helm.shortId}", or, like part one, you can use "my-nginx", which is what I am using in this example.

Next you can create a Pipeline by going to Setup -> Helm_or_Highwater -> Pipelines + Add Pipeline.

Click Submit and add a Pipeline Stage.

Now it is time to start your Deployment. You can navigate to Continuous Deployment -> Start New Deployment.

You can watch the success!

You can validate in the Kubernetes Web UI back in the default Namespace.
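You can also verify from the CLI on the EC2 instance [Helm V2 syntax]:

helm ls                      # the my-nginx release should show a status of DEPLOYED
kubectl get pods -n default  # the NGINX Pod(s) should be Running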

Just like that, you have a fully functioning, consistent, and repeatable Helm infrastructure!

Harness, your Partner in Kubernetes

If these steps seemed kind of lengthy, you are correct. The Harness wiring steps are simple once you have all of the needed infrastructure in place, e.g. a Kubernetes Cluster running Helm and a piece of infrastructure to act as a bastion host [aka jump box]. Helm certainly is a powerful tool that is standard issue in many a Kubernetes toolkit. As the Kubernetes ecosystem continues to evolve, Harness will be there by your side providing consistency.

Cheers!

-Ravi
