July 19, 2021

Tutorial: Vault Agent Advanced Use Case With Kubernetes Delegates and Shared Volumes

This tutorial covers deploying Vault Agent with Kubernetes Delegates using shared volumes for secure and reliable token management. It provides steps for setup, configuration, and managing multiple Vault servers to ensure seamless integration and improved security in Kubernetes environments.

This tutorial is an advanced continuation of my earlier article about integrating the Vault Agent with Harness. Based on feedback from expert customers, I decided to create another tutorial focusing on Kubernetes Delegates and our new capability to support the Vault Agent as the integration method for the Harness Secrets Manager. We’ll take advantage of ConfigMaps, PersistentVolumes, Secrets, and more to create a very reliable Vault Agent deployment.

It’s super important to keep in mind that the Vault Agent IS NOT a component of Harness. It lives inside HashiCorp’s Kingdom. At the end of the day, our Delegate only needs to be able to read a sink file containing a good token, even if Tom Marvolo Riddle himself put it there. The Vault Agent is just a facilitator, and Harness is NOT responsible for your Vault Secrets Manager.

Important Note: Managing Multiple Vault Servers

Let’s say you have one Vault Server per Environment (like DEV, QA, PROD).

I decided to take advantage of the Harness Environment Name in some Manifest templates. That’s a good way to set this up with more than one Vault Server.

Also, to keep things atomic and avoid a single point of failure, I recommend a one-to-one relationship between Vault Servers and Vault Agents.

So, if you have one Vault per Environment, you can have multiple Deployments of this Vault Agent.

To help us address that use case, we’ll take advantage of some Service Config Variables that will be overwritten by the Environment’s Service Configuration Overrides (this is a preview of a later step, but let’s take a peek):

This is the Service Config Var:

Service Config Var

And this is the override coming from the DEV Environment:

Tutorial Part 1: Creating a More Professional Vault Agent Deployment in K8s

Requirements

A little K8s experience, a target cluster, and a good old Harness Account. Since we'll use Persistent Volumes, the Vault Agent Workload must reside in the same Kubernetes Cluster Namespace as the Delegate.

Tasks on Vault

Please refer to my first tutorial, linked at the beginning of this article, to configure a good AppRole in your Vault Server.

Security Concerns

Since the RoleID and the SecretID are also super important Secrets, I decided to store them in the Default Google KMS Secrets Manager managed by Harness. Then, I used the templating engine to retrieve them for me.

Naturally, this is something you must decide with your SecOps or Security Architects. I’m showing what I did in my use case.

The Vault Agent Kubernetes Manifest Source

I’ll keep all files related to our Vault Agent K8s Service Manifests in this GitHub repo. I won’t break this repo, but it is a lab repo, so please fork it just to be safe.

First Step

The first step is to store both RoleID and SecretID in Harness. They were previously created in my first tutorial, in case you want to adopt the same approach. Again, the underlying Auth Method has no relationship with Harness, but it’s a decision you should make with your Vault Admin.

Encrypted Text
Encrypted Text

Important: if you want to add another security layer, you can store the values as base64 and use the data field instead of stringData in the Kubernetes Secret manifest (which currently uses stringData). It’s up to you!
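For example, you could encode the values before placing them under data (the SecretID below is just a placeholder):

```shell
# Base64-encode a placeholder SecretID for the Secret's data field.
# -n keeps a trailing newline out of the encoded value.
echo -n "my-secret-id" | base64
# prints: bXktc2VjcmV0LWlk
```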

Second Step

Let’s create the Harness Service that will host the Vault Agent workload.

Vault Agent Kubernetes

Add the Vault Docker image, which is available as an Artifact Source:

Artifact Source

Make sure to link your Remote Manifest:

image

Add that Service Config Variable we talked about earlier:

Service Config Var

Third Step

Now, create an Environment and an Infrastructure Definition, and add the Override. This is the trick for handling multiple Vault Servers; we will use these variables in the templating trick.

Infra Definition

Fourth Step

Now, let’s explore a few things related to the Vault Agent Manifest files.

If you take a look at the values.yaml file, you will see a few important things:

  • The Persistent Volume Claim name, which is the mechanism I chose to share the SINK FILE between this Deployment and the Delegate Deployment;
  • The underlying Auth Method Config HCL File;
  • And the RoleID and SecretID functions to retrieve them from the Default Secrets Manager.

Notice that I use the Harness ${env.name} expression whenever I need to keep one object per Environment. This will help us design a multiple Vault Server strategy.
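To make that concrete, a values.yaml along these lines could work. This is a sketch: the keys, names, and the secret names inside secrets.getValue() are illustrative assumptions, not the repo's exact file.

```yaml
# Illustrative values.yaml - names are assumptions, not the repo's exact file.
agentName: vault-agent-${env.name}
pvcName: vault-agent-sink-pvc-${env.name}
configMapName: vault-agent-config-${env.name}
# Hypothetical secret names stored in the Default Harness Secrets Manager:
roleId: ${secrets.getValue("vault-agent-role-id")}
secretId: ${secrets.getValue("vault-agent-secret-id")}
```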

Vault Agent Kubernetes

You can see how I’m handling all that in the deployments.yaml file from inside the templates folder.
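For reference, the auto-auth portion of such an HCL config typically looks like this. This is a sketch under assumptions: the Vault address and file paths are placeholders, not the repo's exact file.

```hcl
# Sketch of a Vault Agent config - address and paths are assumptions.
vault {
  address = "https://vault-dev.example.com:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/vault/config/role-id"
      secret_id_file_path = "/vault/config/secret-id"
    }
  }

  # The sink file our Delegate will read
  sink "file" {
    config = {
      path = "/vault/sink/token"
    }
  }
}
```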

Fifth Step

Now, let’s create a Rolling Deployment Workflow in Harness.

Workflow

Sixth Step

If you run it as is, you're already shipping one Vault Agent!

Vault Agent Kubernetes

Seventh Step

So, how do you know that this is working? You can ship the logs to your favorite tool with logging capabilities. In my case: Splunk, ELK, or Graylog.

Since we are already very far from home, let’s keep the party inside the K8s world and use a ReadinessProbe to make sure the sink file is available! Because the sink file will be present and populated with a token only at the end of this process, this is a good way to monitor whether the Pod is healthy.

Sink File
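A minimal sketch of such a probe, assuming the Agent writes its sink to /vault/sink/token (adjust the path to your setup):

```yaml
# Exec-based readinessProbe - the sink path is an assumption.
readinessProbe:
  exec:
    command: ["cat", "/vault/sink/token"]   # fails until the Agent writes the token
  initialDelaySeconds: 5
  periodSeconds: 10
```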

Even with all that, nothing in that probe will tell you if the Token is good. The Vault Agent will create the file even if your Auth Method has bad credentials. This, like everything else here, is a HashiCorp Vault design decision - nothing related to Harness.

You could create a custom script that exports VAULT_ADDR and VAULT_TOKEN and tests the token with a command like:

VAULT_TOKEN=<the_token_generated_in_sink> vault secrets list
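Here is a sketch of such a script. The sink file path is an assumption for your environment, and it requires the vault CLI with VAULT_ADDR exported:

```shell
# Sketch of a token health check - adjust the sink path to your setup.
check_token() {
  sink_file="$1"
  token="$(cat "$sink_file" 2>/dev/null)"
  if [ -z "$token" ]; then
    echo "sink file is missing or empty" >&2
    return 1
  fi
  # 'vault token lookup' exits non-zero if the token is invalid or expired
  VAULT_TOKEN="$token" vault token lookup > /dev/null
}

# Usage:
# check_token /vault-agent-sink/token && echo "token is good"
```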

I recommend you check the logs with this command:

kubectl -n harness-delegate logs pod/vault-agent-<...>

This is a good deployment:

Good Deployment

And this is bad news:

Bad Deployment

Part One Outcome

Using the same logic that you would use in an advanced Readiness Probe, we can confirm that the token is good!

Vault Agent Kubernetes

Tutorial Part 2: Sharing the Token With the Delegate Deployment (StatefulSet, To Be Specific)

We're almost at the end of the hard work. Now, it’s time to change our Harness Delegate Manifest. Yes, the one that you used to install the Delegate in your Cluster.

The only requirement is that both Vault Agent and the Delegate must live in the same K8s Cluster Namespace. We are using PVC to share the sink file.
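A minimal sketch of such a PVC follows. The size is an assumption; the claim name matches the one this tutorial uses for the DEV Environment:

```yaml
# Shared PVC sketch - size and access mode depend on your storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-agent-sink-pvc-dev
  namespace: harness-delegate
spec:
  accessModes:
    - ReadWriteMany   # with ReadWriteOnce, both Pods must land on the same node
  resources:
    requests:
      storage: 1Mi
```

Note that not every storage class supports ReadWriteMany; if yours only offers ReadWriteOnce, the Vault Agent and the Delegate Pods must be scheduled on the same node.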

First Step

Let’s change our Delegate Manifest so it can reach our sink file via Volume. I am using the very same manifest that comes from the Delegate UI wizard. No tricks here:

Delegate Manifest

The first change is to add the Volume definition at the end of the Harness Delegate StatefulSet block:

Sink File

volumes:
- name: vault-agent-sink-shared
  persistentVolumeClaim:
    claimName: vault-agent-sink-pvc-dev

And, of course, add the mount to make the file available to our Harness Delegate Container:

Vault Agent Kubernetes
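The mount on the Delegate container might look like this; the mountPath is an assumption, while the volume name matches the Volume definition in the StatefulSet:

```yaml
# Mount sketch for the Delegate container - mountPath is an assumption.
volumeMounts:
  - name: vault-agent-sink-shared
    mountPath: /vault-agent-sink
    readOnly: true   # the Delegate only needs to read the sink file
```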

Don’t worry, you can get a good example here: GitHub repo.

Second Step

We can see that now the Delegate Container can see the sink file. This is awesome!

Vault Agent Kubernetes

Let’s go ahead and check whether Harness can integrate with Vault via the new Agent Method, but using a Kubernetes Delegate:

Configure Secrets Manager

Third Step

Now, we can create and edit a few Secrets just to stress the Token a little.

Vault Agent Kubernetes

Nice!

Secrets

Any questions or comments? Let me know - I'm always happy to help.

Gabriel

Platform