November 11, 2020 | 3 min read

Deploying to AKS, EKS, and GKE all at Once

Kubernetes is widely seen as the platform to build your next-generation platforms on. However, Kubernetes is not without operational complexity, which is why many organizations look to public cloud vendors to run their Kubernetes workloads. Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and Google Kubernetes Engine (GKE) have all seen substantial growth and adoption in recent years.

When organizations design distributed systems, the old adage of not putting all your eggs in one basket is a real design consideration, and leveraging more than one public cloud vendor is a popular approach. Unlike vendor-specific cloud services, Kubernetes transcends the public cloud vendors; a workload that runs on one cluster should be able to run on another. But as infrastructure complexity increases, deployment complexity increases with it. In this example, learn how to deploy across multiple Kubernetes providers with ease using Harness.

As always, follow along with the blog and/or watch the video.

Provisioning Your Instances

To follow the example, we will need to create Kubernetes clusters in the three public cloud vendors. Personally, I am most familiar with AWS and used it the most in my previous roles; spinning up resources in Azure and GCP will be a first for me. The very first step is to provision accounts in AWS, Azure, and GCP if you have not already.

Amazon EKS Provisioning

My favorite tool to provision EKS clusters is eksctl. If you are using a Mac, the easiest way to install eksctl is with Homebrew, as sketched below. With eksctl installed, running the create command that follows will spin up an EKS cluster.
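A minimal install sketch, assuming Homebrew is already present; the Weaveworks tap shown here was the documented route at the time of this blog, so check the eksctl docs for the current recommendation:

#Install eksctl via Homebrew
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
eksctl version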

#Create EKS Cluster
eksctl create cluster \
--name captain-canary-eks \
--version 1.18 \
--region us-east-2 \
--nodegroup-name standard-workers \
--node-type t3.xlarge \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--node-ami auto

Once provisioned, run kubectl get nodes to validate connectivity and cluster status.

With EKS out of the way, let’s move to AKS.

Azure AKS Provisioning

Head over to the Azure Portal and sign in. Navigate to + Create a resource, then Kubernetes services. The Microsoft Documentation has a great getting started guide to get your first AKS cluster up and running.

Once you run through the configuration wizard, your AKS cluster is up and running.
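If you would rather stay in the terminal, a rough Azure CLI equivalent is sketched below. The resource group and cluster names match the ones used later in this post, while the region and node count are assumptions on my part rather than values from the wizard:

#Create an AKS cluster via the Azure CLI [sketch]
az group create --name RaviGroup --location eastus
az aks create --resource-group RaviGroup --name captain-canary-azure --node-count 2 --generate-ssh-keys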

To round out the three public cloud vendors, the last step is to spin up a GKE cluster.

Google GKE Provisioning

Open one more browser tab, head to the Google Cloud Console, and sign in. Navigate to Kubernetes Engine, then + Create Cluster. The Google Documentation has a great getting started guide if this is your first GKE cluster.

With the provisioning out of the way, your GKE cluster is up and running.
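As with AKS, there is a CLI alternative to the console wizard. A rough gcloud sketch, reusing the cluster name, zone, and project that appear in the get-credentials command later in this post; the node count is an assumption:

#Create a GKE cluster via gcloud [sketch]
gcloud container clusters create captain-canary-gke --zone us-east1-b --project fresh-forest --num-nodes 2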

With all three clusters running, the next step is to start wiring up Harness for greatness.

Harness Prep

Let’s wire Harness to act on our behalf. We will leverage Harness Delegates, i.e., Harness worker nodes, deployed inside each of our clusters. A good distributed system design consideration is to have Harness Delegates across the disparate cloud providers for availability. It is also possible to have a single Harness Delegate deploy to the other Kubernetes clusters by managing the IAM / Service Principal / Cloud IAM roles required for resource interaction in each cloud provider.

To get started, navigate to the Harness Web UI and log in. If you don’t have a Harness Account, you can sign up for one for free.

Harness Delegate Installations

We will install Harness Delegates in EKS, AKS, and GKE in that order, since my machine is wired to EKS via the CLI but not yet to AKS or GKE. You can install the Harness Delegates in any order if you have a CLI wired for one of the other cloud providers first. The instructions below show how to re-inject the kubeconfig from each cloud provider so you can work entirely from the terminal on your local machine.

EKS Delegate Installation

Installing the Harness Delegate is very easy: navigate to Setup, then Install Delegate. Select Kubernetes YAML and name the Delegate something such as “eks-delegate”.

Once downloaded, expand the tar.gz.

In the expanded folder, there is a README file with the command to install the Harness Delegate: kubectl apply -f harness-delegate.yaml.
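For reference, the full sequence from download to a quick verification looks roughly like this. The archive name and the harness-delegate namespace below are the defaults I saw; yours may differ depending on your download and Delegate configuration:

#Extract and install the Harness Delegate [sketch]
tar -xzvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes
kubectl apply -f harness-delegate.yaml
#Check that the Delegate pod comes up
kubectl get pods -n harness-delegate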

If you have another one of the cloud provider CLIs installed first, you can always re-inject the EKS cluster kubeconfig with aws eks --region your-region update-kubeconfig --name cluster-name, e.g., aws eks --region us-east-2 update-kubeconfig --name captain-canary-eks.

After a few moments, the Harness Delegate will be wired into Harness, and you can validate that in the UI.

With the first Harness Delegate out of the way, we can repeat a similar process for AKS and GKE.

AKS Delegate Installation

We will need kubectl access for the AKS cluster. The easiest way is to have the Azure CLI inject the kubeconfig context. If you have not already, install the Azure CLI.

The command to inject the kubeconfig is:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

#Azure Commands
brew update && brew install azure-cli
az login
az aks get-credentials --resource-group RaviGroup --name captain-canary-azure
kubectl get nodes

Once kubectl is wired up, go back and re-download a Harness Delegate from Setup -> Install Delegate. Select Kubernetes YAML and give it the name “aks-delegate”.

Expand the tar.gz and install into your AKS cluster with kubectl apply -f harness-delegate.yaml.

In a few moments, validate that there are now two Harness Delegates present.

Next, follow a similar flow for the GKE cluster.

GKE Delegate Installation

Following a similar flow to EKS and AKS, having the Google Cloud CLI inject the kubeconfig is the easiest way to get kubectl access up and running. If you have not installed the Google Cloud CLI, the easiest way is to use Homebrew. If you have an IPv6 ISP at the time of this blog, you might have to disable IPv6 during your Google Cloud SDK installation.

Google Cloud gives you the gcloud CLI command to update your kubeconfig right in the UI. Navigate to the Google Cloud Console and click “Connect” on your GKE cluster to get your command.

The Google Cloud command to inject kubeconfig:

gcloud container clusters get-credentials cluster-name --zone your-zone --project your-project

#GCP Commands
networksetup -setv6off Wi-Fi
brew cask install google-cloud-sdk
export CLOUDSDK_PYTHON="$(brew --prefix)/opt/python@3.8/libexec/bin/python"
source "$(brew --prefix)/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.bash.inc"
source "$(brew --prefix)/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/completion.bash.inc"
gcloud init
networksetup -setv6automatic Wi-Fi
gcloud container clusters get-credentials captain-canary-gke --zone us-east1-b --project fresh-forest

Once kubectl is wired up, go back and re-download a Harness Delegate from Setup -> Install Delegate. Select Kubernetes YAML and give it the name “gke-delegate”.

Expand the tar.gz and install into your GKE cluster with kubectl apply -f harness-delegate.yaml.

With the final Harness Delegate installed, you can validate that all three Delegates are in the Harness UI.

With the hard stuff out of the way, it is time to add the Kubernetes clusters to Harness as deployment targets and create Harness Workflows to deploy to all three.

Harness Kubernetes Wirings

Harness has a concept of a CD Abstraction Model where the resources and steps are abstracted away. Kubernetes clusters fall under the Cloud Provider abstraction, so three Cloud Providers, one for each cluster, will need to be set up. Since a Harness Kubernetes Delegate can inherit details from the cluster it is deployed into, wiring Kubernetes clusters to Harness is a breeze.

Navigate to Setup -> Cloud Providers + Add Cloud Provider. Select Kubernetes as the type. For the EKS cluster, set the display name to “eks-cluster” and inherit the details from the “eks-delegate”.

Click Test to validate and click Next to submit.

Once added, the EKS cluster will show up on the list.

Follow a similar flow for the AKS and GKE clusters.

Adding the AKS cluster:

Adding the GKE cluster:

Once added, both the AKS and GKE clusters will be available as Cloud Providers.

With the Kubernetes wiring completely done, we lastly need to create Harness Workflows to deploy all of the goodness.

Harness Workflow

If you are familiar with Harness, the lifeblood of any deployment is a Harness Application, which houses all the abstractions your deployment needs. To create an Application, navigate to Setup + Add Application and give it the name “Grand Kubernetes”.

Next, we will wire all three Kubernetes environments together with a Harness Environment.

Setup -> Grand Kubernetes -> Environments + Add Environment. Give it the name “Hybrid Kubernetes”.

Next, we can add the Infrastructure Definitions, one for each of the clusters. To set up EKS, give it the Name “EKS” with the Cloud Provider Type “Kubernetes Cluster” and a Deployment Type of “Kubernetes”. Make sure to point the Cloud Provider at the “eks-cluster” that was wired up in the steps above.

Rinse and repeat for the AKS and GKE clusters.

The next step is to define the Harness Service, i.e., what you are going to deploy. You can create a Service by going to Setup -> Grand Kubernetes -> Services + Add Service. Let’s deploy Nginx with the Deployment Type of “Kubernetes”.

With the Service created, wire in the location of an Nginx image. Public Docker Hub works fine.

Click on +Add Artifact Source and leverage the default Source Server [Harness Docker Hub] and “library/nginx” as the image.

Once you click submit, Nginx is wired to be deployed.

Next, we will create three Harness Workflows or steps, one for each Kubernetes environment.

Setup -> Grand Kubernetes -> Workflows + Add Workflow. Name the Workflow “Deploy EKS” with a Workflow Type of “Rolling”, the Environment as “Hybrid Kubernetes” with the Service “Nginx” and the Infrastructure Definition of “EKS”.

Once you hit Submit, the Workflow is saved. Let’s rinse and repeat again for the AKS and GKE clusters.

Navigate back to Setup -> Grand Kubernetes -> Workflows and add two more Workflows.

AKS Workflow:

GKE Workflow:

Once you hit Submit, you should see all three of the Workflows.

To stitch these Workflows together, we can create a Harness Pipeline to execute them sequentially; Harness Pipelines support a few execution models, but we will leverage the sequential model for simplicity.

Setup -> Grand Kubernetes -> Pipelines + Add Pipeline

We can add one Pipeline Stage per Kubernetes environment/provider to demonstrate the power of the Pipeline. Click on “Add Pipeline Stage”.

Define an Execution Step with Step Name “EKS” [uncheck Auto Generate Name] and Execute Workflow set to “Deploy EKS”.

Once you hit Submit, repeat the process for AKS and GKE.

AKS:

GKE:

With the Kubernetes providers lined up, you should have a three-stage Pipeline.

Now you are ready to deploy across three Kubernetes providers!

Watch the Magic

There are a few places to kick off the deployment, for example directly from the Pipeline you created. Alternatively, use the left-hand navigation to go to Continuous Deployment -> Deployments -> Start New Deployment. Select Pipeline, then the Application and Pipeline that were just created, and select an Nginx tag [version].

Hit Submit and watch the magic!

Just like that, you have successfully deployed across the three major public vendors’ Kubernetes platforms.
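If you want to double-check from the terminal as well, you can flip between the three kubeconfig contexts that were injected earlier and confirm the Nginx rollout in each cluster. The context name below is a placeholder; use the names your cloud CLIs generated:

#Spot-check the rollout in each cluster [sketch]
kubectl config get-contexts
kubectl config use-context your-eks-context
kubectl get deployments --all-namespaces | grep nginx
#Repeat with the AKS and GKE contexts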

Continuing the Journey With Harness

Deploying across EKS, AKS, and GKE might be a cliché example, but the Harness Platform is robust and easy to consume, and it can help you and your organization with your cloud modernization goals. Feel free to sign up for the Harness Platform today.

Cheers!

-Ravi


