Thinking of deploying microservices on Kubernetes? Chances are you'll run into Istio at some point. Here's a quick rundown of Istio and how Harness supports this new technology.
“Just stick your app in a container and deploy it — it's super easy, fast, scalable, and portable,” said everyone two years ago.
Turns out deploying microservices in containers isn’t as easy as we first thought.
“No problem, just use Kubernetes to manage your containers” said everyone a year ago.
Turns out managing Kubernetes isn’t exactly easy, unless you have a deep penchant for YAML.
Today, the hot new buzzword technology is Istio.
What is Istio?
Istio is an open-source service mesh.
What is a Service Mesh?
Think of it as a management wrapper around all your microservices (and infrastructure) so you can control their connectivity and security.
Imagine a bouncer/doorman watching the door to each of your pods across all your Kubernetes clusters. That is essentially what an Istio Envoy proxy is.
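Concretely, that "doorman" is an Envoy proxy that Istio injects as a sidecar container into each pod, where it intercepts the pod's inbound and outbound traffic. As a rough sketch of what an injected pod spec ends up looking like (the app name and image tags here are illustrative, not from a real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # hypothetical application pod
  annotations:
    sidecar.istio.io/inject: "true"   # request automatic sidecar injection
spec:
  containers:
  - name: my-app                      # your application container, unchanged
    image: my-app:1.0
  - name: istio-proxy                 # Envoy sidecar added by Istio's injector
    image: docker.io/istio/proxyv2    # intercepts all traffic in and out of the pod
```

In practice you rarely write the `istio-proxy` container yourself; Istio's injector adds it to your deployments automatically.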
Istio also claims to handle observability (monitoring, logging, tracing), but then again, so does everyone else these days.
Istio Use Cases
The Istio website defines its core functionality as:
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
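To make the "rich routing rules" above concrete, here's a minimal sketch of an Istio VirtualService that routes a specific group of users to a new version and adds a retry policy. The service and subset names are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-routes          # hypothetical routing rule for a 'reviews' service
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: beta-tester    # route beta testers to the v2 subset
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                      # everyone else stays on v1
    - destination:
        host: reviews
        subset: v1
    retries:
      attempts: 3               # retry failed requests up to 3 times
      perTryTimeout: 2s
```

The same API also expresses weighted traffic splits, which is what makes Istio a natural fit for canary deployments.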
Istio has also been designed for extensibility, more specifically to support a wide range of microservices deployment patterns. It therefore comes as no surprise that Harness has decided to support Istio and make it a first-class citizen inside our Continuous Delivery platform.
Example: A Blue/Green Canary Deployment
Let's build a simple blue/green deployment that uses canary phases to control how application traffic is migrated from the green environment (live current version) to the blue environment (staging new version) using Harness, Amazon EKS, and Istio.
Step 1 – Create a new deployment workflow in Harness
We give our new workflow a name, a canary deployment type, and an environment to deploy to.
Step 2 – Create One or More Canary Phases
Next, let's create the canary phases that will migrate traffic for our blue/green environment setup using Istio.
Click ‘Add Phase’ and select the artifact you wish to deploy and the service infrastructure (environment instance) you want to deploy to.
To configure Istio, click ‘Kubernetes Service Setup’.
Now click ‘Service Setup’ and select Load Balancer from the drop-down.
Click ‘Ingress Rules’, select Istio from the drop-down, and enter your Gateway/Host information:
Finally, click ‘Upgrade Containers’ and enter 100% desired instances for the blue environment and 100% for the green (current) environment. This means the two environments will run the same number of instances, but all traffic will remain directed to the green (current) environment.
To split the traffic, we simply enter the percentage of traffic we wish to move to the blue environment (the new version's instances) for our canary phase, in this case 10%.
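Behind the scenes, a traffic split like this maps to weighted routes in an Istio VirtualService. A minimal sketch of the resulting 90/10 split (the host and subset names are hypothetical, not what Harness generates verbatim):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-canary       # hypothetical route for the service under deployment
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: green           # current live version
      weight: 90                # keep 90% of traffic on green
    - destination:
        host: my-service
        subset: blue            # new staging version
      weight: 10                # shift 10% of traffic to blue
```

Each subsequent canary phase simply adjusts these weights until blue carries 100% of the traffic.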
Lastly, once we redirect traffic to our blue environment we can add one or more verifications to our canary phases to validate the availability, performance and quality of the traffic/users that are using this environment.
We do this so that if verifications fail, we can easily roll back all traffic to the green environment in seconds. If verifications succeed we can then move to the next canary phase and split more traffic via Istio to the new blue environment.
Once we define our canary phases we end up with a deployment workflow that looks like this:
Phase 1 sets up the blue environment with 100% of the instances that are available in the green environment and deploys the new artifact to the blue environment. All traffic at this point is still directed to the green environment.
Phase 2 then redirects 10% of the traffic to the blue environment using Istio. If verification succeeds then deployment continues to Phase 3, else the deployment rolls back all traffic using Istio to the green environment.
Phase 3 then redirects 100% of the traffic to the blue environment using Istio.
The good news is that you can templatize this deployment workflow using workflow variables to parameterize the inputs (artifacts, environments, …) so that multiple teams can leverage a standard blue/green or canary deployment.
Step 3 – Run the Blue/Green deployment with Canary Phases
Once we’ve built our new deployment workflow we can now run it 🙂
Below is the real-time deployment workflow that illustrates every step and output of the blue/green deployment.
In the screenshot below, we can see canary phase 2 of our blue/green deployment and what actually happened from the Kubernetes controller and Istio perspective, via the console output of the traffic splitting. We can clearly see that 90% of traffic is pointed at green (the current version) and 10% at blue (the new version).
We can also see the verification steps that followed once the traffic split was made. In the example above we kicked off a load test on the blue environment, then validated metrics in Prometheus, New Relic and also log files in Sumo Logic. Lastly, a manual approval was then required to move to Phase 3 of the canary deployment.
That's it, all done! Building a blue/green canary deployment with Harness and Istio should take you no more than five minutes to create and run.
I’ll shortly be creating a video that walks through the above steps (I’m recovering from a cold, so my demo voice is a bit wheezy right now).