November 1, 2021 | 3 min read

The Journey to Microservices & Deployment Strategies

Day to day, I speak with dozens of customers and prospects about their software delivery strategy. Across the gamut of topics, one of the most important discussions that comes up is where they are in their monolith-to-microservices journey. Some are starting out greenfield, building a microservice architecture from the ground up. Others built - or inherited - monolithic applications that have been tried and true for years, but have inherent limitations that slow their ability to deliver at the speed the business is asking for. Most are somewhere along the journey from monolith to microservices, working through all the possibilities and obstacles that arise.

Let’s dive into that journey, define our terms, and talk about what the mythical state of microservices looks like, where teams are able to deploy smaller components on demand with no downtime.

Basics of Microservices

Why Microservices?

I spent a lot of my college years unloading trucks as a way to pay the bills. Imagine if my crew of 5-10 people all congregated in the same part of the warehouse, attempted to grab boxes at the same time, from the same pile, and take them to the same place. Chaos? Maybe. Inefficient? Definitely. Any system, whether software, hardware, or mechanical, breaks out duties to gain efficiency. Microservices are a systematic approach to segmenting duties within a software system.

What Are Microservices?

At the most basic level, a microservice is a software component that has a single distinct purpose within the greater system. Examples of this purpose could be handling payments, shopping carts, or users in an e-commerce application.

Ask five software engineers to define microservices and you’ll get six or more definitions. With that said, there’s a general consensus on the why, and on the common traits microservices share. There remains ongoing debate over where the lines are drawn in service duties, how to properly apply DRY (don’t repeat yourself) principles, and more. Three traits come up consistently:

Carefully Scoped: The duties should be boiled down to a basic set of tasks around a single responsibility (generally mapped to a business capability). Too broad and you end up with Monolith Lite™; too granular and you end up with a system that feels like you built an entire house out of Legos.

Loose Coupling: The interaction between two or more services should follow a black-box approach. Regardless of what a developer of service A knows about service B, the two should interact only through an agreed-upon contract (API). Under this principle, you treat another microservice the same way you treat any third-party service you integrate with: you assume it will do only what it has published in its documentation and specifications (a short sketch of this appears below).

Highly Available and Fault Tolerant: These services are narrow in scope, but should be self-contained and able to handle adverse events without disrupting the greater application.

In general, good microservice design follows the principles of good code and application design. A given component should be able to perform regardless of what other services are doing, so long as the services it depends on are available and honoring their agreed-upon contracts.
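
To make loose coupling concrete, here’s a minimal sketch in Python. The service, endpoint, and field names are all hypothetical; the point is that service A touches the payments service only through its published contract, never its internals.

```python
import json
import urllib.request

class PaymentsClient:
    """Hypothetical client for a payments microservice. The caller relies
    only on the published contract: POST /charges with a JSON body,
    returning a JSON document containing a 'status' field."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def charge(self, order_id: str, amount_cents: int) -> str:
        body = json.dumps({"order_id": order_id, "amount_cents": amount_cents})
        req = urllib.request.Request(
            f"{self.base_url}/charges",
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["status"]

# Treat it like any third-party service you integrate with:
# payments = PaymentsClient("http://payments.internal")
# payments.charge(order_id="1234", amount_cents=4999)
```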

Difference Between Monolithic Apps and Microservices

In practice, the most important benefit of microservices is to get away from the all-or-nothing approach that often must be used in a monolithic architecture. Within a monolith, some components can be swapped out or patched, but absent a good system to manage that process, you often end up with nearly as many headaches as you would if you reset the whole system during each deployment.

Scaling is another area where microservices can really shine. When an app is at capacity, it rarely needs more of “everything” - usually it just needs more of specific services. For example, an e-commerce app could be getting slammed on checkouts, but not need additional search capacity. Having smaller components that can be horizontally scaled to where the need is allows for more efficient use of resources and faster responses to specific needs.
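
As a back-of-the-napkin illustration, here’s the proportional scaling rule used by the Kubernetes Horizontal Pod Autoscaler, sketched in Python. The utilization numbers are made up, but they show why only the overloaded service grows.

```python
import math

def desired_replicas(current: int, utilization: float, target: float = 0.7) -> int:
    """Proportional scaling rule (the same idea the Kubernetes HPA uses)."""
    return max(1, math.ceil(current * utilization / target))

# Checkout is slammed while search is idle, so only checkout scales out:
print(desired_replicas(current=4, utilization=1.40))  # checkout -> 8 replicas
print(desired_replicas(current=4, utilization=0.30))  # search   -> 2 replicas
```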

Finally, microservices allow for greater agility. If you are adding a new capability, it may be a new service entirely, giving it the freedom to be designed optimally and without the constraints of existing services. Teams working on it won’t have to learn a new codebase and could even opt for a different programming language better suited to the service. Existing services can be updated or replaced with less pain, as teams have a clearly defined contract (most commonly a RESTful API using a ubiquitous format such as JSON or XML) the service must fulfill. Contrast this with a monolith, where you, as a service owner, often lack full visibility into how every other part of the application is calling (and often misusing) your code.

Deploying From Monolithic Apps to Microservices

Story time again! I used to deploy a monolithic application with Jenkins. We were held to an SLA of 99.7% uptime and had to do blue-green cutovers. This was early Jenkins, pre-Kubernetes days, when microservice applications were not in vogue. The process to deploy was slow, taking 2-4 hours to stand up, apply patches, and prep ahead of cutover, using more than 100 scripts along the way. There are many words to describe this, some of which I’ve been told I’m not allowed to use on the company blog. This is not everyone’s experience with monolithic applications, but it’s a common tale.

Few companies jump straight from what is described above to the promised land of the perfect microservice application. Transitions often look like breaking off services and aspects of the application that can more easily stand alone. As time goes on, that big monolith gets smaller and fewer components must be deployed at the same time, every time. The ideal state is one where each service is small, quick to deploy on demand, able to scale up to meet capacity, and able to roll back quickly.

Breaking Down a Microservices Deployment

The end state of the microservices journey will vary depending on the goals of a given organization, but the capabilities sought are largely the same. Agility, flexibility, and high availability (or as I call them, the ‘ilities’ of great tech) are just some of the benefits. When it comes to deployments, this lightweight approach can manifest as shipping features weekly, or even fearlessly deploying a given service multiple times a day.

Microservices Deployment Strategies

Before you get into the weeds of executing your deployment, it’s worth a quick discussion of common platforms you will be deploying to, and where the services ultimately reside. 

Kubernetes

Without getting into the intricacies of managing clusters (that’s another book for a different day), Kubernetes allows for a high level of freedom in deployment and scaling. Services can be declaratively distributed among nodes, regions, etc. to maximize availability, performance, and scaling. Additionally, service meshes such as Istio provide for the ability to efficiently use sidecar patterns, traffic routing, and more.
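
As a small taste of that freedom, here’s a sketch using the official kubernetes Python client to scale a single deployment. The checkout name and shop namespace are assumptions for illustration.

```python
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()   # authenticate using the current kubectl context
apps = client.AppsV1Api()

# Scale only the checkout service; nothing else in the cluster is touched.
apps.patch_namespaced_deployment_scale(
    name="checkout",
    namespace="shop",
    body={"spec": {"replicas": 8}},
)
```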

Physical/Virtual Machines

While Kubernetes is taking the DevOps world by storm, millions of deployments to server infrastructure happen every day, for both monolithic and microservice applications. These include direct use of traditional HTTP servers, Docker containers, and orchestration solutions such as Docker Swarm. When deploying directly to servers, the common patterns are either deploying at least one instance of every service to each host, or dedicating hosts to individual services. The advantage of the latter is easier packaging of an individual service, with the downside that scaling horizontally can take time, depending on how optimized the process is.

Serverless

Serverless is another design pattern that is gaining popularity, with AWS Lambda being one of the most common examples, alongside Azure Functions and Google Cloud Functions. With this pattern, there are no servers to maintain - you provide code and it is executed somewhere. There is no OS to manage, and state is kept in databases and other storage, so this works well for services that execute tasks by interacting with other services. While I’ve seen some customers employ a pure serverless approach, most use serverless as one component within a larger system. There’s a whole lot more to say on serverless, so for now I’ll leave you with this.
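
For a feel of the pattern, here’s a minimal sketch of an AWS Lambda handler in Python. The orders DynamoDB table and the API Gateway-style event shape are assumptions for illustration.

```python
import json
import boto3

# State lives in external storage, not on a server we manage.
table = boto3.resource("dynamodb").Table("orders")  # hypothetical table name

def handler(event, context):
    order = json.loads(event["body"])   # assumes an API Gateway proxy event
    table.put_item(Item=order)          # persist state to the database
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```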

Strategies

Rolling

Call it what you want, but services that don’t have to be constantly available still use the pattern of swapping new in for old and hitting the restart button. The simplicity is compelling; the disruption to end users is less so. With allowances for outage windows getting smaller, this approach is falling further and further out of favor.

[Diagram: Rolling deployment strategy]
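
In pseudocode terms, a rolling swap looks something like this sketch. All of the helpers are hypothetical stand-ins for your load balancer and deployment tooling.

```python
# Stub helpers; in real life these would call your LB and deploy tooling.
def remove_from_lb(host): print(f"draining {host}")
def deploy_version(host, version): print(f"deploying {version} to {host}")
def health_check(host): return True
def add_to_lb(host): print(f"{host} back in rotation")

def rolling_deploy(hosts, version):
    """Replace instances one at a time: simple, but users may notice."""
    for host in hosts:
        remove_from_lb(host)            # take the host out of rotation
        deploy_version(host, version)   # swap new for old and restart
        if not health_check(host):      # stop the rollout on a bad host
            raise RuntimeError(f"{host} failed its health check")
        add_to_lb(host)                 # put the host back in rotation

rolling_deploy(["web-1", "web-2", "web-3"], "v2.4.0")
```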

Blue-Green

Blue-Green is the most straightforward minimal-downtime strategy to implement: you stand up an identical set of the service at the same capacity as the one currently running, then switch traffic at the load balancer to the new set. The key benefits are minimal disruption to end users during cutover, not having to design your service to handle multiple versions running side by side, and the ability to revert instantly by shifting traffic back to the previous deployment. The downsides are that, in the case of failure, 100% of users are affected until traffic is shifted back, and the cost of capacity to keep two identical sets of the same service running.

[Diagram: Blue-Green deployment strategy]
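
Here’s a runnable toy of the blue-green flip; the LoadBalancer class is a hypothetical stand-in for whatever actually fronts your traffic.

```python
class LoadBalancer:
    """Toy LB: one pool serves all traffic at any given moment."""
    def __init__(self):
        self.active_pool = "blue"

    def switch_to(self, pool):
        self.active_pool = pool   # all traffic moves at once

def blue_green_deploy(lb, smoke_test):
    idle = "green" if lb.active_pool == "blue" else "blue"
    # Stand up the idle pool at full capacity with the new version,
    # then verify it before any user traffic arrives.
    if not smoke_test(idle):
        return None               # abort: users never saw the bad version
    previous = lb.active_pool
    lb.switch_to(idle)            # cut over: 100% of traffic moves
    return previous               # rollback is instant: lb.switch_to(previous)

lb = LoadBalancer()
previous = blue_green_deploy(lb, smoke_test=lambda pool: True)
print(lb.active_pool)  # -> green; lb.switch_to(previous) reverts instantly
```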

Canary

While the first two strategies can be executed manually, Canary is a strategy that only works well as part of a CD (Continuous Delivery) solution. In a Canary rollout, a small set of the new service is spun up, a fraction of the traffic is directed to it, and once a verification gate is passed, the update is rolled out across the full service pool. The most obvious advantages are that application issues will only affect a subset of the user base, and backing out is as simple as redirecting traffic back to the still-running stable service. Until the last few years, executing a Canary strategy consistently was extremely difficult, but Harness was one of the first (if not the first) products to provide scriptless Canary deployments right out of the box. Outside of the complexity of implementing Canary rollouts, other reasons some may elect not to use them include applications that cannot handle multiple versions of the same service running, or lacking the verification infrastructure to see the full benefit of the strategy.

[Diagram: Canary deployment strategy]
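
And a runnable toy of the canary gate; the traffic-weight hook, metric source, and thresholds are illustrative assumptions.

```python
def canary_deploy(set_canary_weight, error_rate, max_error_rate=0.01):
    """One verification gate: a slice of traffic, then promote or back out."""
    set_canary_weight(0.05)             # send ~5% of traffic to the new set
    if error_rate() > max_error_rate:   # verification gate
        set_canary_weight(0.0)          # back out: stable set still serves everyone
        return False
    set_canary_weight(1.0)              # gate passed: full rollout
    return True

# Toy wiring: a mesh/LB hook and a metric source, both hypothetical.
ok = canary_deploy(
    set_canary_weight=lambda w: print(f"canary weight -> {w:.0%}"),
    error_rate=lambda: 0.002,
)
print("promoted" if ok else "rolled back")
```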

Comparing and Contrasting

These strategies and their variations are the industry standard today, and from my experience, where a company is on their microservice journey tends to correspond with the strategy of choice. Early on, rolling-style strategies are simple to implement and get the job done. As requirements for uptime and delivery increase, so does the need for a more complex strategy. Blue-Green is simple in concept, has great rollback, and minimal downtime in happy path cases. Canary deployments tie together the best of all worlds for minimizing downtime, fast rollback, and minimal disruptions in the case of failed deployments. Canary-style patterns can be extended even more with multiple stages, as well as targeting specific traffic with meshes such as Istio.

The right strategy for you should come down to a number of factors (a toy decision sketch follows the list):

  • Uptime requirements (SLAs/SLOs);
  • Quality requirements;
  • Service maturity (design, fault tolerance, compatibility);
  • The tool you use to implement the rollout.
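
Under those factors, a toy decision helper might look like this; the thresholds are illustrative assumptions, not recommendations.

```python
def pick_strategy(uptime_slo: float,
                  multi_version_safe: bool,
                  has_verification: bool) -> str:
    if uptime_slo < 0.99:
        return "rolling"      # outage windows are tolerable; keep it simple
    if multi_version_safe and has_verification:
        return "canary"       # smallest blast radius; needs side-by-side versions
    return "blue-green"       # minimal downtime without multi-version support

print(pick_strategy(0.997, multi_version_safe=True, has_verification=True))  # canary
```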

Deploying Microservices with Harness

Harness provides all of these strategies without any scripting required. The same service can even be deployed to multiple environments with different strategies. If Canary deployments are overcomplicated and Blue-Green deployments are too expensive for your development environment, a rolling deployment it is! Want your users to go on about their day without realizing you deployed something new? Canary and Blue-Green sound like what you’re looking for.

Conclusion

Harness was designed to let you build out pipelines with complex strategies in minutes. Don’t believe me? Book your demo today! And if you’re not ready for a demo yet, keep on learning. Download our eBook on Pipeline Patterns to learn more about the strategies mentioned above - and how to use them correctly.
