January 29, 2024

Mastering Continuous Delivery: A Closer Look at How Harness Engineers Deploy Software


In the ever-evolving world of software development, achieving efficient and reliable continuous delivery is a priority for modern organizations. At Harness, this is the creed we live by. In this blog post, we'll explore the issues Harness faced and how we revolutionized our pipelines to overcome them. Let's delve into Harness' Continuous Delivery pipeline evolution and the remarkable benefits it brought to our software deployment processes.

Challenges Faced:

Migration from Harness First Gen to Harness Next Gen Platform: We use our own software to deploy Harness for customers, which meant that, like our users, we needed to migrate our own pipelines and workflows to Harness Next Gen. Because we migrated everything as-is without taking advantage of Harness CD Next Gen capabilities, we accrued some technical debt: 128 pipelines were brought over.

Complexity due to Multiple Pipelines: We have a staggering 67 services, each with its own pipeline and 7 input sets. This abundance of pipelines made management and consistency challenging, leading to potential errors and inefficiencies.

Lack of Standardization: Services were being deployed in very similar ways, often using the same logic. However, with separate pipelines for each service, maintaining uniformity became cumbersome and error-prone. The problem became more apparent as we scaled from a single-product company to a multi-product company: we needed to invest in a repeatable, scalable process where developers can quickly onboard their microservices without reinventing the wheel on how they deploy.

Pipeline Management: Pipelines and Templates were managed within the Harness UI, leading to difficulties in version control and collaboration. This approach lacked the benefits of using version-controlled systems like Git.

Inconsistent Validation: Sanity Pipelines were run ad hoc and on demand, resulting in inconsistent validation of newly deployed services. This lack of consistency made it difficult to identify and fix issues promptly.

The Redesign:

The Pipeline

The pipeline is very simple; internally we call it the Golden K8s Pipeline. This is the common pipeline we use to deploy all services across all product lines. All of our services share the same inputs for service, manifest style, and deployment type, which made it easy for them to leverage the same template.

 Our tech stack for reference:

  • Kubernetes Deployment Type
  • Kubernetes Manifests stored in Git
  • Container Images are in GAR
  • Google Secrets Manager to manage deployment secrets
  • Slack for pipeline and developer notifications
  • JIRA for our Ticketing System
  • Harness Approvals to manage deployment gates
  • Mix of Java Applications and Go Applications
  • All Harness products are managed in a single Project called Operations

The screenshot below shows the breakdown of the Stage Template we use:
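
As a rough illustration of the idea (not our actual production config — identifiers like `golden_k8s_stage` and the step values are assumptions), a shared Harness CD stage template for a Kubernetes rolling deployment looks something like this:

```yaml
# Hypothetical sketch of a Harness NextGen CD stage template.
# Names, identifiers, and values are illustrative only.
template:
  name: Golden K8s Stage
  identifier: golden_k8s_stage
  versionLabel: v1
  type: Stage
  projectIdentifier: Operations
  spec:
    type: Deployment
    spec:
      deploymentType: Kubernetes
      service:
        serviceRef: <+input>          # each service passes itself in
      environment:
        environmentRef: <+input>      # one of our environments
        infrastructureDefinitions: <+input>
      execution:
        steps:
          - step:
              name: Rollout Deployment
              identifier: rolloutDeployment
              type: K8sRollingDeploy
              spec:
                skipDryRun: false
        rollbackSteps:
          - step:
              name: Rollback Rollout
              identifier: rollbackRollout
              type: K8sRollingRollback
```

Because the service, environment, and infrastructure are all runtime inputs (`<+input>`), one template can serve every service that shares the same deployment shape.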

Input Sets

We want to reduce the set of inputs our developers need to know about, and input sets are a great way to do that. Most inputs are pre-configured, so developers who need to manually run pipelines can select a pre-canned input set to reduce the number of manual inputs. We maintain these input sets in Git.

We have an input set per environment per service. We currently manage 7 environments, so each dev team owns 7 input sets they can provide when they need to manually run a deployment.
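
A minimal sketch of one such per-service, per-environment input set (the service and environment identifiers here are hypothetical, not our real ones):

```yaml
# Hypothetical input set for one service in one environment.
# Identifiers are illustrative only.
inputSet:
  name: my-service - qa
  identifier: my_service_qa
  orgIdentifier: default
  projectIdentifier: Operations
  pipeline:
    identifier: golden_k8s_pipeline
    stages:
      - stage:
          identifier: deploy
          spec:
            service:
              serviceRef: my_service
            environment:
              environmentRef: qa
```

Storing 67 services × 7 environments of these in Git keeps the pre-canned values reviewable like any other code change.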

For triggers with input sets, the input set is always pre-set to the environment the trigger is intended to automate deployments for.

Triggers

We use a lot of GitHub webhook triggers to initiate PR and QA pipeline deployments based on a specific branch cut. This ensures we can automatically and quickly validate the features we launch without any manual intervention from our developers.

We also leverage a cron trigger to keep our lower environments in sync at all times. It always deploys the latest version to our Pre-QA (dev) environment so we can validate and test our features against the most up-to-date version before the QA branch cut. This reduces version mismatches and inability-to-test issues for our developers.
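To make the two trigger styles concrete, here is a hedged sketch of what a branch-cut webhook trigger and a nightly cron trigger might look like (connector, repo, branch, and schedule values are assumptions for illustration):

```yaml
# Hypothetical GitHub webhook trigger fired by a QA branch cut.
trigger:
  name: qa-branch-cut
  identifier: qa_branch_cut
  orgIdentifier: default
  projectIdentifier: Operations
  pipelineIdentifier: golden_k8s_pipeline
  inputSetRefs:
    - my_service_qa            # pre-set to the QA environment
  source:
    type: Webhook
    spec:
      type: Github
      spec:
        type: Push
        spec:
          connectorRef: github_connector
          repoName: my-service
          payloadConditions:
            - key: targetBranch
              operator: Equals
              value: release/qa
---
# Hypothetical cron trigger keeping the Pre-QA (dev) environment
# on the latest version.
trigger:
  name: nightly-dev-sync
  identifier: nightly_dev_sync
  orgIdentifier: default
  projectIdentifier: Operations
  pipelineIdentifier: golden_k8s_pipeline
  inputSetRefs:
    - my_service_dev
  source:
    type: Scheduled
    spec:
      type: Cron
      spec:
        expression: 0 2 * * *   # illustrative nightly schedule
```

Pinning each trigger to an environment-specific input set is what keeps the automation hands-off for developers.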

Benefits of the New Changes

We embarked on an exciting journey to optimize our Continuous Delivery pipelines, which brought about a host of remarkable benefits:

Version-Controlled Pipelines: Pipelines and Templates are now backed up in GitHub, providing version control, collaboration, and easy rollbacks, significantly improving the development process.

We manage the pipeline YAML in GitHub and maintain different branches as versions of the pipeline.

We also manage the template in GitHub and maintain versions of it via Harness-based version control.

Uniform Deployment Process: All Microservices are now deployed using the same CD Stage Template. This standardization simplifies the deployment process and ensures consistency across services.

Governance through Policy as Code: To enforce STO security scan results in a deployment pipeline, we implemented governance through Policy as Code, ensuring security compliance across all deployments. We used the Policy step to enforce this.
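
As an illustrative sketch of that gate (the policy set name and payload expression are hypothetical, not our real configuration), a Policy step embedded in the deployment stage could look roughly like:

```yaml
# Hypothetical Policy step evaluating STO scan output against an
# OPA policy set before the deployment proceeds. Names are illustrative.
- step:
    name: Enforce Security Scan Policy
    identifier: enforce_sto_policy
    type: Policy
    spec:
      policySets:
        - sto_scan_gate          # illustrative policy set
      type: Custom
      policySpec:
        payload: <+pipeline.stages.scan.spec.execution.steps.sto_scan.output>
```

If the policy set evaluates to a failure, the pipeline halts before the rollout step runs.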

Single Unified Pipeline: Instead of managing multiple pipelines, we now use a single pipeline to deploy all services. This streamlines the deployment process and reduces complexity.

Automated JIRA Updates: A common JIRA Automation is now in place to automatically update JIRA tickets and status, reducing manual overhead and improving collaboration.

Unified Slack Notifications: We implemented a common Slack notification system to update deployment and release progress, enabling real-time communication and visibility.

Consistent Deployment Style: All applications now follow a rolling deployment approach, ensuring smooth and reliable software rollouts.

Sanity Pipelines for Regression Testing: With sanity pipelines now running for every deployment, regression and breaking changes can be quickly caught and addressed.

Efficient Multi-Service Deployment: We can deploy multiple services through one stage to a single environment or to multiple environments, streamlining the deployment process further.

Granular RBAC to ensure developers have access to their own resources: Our CD configuration lives inside a single Harness Project called Operations, which simplifies management because we use the same resources across all products. We add user groups that align to the development teams; users authenticate via Okta and are granted access to our internal Harness instance. We then configure Resource Groups and Roles to scope down which users have access to their service, pre-prod and prod environments, templates, and connectors.

What's Next for Harness CD Evolution?

Here at Harness, we are not resting on our laurels; we have an exciting roadmap for further improving our Continuous Delivery processes:

Transition to Helm Charts: We will be moving from raw Kubernetes manifests to Helm Charts, which promises better modularity and scalability for deployments. We want to move more configuration into our Helm Charts and maintain a different values.yaml per environment. We are also decoupling the management of manifests from artifacts: it will be easier to manage the Helm Chart as the deployable unit rather than managing both the artifact (Docker image version) and the Helm Chart version separately.
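
A sketch of what a per-environment values file might hold (the registry path, image tag, and settings below are hypothetical examples, not our real values):

```yaml
# Illustrative values-qa.yaml for one service; one such file per
# environment keeps configuration alongside the chart.
image:
  repository: us-docker.pkg.dev/acme/images/my-service   # hypothetical GAR path
  tag: "1.42.0"
replicaCount: 2
env:
  LOG_LEVEL: debug
resources:
  requests:
    cpu: 250m
    memory: 512Mi
```

With the chart as the deployable unit, bumping a release means bumping one chart version instead of coordinating a manifest change and an image tag separately.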

Unified Deployment Strategy: We aim to deploy our SaaS offering similarly to our Self-Managed Platform offering, using Helm Charts for consistency and simplification.

Artifact Management with Google Artifact Registry: To further enhance artifact management, we plan to store our Helm Charts in an OCI-compliant repository in Google Artifact Registry.

Simplified Developer Inputs: Reducing the inputs developers must provide to run the pipeline will save time and minimize manual intervention, streamlining the development process. We want to move more configuration to the service and the environment via the overrides feature. This will reduce the number of inputs a developer needs to provide, because Harness can compute them automatically in the backend.
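
To illustrate the overrides idea (the variable names, connector, and paths below are assumptions, not our actual setup), an environment-level override might look roughly like this:

```yaml
# Hypothetical environment-level override: values that apply to every
# service deployed into QA, so developers don't supply them at run time.
environment:
  name: qa
  identifier: qa
  type: PreProduction
  overrides:
    variables:
      - name: LOG_LEVEL
        type: String
        value: debug
    manifests:
      - manifest:
          identifier: qa_values
          type: Values
          spec:
            store:
              type: Github
              spec:
                connectorRef: github_connector
                paths:
                  - overrides/qa/values.yaml
```

Anything declared here is resolved by Harness at deploy time, so it disappears from the list of inputs a developer has to think about.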

Harness' Continuous Delivery pipeline evolution is a testament to our commitment to continuous improvement. It's our core value! By addressing the challenges we faced head-on, we have created a streamlined and efficient software deployment process. With version-controlled pipelines, governance through Policy as Code, and unified deployment strategies, we are now equipped to deliver software with greater consistency, security, and agility. As we embark on further developments with Helm Charts and simplified inputs, we expect more performance and developer-productivity gains. Stay tuned for further updates on our journey!
