May 12, 2020

Spinnaker to Harness: A Conversion Story - Part 1

    One of the biggest challenges that anyone faces when trying to adopt a new methodology is figuring out how their old methodology compares. What pieces of the old methodology can be converted, if desired, or if that is even possible? What will user adoption be like for those who liked the old methodology? How much effort will it take to convert everyone and everything over?

    Some will choose the “Cold Turkey” approach, where they cut off the old completely, in one swipe. Others will follow the “Happy Path” approach, where the only official and supported process is the new one, but others may choose to build and support their own process, as long as their team is productive according to certain predefined metrics. Lastly, others will pursue the “Slow Conversion” approach, where they will pick a certain group or project to start on the new process, and then slowly add other groups or projects over time, while letting everyone know what will be coming in the future.

    Depending on your style of approach (and there is no one-size-fits-all with the adoption of new methodologies), the best start is to understand the set-up and hierarchy differences between the old and new, the conversion process between the old and new, and, most importantly, the conceptual differences between the old and new.

    For this post, we will be discussing the set-up and hierarchical differences between Spinnaker and Harness. This is intended to help the reader grasp the required effort when moving over and why everything is not always a 1-to-1 process.

Spinnaker

    The first piece to understand is what the Spinnaker architecture is, what is required from an infrastructure perspective, the maintenance involved in setting up and maintaining Spinnaker, the install process to make it Production-ready, and the different entities in Spinnaker.

Architecture

Spinnaker is made up of 11+ different microservices:

Deck

  • Browser-based UI

Gate

  • API Gateway
  • Talks to Front50, Igor, Echo, Orca, Clouddriver, Rosco, and Kayenta microservices
  • Minimum resource requirements: ~250m CPU and ~1Gi Mem

Orca

  • Orchestration Engine
  • Talks to Front50, Fiat, Clouddriver, Rosco, and Kayenta microservices
  • Minimum resource requirements: ~1 CPU and ~4Gi Mem

Clouddriver

  • Mutating calls to Cloud Providers and indexing/caching all deployed resources
  • Talks to the Fiat microservice
  • Minimum resource requirements: ~1 CPU and ~4Gi Mem

Front50

  • Metadata persistence of applications, pipelines, projects, and notifications
  • Talks to the Fiat microservice
  • Minimum resource requirements: ~250m CPU and ~4Gi Mem

Rosco

  • The bakery for immutable Images or Image Templates for cloud providers
  • Minimum resource requirements: ~250m CPU and ~1Gi Mem

Igor

  • Continuous Integration Tool integration and triggering
  • Talks to the Echo microservice
  • Minimum resource requirements: ~250m CPU and ~1Gi Mem

Echo

  • Event Bus
  • Talks to the Front50 and Orca microservices
  • Minimum resource requirements: ~250m CPU and ~1Gi Mem

Fiat

  • Authorization Service

Kayenta

  • Canary Analysis
  • Minimum resource requirements: at least ~1 CPU and ~4Gi Mem (because it requires Orca, it may be more)

Halyard

  • Spinnaker Configuration Service
  • Halyard CLI talks to Halyard Daemon
  • Minimum resource requirements: ~200m CPU and ~2Gi Mem
  • NOTE: To have Halyard update Spinnaker, it will spin up a headless Spinnaker to update your Spinnaker.

    Each of these services has its own version and must be updated properly, typically handled by Halyard, since there is little/no backward compatibility between the different microservices.
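To put the minimums listed above in perspective, here is a quick back-of-the-envelope sum of the per-service requirements (a sketch only; Deck and Fiat are excluded since no minimums are listed for them):

```shell
# Back-of-the-envelope sum of the documented per-service minimums above.
# CPU is in millicores (1 CPU = 1000m), memory in Gi. Deck and Fiat are
# excluded because no minimums are listed for them.
#              Gate  Orca  Clouddriver Front50 Rosco Igor Echo Kayenta Halyard
total_cpu=$((  250 + 1000 + 1000     + 250   + 250 + 250 + 250 + 1000 + 200 ))
total_mem=$((  1   + 4    + 4        + 4     + 1   + 1   + 1   + 4    + 2   ))
echo "Minimum Spinnaker footprint: ${total_cpu}m CPU, ${total_mem}Gi memory"
```

Even at the stated minimums, that is roughly 4.5 CPUs and 22Gi of memory before any scaling for real production traffic.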

    Other things Spinnaker requires to be Production-ready include setting up scaling for the different microservices, having an HA/DR setup for Spinnaker (in case it goes down), and monitoring Spinnaker using your own metric or log solution.

Install Process

    To start with the install, Spinnaker requires a separate install, configuration, and update management appliance called Halyard, which must be installed on a machine separate from the cluster that you are putting Spinnaker in. In fact, all production-capable deployments of Spinnaker require Halyard if you want support from the community.

    Once Halyard is installed, you’ll begin to set up your Accounts (Cloud Providers) where your applications will be deployed. These cloud providers are required for Spinnaker to do anything. Note that this is all done via Halyard, before the user has Spinnaker actually stood up.

    After the Cloud Providers/Accounts have been added, you will need to tell Halyard what type of environment Spinnaker needs to be installed in. A distributed install, such as Kubernetes, is the recommended approach for production, especially because scaling will be required to handle the traffic.

    The next required piece for Spinnaker is storage, where the user will set up an external storage system for Spinnaker to store any data that needs to persist beyond upgrades. This data is extremely important, so it is recommended that the user choose a storage option that has redundancy.

    After Halyard is installed, the Cloud Provider is set up, the environment and distribution type are chosen, and the external storage provider is created and connected, you can now install Spinnaker and access the UI.

    Although that is the general install process, there are additional steps if the user wants to set up the image bakery, back up the configuration, enable security, set up CI connections, enable monitoring, etc.
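The flow above (add a provider account, choose a distributed install, configure external storage, then deploy) can be sketched with the Halyard CLI. The account name, kubectl context, bucket, region, and version below are all placeholders, assuming a Kubernetes target and S3 storage:

```shell
# 1. Add a cloud provider account (required before Spinnaker can do anything).
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
    --context my-k8s-context

# 2. Choose a distributed install into that cluster (recommended for production).
hal config deploy edit --type distributed --account-name my-k8s-account

# 3. Point Spinnaker at redundant external storage for persisted data.
hal config storage s3 edit --bucket my-spinnaker-bucket --region us-east-1
hal config storage edit --type s3

# 4. Pick a Spinnaker version and deploy.
hal config version edit --version 1.19.4
hal deploy apply
```

This only covers the baseline; the image bakery, security, CI connections, and monitoring each have their own `hal config` sections.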

Entities

Application

    Spinnaker defines an Application as “a collection of clusters, which in turn are collections of server groups. The application also includes firewalls and load balancers”. This means that the user will require their teams to organize their applications, services, or microservices based on the environments that they will deploy to. This will make set-up significantly easier in the long run, since pipelines are rather static.

Cluster

    A Spinnaker Cluster is a manual grouping of Server Groups, not a Kubernetes cluster. This is a result of Spinnaker originally being built for AWS AMIs by Netflix.

Server Group

    Not to make it too confusing, but a Server Group identifies the deployable artifact and the configuration settings for that artifact. Again, this is a result of Spinnaker originally being built for AWS AMIs by Netflix. What this means for Kubernetes users is that a Server Group is a group of pods related to an artifact.

Load Balancer

    The Load Balancer is the ingress protocol and port range that balances traffic across Server Groups (with optional health checks). In the world of Kubernetes, this would relate to Ingress and Services.

Firewall

    The Firewall defines network traffic access, effectively acting as a set of firewall rules.

Pipeline

    A Pipeline is the main deployment definition for Spinnaker. It chains together actions (stages) or other pipelines.

Stage

    A Stage is any action in Spinnaker and requires a Pipeline for it to be executed. Typically the user will string multiple stages together to complete the deployment. It is important to note that Stages are relatively static: if you create a deployment pipeline with multiple stages in it and want that pipeline available across multiple environments, you will need to clone the pipeline for every service that has multiple environments.

Administrative Requirements

    The last important piece related to Spinnaker is the administrative requirements associated with supporting and maintaining the solution and its use. Of the many customers that either tried or used Spinnaker for their CD process, one recently shared that their entire team was dedicated to the administrative effort around Spinnaker. Additionally, a Professional Services company that works solely on Spinnaker stated:

But while the open-source platform is technically free, installing and managing Spinnaker has a real cost of its own. Spinnaker consists of ten sub-services (plus additional external dependencies like S3 and Redis). The initial setup, installation, and implementation of these services can be a challenge, while ongoing maintenance, upgrades, and scaling of the platform can all consume significant attention and resources. Moving to microservices and running continuous delivery at scale means using many other services that are also open source, which also require your engineers’ attention. If you have even three engineers dedicated to managing Spinnaker (and we often see Global 2000 companies with many more), you are easily spending $600,000+ per year on Spinnaker.

Stu Posluns - Apr 15, 2019

    Spinnaker carries a real cost of ownership that most companies don't consider when starting out. Open-source software is often assumed to be synonymous with “free,” but the hidden cost of ownership can quickly become a burden that is difficult to bear.

Harness

    The next piece to understand is what the Harness architecture is, what is required from an infrastructure perspective, the maintenance involved in setting up and maintaining Harness, the install process to make it Production-ready, and the different entities in Harness.

Architecture

    When setting up Harness, there are only two pieces to consider: the Manager (Harness Cloud) and the Delegate (your VPC).

Install Process

    To start with the install process, go to your Harness Account, navigate to the Harness Delegate section in Setup, download the Delegate, and run the install command in the desired location.
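For a Kubernetes target, those steps look roughly like the sketch below, assuming you downloaded the Kubernetes Delegate archive from the Delegates setup page (the file and namespace names are the typical defaults and may differ in your account):

```shell
# Unpack the Delegate archive downloaded from the Harness Setup page.
tar -zxvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes

# Install the Delegate into the cluster it will deploy to.
kubectl apply -f harness-delegate.yaml

# Verify the Delegate pod came up and connected back to the Harness Manager.
kubectl get pods -n harness-delegate
```

That is the entire infrastructure footprint on your side; the Manager, UI, and persistence all run in Harness Cloud.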

Entities

Application

    In Harness, similar to Spinnaker, an Application is a logical grouping of the other entities. However, the Application in Harness is intended to be highly templatized, making for applications that can be shared across all Cloud Providers, Environments, and other integration points.

Service

    In Harness, a Service is your artifact and base configuration. In the case of Kubernetes, this would be the core Artifact (Docker Image) and the manifest. Additionally, the Service is intended to be templatized as much as possible so that it can be used across any environment and any workflow or pipeline.

Environment

   The Environment in Harness is a logical grouping of infrastructure(s) to deploy to. The easiest way to understand the Environment is based on RBAC groupings. If I want a developer to deploy to the Dev Environment on-demand across multiple clouds or clusters, I would create a Dev Environment, specify the development infrastructure(s), and then grant the permissions to execute deployments to Dev. Additionally, the Environment is intended to be templatized as much as possible so that it can be used across any Service and any workflow or pipeline.

Infrastructure Definition

    The Infrastructure Definition in Harness is exactly that: how you define your infrastructure. Sometimes this is a namespace in a cluster or a cluster as a whole; other times it is a set of VMs/Hosts, etc. Additionally, the Infrastructure Definition is intended to be templatized as much as possible so that it can be used across any Service and any workflow or pipeline.

Workflow

    In Harness, a Workflow is the deployment definition. This would be any pre-deployment, deployment, and post-deployment steps, as well as notification rules, failure strategies, and Workflow Variables. Additionally, the Workflow is intended to be templatized as much as possible so that it can be used with any Service, across any Environment, and reusable in a pipeline.

Pipeline

    The Pipeline in Harness is the idea of grouping the Workflows and/or Approvals into a logical process of deployments. This might be the case where the user wants to deploy across multiple environments, maybe multiple services in a dependency sequence, or even deploying a set of services for a new customer/environment. Additionally, the Pipeline is intended to be templatized as much as possible so that it can be used across any Environment, any Service, and reusing Workflows and Approvals.

Triggers

    The Trigger in Harness is exactly what it sounds like: it triggers a Workflow or Pipeline to be executed. There are many different types of Triggers, but one of the main benefits is that the user can pass data through the Trigger into the Service, Environment, Workflow, and/or Pipeline.

Infrastructure Provisioner

    Many users are leveraging some form of Infrastructure-as-Code solution (i.e. Terraform or CloudFormation). In Harness, you have the ability to leverage your Terraform or CloudFormation configurations in your Delivery processes. This might be provisioning new environments, creating ephemeral environments, or bootstrapping environments. Additionally, the Infrastructure Provisioner is intended to be templatized as much as possible so that it can be used across any environment and any workflow.

Template Library

    As users introduce more automated testing or even require some extra bootstrapping steps that IaC is not able to do right now, Harness gives the user the ability to add those scripts to the Template Library, templatize them, and reuse them in any Workflow as needed.

Administrative Requirements

    The last important piece related to Harness is the administrative requirements associated with supporting and maintaining the solution and its use. No solution is completely administration-free, and this is true for Harness: it requires some power-users who know how to navigate Harness, as well as some infrastructure. However, with a dedicated Customer Success team, the training and enablement process, the design process, and even troubleshooting or ticketing are easy to go through. Think about Harness this way: you have a team of 100+ dedicated Engineers working on your CD process so you don't have to. Just look through these amazing Customer Success Stories to find out more!

Summary

    If you have gone through the process of setting up Spinnaker, scaling Spinnaker, upgrading Spinnaker, and maintaining and administering Spinnaker, you will notice two major differences in Harness. First, the set-up, maintenance, and upgrade process for Harness is SIGNIFICANTLY easier than Spinnaker's. And second, the mentality of how to structure your Delivery Flow in Spinnaker is more snowflake-oriented (more on that in a future post), whereas in Harness it is intended to be as templatized and reusable as possible, forgoing the snowflake requirements.
