Chances are, if you are reading this post, you or your team have a hand in the Continuous Delivery process. Continuous Delivery is the culmination of several application and infrastructure disciplines, executed in an automated fashion.
Build, release, a plethora of testing, monitoring, approvals, and the distributed infrastructure/platforms that power our software are all part of our Continuous Delivery journey. Each one of those disciplines requires expertise, but as the burden on software engineering increases, how do we organize to deliver and own Continuous Delivery?
Continuous Delivery Onus
Matthew Skelton's DevOps Topologies is a great set of topology diagrams digging into how DevOps teams are organized. Matthew goes into detail, pulling from organizational/cognitive sciences such as Conway's Law and Cognitive Load. If you are unfamiliar with Conway's Law, the tl;dr is that system design mirrors communication/team structure. The tl;dr on Cognitive Load is the level of effort required for someone to learn and retain something, and their processing capacity as such.
As an example of Conway's Law and Cognitive Load at work, your Application Security Engineer would gravitate towards DevSecOps rather than infrastructure automation, most likely because the AppSec Engineer is on a security team [Conway's Law] and sharing expertise in a security domain on a new project is easier for that person to learn [Cognitive Load].
Automating application security is just one potential pillar of Continuous Delivery. Start to add on the other disciplines and the host of software engineers who rely on Continuous Delivery pipelines, and ownership starts to get muddy. Each of the teams below has some skin in the game when it comes to Continuous Delivery ownership; Continuous Delivery crosses that many boundaries.
Development Tools Teams
A natural fit for making sure the tools that further software engineering efficiency are in place. For example, development tools teams manage efficiency items such as source code management and developer environments. As a SaaS offering, the Harness Platform takes a lot of the operational work out of the equation. Depending on the size of the organization, there might not be a team focused on engineering efficiency.
Platform Engineering Teams
Depending on the organization, this is similar to the development tools team. For example, platform engineering teams build common infrastructure such as your platform-as-a-service or Kubernetes cluster(s). Platform engineering teams would also give guidance on items like logging, observability, and performance monitoring. Looking at Netflix's model of Full Cycle Developers, the platform engineering discipline is key.
Development Teams
As one of the main consumers of the pipelines, development teams are where leaner organizations might look for ownership and self-service over tooling. Because of the physical or cloud infrastructure needs of their applications, development teams would still need some assistance from an operations team. The Harness Platform is designed to orchestrate and integrate the multiple disciplines that together equal Continuous Delivery. For development teams that don't have access to certain expertise, the Harness Platform is opinionated enough to guide you along the way and designed for you to try different scenarios to get your pipelines right.
Operations Teams
With the lines always blurring in the DevOps movement, operations teams have historically focused on the infrastructure that applications run on top of, and on monitoring that infrastructure. With the insights provided by Continuous Delivery pipelines residing in the Harness Platform, operations teams can spot trouble brewing early. The Harness Platform can also provide those insights as a feedback loop to other teams across the organization. Operations teams are certainly experts in infrastructure, and that expertise can help guide other teams on infrastructure requirements and templating in the Harness Platform.
Release Engineering Teams
Throughout my career, I have interfaced less and less with release engineers, to the point that today I don't personally know one at any of my previous firms. The onus has shifted to the teams owning the applications to release their software in an efficient and safe manner. The Harness Platform is akin to having an extremely talented release engineer on hand. If your organization still has a release engineering team, Harness can lift the burden of release engineers needing to babysit individual deployments.
DevOps Teams
This mythical creature of a team is also a natural fit. Though, going back to the DevOps Topologies, there are many ways to organize DevOps organizations and teams. Viewed as a catch-all for anything other than feature development and testing, the burden on DevOps team members is certainly high. Where did all those release engineers go? Many of them joined DevOps teams and are still focusing on release engineering problems. Depending on which DevOps Topology your organization fits, members of multiple teams could be involved, further increasing the burden that Harness can help reduce.
Each organization is different. The delivery of software falls on so many teams, but intrinsically, as engineers, we can take action. I tend to act on data, but there are times in the SDLC, when we are building something for the first time, that we are pretty reactionary.
When do you take action?
Once my build cleared my local machine, the next time I would take action was when storm clouds appeared over my Jenkins build, failing some random rule. By the time the storm clouds appeared, I would be frustrated at getting feedback later in the build pipeline than I expected, aka a reactionary response.
Failing early should be the model we follow. With little visibility beyond a storm cloud, failing early can still be frustrating; in my case, we might have failed too early for no good reason. If we can get information and metrics earlier in the development or deployment cycle, we can certainly take action.
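The fail-early idea can be sketched in a few lines: run pipeline stages in order, stop at the first failure, and report exactly which stage failed and why, so the feedback is both early and actionable. This is a hypothetical illustration (the stage names and checks below are invented, not a real Harness or Jenkins API):

```python
# A minimal fail-fast pipeline sketch. Each stage is a (name, check)
# pair; the run halts at the first failing stage and reports a clear
# reason, rather than letting later stages run and muddy the signal.

def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failure."""
    for name, check in stages:
        ok, reason = check()
        if not ok:
            return f"FAILED at '{name}': {reason}"
    return "PASSED"

# Hypothetical checks standing in for real build/test/scan steps.
stages = [
    ("compile", lambda: (True, "")),
    ("unit-tests", lambda: (True, "")),
    ("security-scan", lambda: (False, "vulnerable dependency found")),
    ("deploy-to-staging", lambda: (True, "")),  # never reached
]

print(run_pipeline(stages))
```

The key design point is that the deploy stage is never reached: the failure surfaces at the security scan, with its reason attached, which is the kind of clear rationale a well-built pipeline should hand back to the developer.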
A popular mantra these days is "shifting left" towards the development team. For example, security shifts left when code and vulnerability coverage move towards your development environment. Democratizing and shifting your Continuous Delivery pipeline left means giving you a clear rationale for why something failed or passed.
Delivering software involves a lot of trial and error. Submitting to a pipeline multiple times before we get something right is OK. As engineers, we do have ultimate responsibility for our craft and the features we build.
Starts with You
As cliche as this sounds, as engineers we strive to better our craft, and Continuous Delivery starts with you. As I navigated my software engineering career, I learned that having a sense of ownership is important. Yet pressure surrounds us: the ever-quicker push for time to value, the disparate and granular platforms we deal with, and shortening project timelines all make our feature delivery jobs ever more difficult.
As software engineers, we intrinsically [Cognitive Load again] want to move the needle for the features and applications we develop, though we face a lot of pressure, with decreasing time on projects and Sprints that make it seem like we can't survive the marathon.
As with anything, after you do something a few times, you get better at your craft. Continuous Delivery with the Harness Platform brings together the multiple disciplines needed for modern software delivery in an easy-to-consume and easy-to-share format. Like the tree falling in the woods with no one there to hear it, are our features really there if no one can access them? The Harness Platform allows us to standardize and democratize our Continuous Delivery goals and pipelines.
Democratize with Harness
The mission of Harness is to Democratize Continuous Delivery. Putting a robust, flexible, and transparent Continuous Delivery pipeline in the hands of those who need it is the power of the Harness Platform. If you have not taken the Harness Platform for a spin, feel free to sign up today. Also, join our Community so we can all better Continuous Delivery together.