To say that I have been around the block in the Federal Government is an understatement. I have had the privilege of working on – and for – some of the most secretive projects, and with, quite honestly, the most brilliant people in the industry. Over 15 years spanning the public and private sectors, including time at tech powerhouses like AWS and Red Hat, I can honestly say the capabilities produced by our Intel, DOD, and Military entities are nothing short of astonishing. Yet while our military and intel apparatus is unmatched, there is an ominous problem: getting those mission capabilities out the door fast enough to meet an ever-growing technological threat.

For the United States Government, the traditional IT software process is changing rapidly. It is imperative for our public entities to adopt new technology and bridge the gap between their legacy processes and the modern application development methods of their private-sector counterparts. I guess the old adage “better late than never” is relevant here.

That being said, the way applications are built, tested, and deployed today is worlds different than it was a decade ago! Many of our federal entities are… well… overdue (to be tactful) for a change in the way their developers, IT operators, and security teams deliver capabilities and requirements. I dare say it is paramount for Government organizations to embrace relevance, adopt a culture of consistent collaboration, and institute a Cloud Center of Excellence if they are to achieve a true digital transformation.

Challenge in Federal Government #1: Culture

Culture: the customs, arts, social institutions, and achievements of a particular nation, people, or other social group. – Webster’s Dictionary

If we apply that definition to our DevOps organizations, then how we adopt a new process, approach automation, and institute new tooling shapes the very culture of the software process in that organization. Let me provide a practical example.

I like to correlate the DevOps concept to the following (bear with me on my story of becoming a rockstar): I like to play guitar. If I went to the PRS guitar factory and asked Mr. Paul Reed Smith to build me a new guitar with the best pickups, the straightest bridge, and so on, but never practiced my craft, I would really just have bought an expensive capability that I don’t know how to use.

The same can be said about buying or implementing software at scale! If you don’t implement modern SDLC tooling, repeatable processes, and attainable goals, you’ll never improve. Expecting to succeed by sprinkling magic developer dust over bad operations processes and cobbled-together automation frameworks will not do. Knowledge says, “Well, I just bought this {insert shiny object here}; it’ll satisfy all my needs” – but wisdom says new products and toolchains alone will not solve the issue at its root cause. We need a culture change!

Challenge in Federal Government #2: Technical Debt

This leads me to our next challenge that the US government is currently facing… Let’s talk about debt for a second!

Technical Debt: A concept in software development that reflects the implied cost of additional rework caused by choosing an easy solution now instead of using a better approach that would take longer. – Somewhere on the webs

Debt stinks, and the easiest path to debt reduction is to change your approach to the decision-making process itself. Jez Humble said in a Twitter post two years ago, “Code is a liability, not an asset.” That is an interesting way of looking at it. In the finance world, liabilities are mitigated or offset by assets. If we apply that concept to our technical capabilities, people – our developers – are the assets, and code is just the stuff they produce, aka, the liability. So how do we retain and protect our assets while balancing the liabilities they produce? By adopting automated architectures that work on behalf of the developer without adding complexity – that’s how!

The Solution: Automated Orchestration

Automated orchestration tools were designed to simplify the complex, mundane processes around delivering a given artifact. While managing a team of DevOps Engineers, Harness’ CEO and co-founder, Jyoti Bansal, saw that they had the knowledge and the capability to accomplish anything – but not the time to keep up with ever-changing market demands and tech stacks. In general, the PaaS industry figured out a better way to package, store, and run code at scale through microservice architectures using container platforms like Kubernetes, but the problem remained: how to get said code from letters and symbols in some repo on the interwebs to a running, reusable, revenue-generating offering, fast.

So, How Can Harness Help?

Harness fills the gap in the monolith-to-microservice delivery space. The journey of building smaller deployable units has been underway for a while, and going from a monolith to microservices ultimately yields more deployable artifacts than ever before. An entire application or platform can be decomposed into smaller functional areas, each of which can be scaled and deployed independently. But as applications are decomposed into smaller pieces (microservices), deployment complexity increases: the more pieces you have, the more you have to deploy.
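To make that growth in deployment complexity concrete, here is a back-of-the-envelope sketch in Python. The numbers are entirely hypothetical, and the helper function is mine, not anything from Harness – it just shows how deployment operations multiply when one artifact becomes many:

```python
# Illustrative arithmetic (hypothetical numbers): a monolith ships one
# artifact per release, while a decomposed system ships one artifact
# per service per release, across every environment.

def deployments_per_week(artifacts: int, environments: int, releases_per_week: int) -> int:
    """Total deployment operations a team must execute each week."""
    return artifacts * environments * releases_per_week

# Monolith: 1 artifact promoted through dev/stage/prod, released weekly.
monolith = deployments_per_week(artifacts=1, environments=3, releases_per_week=1)

# The same system split into 30 microservices, each released weekly.
microservices = deployments_per_week(artifacts=30, environments=3, releases_per_week=1)

print(monolith)       # 3
print(microservices)  # 90
```

Same application, same release cadence – but thirty times the deployment operations. That is the gap automated orchestration is meant to close.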

Harness has built self-service, convention-based deployment mechanisms that make scaling deployments easy. It erases complexity around deployment strategies, verification steps, and rollbacks with the click of a button – or a push of code to a repo.
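To give a feel for what “convention-based” means, here is a minimal sketch – not Harness’ actual implementation, and every name in it is hypothetical – of how a pipeline can route a pushed branch to an environment and deployment strategy purely by convention, so new services need no bespoke configuration:

```python
# Hypothetical convention-based routing: the branch prefix alone
# determines the target environment and deployment strategy, rather
# than per-service pipeline configuration.

CONVENTIONS = {
    "main":    {"environment": "prod",  "strategy": "canary"},
    "release": {"environment": "stage", "strategy": "blue-green"},
}

def route(branch: str) -> dict:
    """Map a pushed branch to a deployment plan; default to dev rollouts."""
    prefix = branch.split("/")[0]
    return CONVENTIONS.get(prefix, {"environment": "dev", "strategy": "rolling"})

print(route("main"))           # {'environment': 'prod', 'strategy': 'canary'}
print(route("feature/login"))  # {'environment': 'dev', 'strategy': 'rolling'}
```

The design point is that the team encodes its delivery rules once, and every subsequent push – from any service – inherits them automatically.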

If you are interested in how Harness can accelerate your software process – and in getting ship done – schedule a demo and get a free t-shirt! The t-shirts are pretty awesome too.