I will preface this post by admitting that I was bringing down services in production before Chaos Engineering was an accepted practice. As our systems grow more complex and distributed, the “fog of development” inevitably appears. It is similar to the fog of war, the military term for uncertainty in situational awareness, i.e. the difficulty of attaining precision and certainty. In the fog of development, as we move faster by increasing agility, confidently understanding how our complex systems behave becomes more challenging with every change we make.

To avoid being mired in analysis paralysis, at some point we have to make a change and learn from it. As humans, though, we come and go on projects, and the systems we work on are the aggregate of potentially years of decisions made before and after our time. Chaos Engineering is how we build systemic confidence in an end-to-end system that has been shaped by many opinions before ours, i.e. how we cut through the fog. Chaos can take many forms, and one popular form is the black swan.

What is Chaos? – Hello Black Swan

A popular read for Site Reliability Engineers [SREs] is Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable (2007), in which Taleb introduces the black swan metaphor. Taleb would classify as black swan events things like a sudden natural disaster or, in business at the time he published the book, Google’s astounding success. A black swan event has three characteristics: it is unpredictable, its impact is massive, and when it is over we devise an explanation that makes it seem less random.

When dealing with the fog of development, we are prone to the fallacies of distributed computing, a set of false assumptions catalogued by computer scientist Peter Deutsch. Among them: the network is reliable, latency is zero, bandwidth is infinite, and there is only one administrator. Distilled down, the fallacies amount to assuming your services will be consistent and available at all times. As we know, systems and services come up and down all the time, but when we are deep in the minutiae of developing the unknown we can easily forget this.

Let’s say, for example, that we are building features that rely on Amazon S3 for object storage. If the service does complex processing and the final output is writing or updating an object in S3, we as engineers might assume that S3 will simply be there. We test our features up and down but provide less sophisticated test coverage for the S3 portion. Amazon Web Services had a black swan event of its own in 2017 when S3 suffered a major outage. Something we assumed would always be there [even with a degraded performance/write SLA] was not, and the fallacies of distributed computing came back to bite us.
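
To make that concrete, here is a minimal sketch (Python with boto3; the bucket name and the decision to retry with exponential backoff are my assumptions, not a prescribed pattern) of treating the S3 write as something that can fail rather than something that is always there:

```python
import time

import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

s3 = boto3.client("s3")


def write_result(bucket: str, key: str, body: bytes, retries: int = 3) -> bool:
    """Try to persist the final output to S3, backing off between attempts.

    Returns True on success; the caller decides what to do next (spool to
    local disk, park the work on a queue, etc.) instead of assuming S3 is
    always available.
    """
    for attempt in range(retries):
        try:
            s3.put_object(Bucket=bucket, Key=key, Body=body)
            return True
        except (ClientError, EndpointConnectionError) as err:
            # S3 might be throttling, degraded, or unreachable entirely.
            print(f"S3 write attempt {attempt + 1} failed: {err}")
            time.sleep(2 ** attempt)  # simple exponential backoff
    return False
```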

The S3 outage shone a light on the need to exercise all parts of our stack, even the parts that don’t seem obvious, perhaps because of our perception/fog around the fallacies of distributed computing. Chaos Engineering brings controlled chaos so we can shake these types of events out.

Estimating Chaos – Hello Chaos Engineering

Chaos Engineering is the science of intentionally injecting failure into systems to gauge their resiliency. An informative creed, the Principles of Chaos Engineering, emphasizes comparing a steady-state baseline against a hypothesis about what will happen as chaos is injected.
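
The baseline-versus-hypothesis loop is simple to sketch. In the toy Python below, the health endpoint and the 95% threshold are hypothetical, and `inject_failure` is a placeholder for whatever chaos tooling you actually use:

```python
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical health endpoint


def success_rate(samples: int = 50) -> float:
    """Steady-state metric: fraction of health checks returning HTTP 200."""
    ok = 0
    for _ in range(samples):
        try:
            with urllib.request.urlopen(SERVICE_URL, timeout=2) as resp:
                ok += int(resp.status == 200)
        except OSError:
            pass  # a failed check simply counts against the rate
    return ok / samples


def inject_failure() -> None:
    """Placeholder: real chaos tooling would disrupt a dependency here
    (drop network traffic, add latency, kill a node, etc.)."""


# 1. Establish the baseline steady state.
baseline = success_rate()

# 2. Inject chaos and observe the same metric.
inject_failure()
observed = success_rate()

# 3. Hypothesis: the system degrades by no more than 5% of its baseline.
print(f"baseline={baseline:.2%} observed={observed:.2%}")
if observed < 0.95 * baseline:
    print("Hypothesis rejected: the system degraded more than expected.")
```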

Some cursory Googling early in your Chaos Engineering journey will turn up a tool called Chaos Monkey. Created by Netflix in 2011, Chaos Monkey is credited with bringing Chaos Engineering into the mainstream. Chaos Monkey would, for example, randomly terminate running instances in the public cloud to test application resiliency. Fast forward to today and there are entire Chaos Engineering platforms, such as Gremlin, that package up much of the Chaos Engineering science for us.
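
The core idea is simple enough to sketch. Assuming, hypothetically, that instances opted in to chaos carry a `chaos=enabled` tag, a Chaos-Monkey-style script might look like the Python below; the real Chaos Monkey has far more safeguards (schedules, opt-outs, tracking) than this toy:

```python
import random

import boto3

ec2 = boto3.client("ec2")


def pick_victim() -> str | None:
    """Pick one random running instance that has opted in to chaos."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos", "Values": ["enabled"]},  # hypothetical opt-in tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instances = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    return random.choice(instances) if instances else None


victim = pick_victim()
if victim:
    print(f"Terminating {victim} to verify the fleet tolerates instance loss")
    ec2.terminate_instances(InstanceIds=[victim])
```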

I am always a big fan of “awesome lists” on GitHub, and for those learning about Chaos Engineering there is one: Awesome Chaos Engineering. As we continue to move towards highly distributed architectures, the number of moving parts increases. More mature testing methodologies such as load testing are there to stress our systems, but Chaos Engineering shines a light into different areas.

Are Load Tests similar to Chaos Engineering Tests?

Certainly, load can bring on chaos of its own. We commonly design our systems to be elastic in multiple pieces, e.g. spinning up additional application, compute, networking, and persistence nodes to cope with load. That assumes everything comes up at the same/appropriate time so we can get ahead of the load.

In the computer science world, the Thundering Herd problem is not new, but it manifests more commonly as we move towards more distributed architectures. At the machine level, a Thundering Herd occurs when a large number of processes are kicked off and another process becomes the bottleneck, e.g. it can handle one and only one of the new processes at a time. In a distributed architecture, a Thundering Herd might look like this: your messaging system can ingest a large number of messages/events at a time, but processing/persisting those messages becomes the bottleneck. If you are overrun with messages, hello Thundering Herd.
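
As a toy illustration (every name here is made up), producers can flood a queue far faster than a slow consumer can drain it, and the backlog is the herd:

```python
import queue
import threading
import time

events = queue.Queue()


def producer(n: int) -> None:
    """Ingestion is cheap: accept a burst of events almost instantly."""
    for i in range(n):
        events.put(i)


def consumer() -> None:
    """Processing/persisting is expensive: one event at a time, slowly."""
    while True:
        events.get()
        time.sleep(0.05)  # simulate a slow write to the datastore
        events.task_done()


threading.Thread(target=consumer, daemon=True).start()
producer(10_000)  # the burst arrives...
time.sleep(1)
print(f"backlog after 1s: {events.qsize()} events")  # ...and the backlog is the herd
```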

A load test would certainly help us prepare for a Thundering Herd as one type of stress, but what if part of the system was not even there, or was late to the game? That is where Chaos Engineering comes in. A cascading failure is very hard to test for without Chaos Engineering. Historically associated with the power grid, a cascading failure is a failure in one part that triggers failures in other parts. In distributed-system land, this means hunting down single points of failure and making sure our application/infrastructure is robust enough to handle a part going away.
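
One common defense against cascading failure is a circuit breaker: fail fast once a dependency looks unhealthy so callers don’t pile up behind it. Here is a minimal, hypothetical sketch of the idea in Python (not a production library, and the thresholds are arbitrary):

```python
import time


class CircuitBreaker:
    """Fail fast once a dependency looks unhealthy, instead of letting
    callers queue up behind it and drag the rest of the system down."""

    def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: dependency presumed down")
            self.failures = 0  # half-open: let a trial call through
        try:
            result = fn(*args, **kwargs)
            self.failures = 0  # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.time()
            raise
```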

A large part of the investment where Chaos Engineering and Site Reliability Engineering intersect is gaining more control over how problems/failures manifest themselves to users. The adage “slow is the new down” rings true as user expectations continue to rise. Investment in Chaos Engineering will continue to grow as we as an industry keep raising the bar on our distributed systems engineering craft. A fantastic place to run Chaos Engineering tests is your Continuous Delivery Pipeline.

Chaos Engineering as part of your CD Pipelines

As newer ways of building confidence in your systems gain traction, your Continuous Delivery Pipeline is a great spot for orchestrating those confidence-building steps. The key is flexibility: the ability to run a robust pipeline. Crucial to the Chaos Engineering philosophy is the ability to run experiments.

Chaos Engineering vendor Gremlin has a perspective on Chaos Engineering strategy in a pipeline. Gremlin re-affirms that experimenting is a very important part of Chaos Engineering, as you are testing your hypotheses and bubbling up findings you had not thought about before.

Depending on your development or organizational philosophy, Chaos Engineering tests/experiments can be used as judgment calls, for example to either promote or fail a canary in a canary release. If that is too big a leap, you can certainly design a Harness Pipeline to include a Stage that executes experiments and gathers results, so that in the future you can make decisions based on the Chaos Engineering tests.
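
In practice, the gate can be as simple as a script step in the pipeline that runs the experiment and lets its exit code decide the fate of the Stage. The Python sketch below assumes a hypothetical `run-chaos-experiment.sh` wrapper around your chaos tooling; it is not a Harness or Gremlin API:

```python
import subprocess
import sys

# Run the chaos experiment against the canary; this invocation is a
# placeholder for whatever tooling you use (Gremlin, a home-grown script, etc.).
result = subprocess.run(
    ["./run-chaos-experiment.sh", "--target", "canary"],
    capture_output=True,
    text=True,
)

print(result.stdout)

# A non-zero exit code means the hypothesis was rejected; propagating it
# fails this pipeline step, which in turn fails the canary.
sys.exit(result.returncode)
```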

Harness Here to Help

The Harness Platform is purpose-built for orchestrating confidence-building steps. As in any experiment, a pillar of Chaos Engineering is having a baseline. But imagine you are new to a team or an application, or part of a team, such as an SRE team, that covers dozens of applications you did not write yourself. Running Chaos Engineering tests for the first time would require either isolating or spinning up a new instance of an application and its associated infrastructure so you can experiment without production-impacting repercussions.

If your applications are not deployed through a robust pipeline, creating another segregated deployment can be as painful as the normal ebbs and flows of deploying the application itself. As you move along the Chaos Engineering maturity journey and Chaos Engineering tests come to be viewed as mandatory coverage, integrating them into a Harness Workflow as a judgment call or failure strategy is simple by convention. Feel free to sign up for a Harness Account today as you dip your toes into Chaos Engineering!

Cheers,

-Ravi
