Harness Chaos Engineering (HCE) simplifies chaos engineering for enterprises by leveraging the open-source LitmusChaos project and offering a comprehensive, free plan with features like a cloud-native approach, extensive fault library, centralized control plane, and native integration with Harness pipelines. This enables structured experimentation, observability, and hypothesis validation to build resilient applications, supported by governance enforcement, detailed analytics, and guided chaos experiment execution.
Harness Chaos Engineering (HCE) is powered by LitmusChaos, the open source CNCF chaos engineering project. HCE adds enterprise features on top of it to make chaos engineering easier to adopt. Harness offers a free hosted LitmusChaos, which includes features equivalent to LitmusChaos and also bundles Harness platform features such as RBAC and hosted logging at no cost.
---
Getting Started with Harness Chaos Engineering
Build resilient applications using the following steps:
1. Choose or build your application
2. Configure the chaos control plane:
- Set up an environment
- Set up chaos infrastructure
3. Create chaos experiments in your application
4. Execute the chaos experiments
5. Analyze the results
Chaos experiments need appropriate observability infrastructure to validate the hypotheses around the steady state. The practice of chaos engineering consists of performing experiments repeatedly by injecting various potential failures (chaos faults) to simulate real-world failure conditions against different resources (targets).
Harness Chaos Engineering simplifies chaos engineering practices for your organization. The diagram below describes the steps to induce chaos into an application.
---
The standard chaos experimentation flow involves the following steps:
1. Identify the steady state of the system or application under test and specify its service-level objectives (SLOs)
2. Hypothesize the impact a particular fault or failure would cause
3. Inject this failure (chaos fault) in a controlled manner (with a pre-determined and minimal blast radius)
4. Validate whether the hypothesis holds and the system still meets its SLOs, and take appropriate action if a weakness is found
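As a simple illustration, a steady-state check for the demo application used later in this tutorial could assert that every microservice is fully available before and after fault injection (a minimal sketch; the hce namespace comes from the setup steps below):
❯ kubectl get deployments -n hce   # steady state: READY matches the desired replica count for every microservice
❯ kubectl get pods -n hce --field-selector=status.phase!=Running   # expect no results while the system is in its steady state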
---
HCE goes beyond fault injection, helping you set up a fully operational chaos function based on the original principles of chaos engineering and addressing enterprise needs such as governance enforcement, detailed analytics, and guided chaos experiment execution.
Harness Chaos Engineering Availability
Requirements and project-level permissions to execute chaos experiments:
1. Right permissions: Chaos Resources Role Permissions in Access Control.
2. Permissions on the cloud account, Kubernetes cluster, or VM: Kubernetes RBAC, IAM roles (see the permission-check example after this list).
3. Enable the necessary feature flags.
4. Prepare target systems: VMs or Kubernetes.
5. Prepare network connectivity: identify proxy requirements and firewall rules.
6. Identify application/infrastructure steady-state parameters: using APMs or logs.
7. Image registry requirements: set up the registry with secrets.
8. Specific needs for Kubernetes: namespace quotas, workload-specific labels, annotations, resource limits, proxy environments, and permissions for advanced use cases (SCC, IRSA, and so on).
9. ChaosHub requirements and connectivity to Git sources.
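For item 2, kubectl auth can-i is a quick way to sanity-check Kubernetes permissions before installing the chaos infrastructure (a sketch that assumes the hce namespace and service account used later in this tutorial):
❯ kubectl auth can-i create pods -n hce   # check your own permissions in the target namespace
❯ kubectl auth can-i delete pods -n hce --as=system:serviceaccount:hce:hce   # check what the chaos service account may do (requires impersonation rights)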
To add a new user to a project:
1. In Harness, select a project
2. Expand the Project setup menu and select Access Control (This page lists all the users added to the current project)
3. Select New User and specify the user you want to add.
4. Select the User Groups and roles to enforce access permissions.
5. Select Apply.
---
In the chaos faults reference, you'll find fault-specific requirements listed in the Use cases section of each fault, as shown, for example, in the use cases for the Kubelet service kill fault.
---
The table below lists the chaos infrastructure execution plane components and the required resources. Install these components in your target cluster to allow the chaos infrastructure to run experiments.
---
Step 1: Create a project
TIP: You can also select an existing environment from the list, if one is available, instead of creating a new one.
4. This leads you to a page where you can select an existing infrastructure or create a new one. Select On New Infrastructures and then select Continue.
5. Provide a name, a description (optional), and tags (optional) for your chaos infrastructure. Click Next.
6. In this step, choose the installation type as Kubernetes, the access type as Specific namespace access (click Change to display this access type), the namespace as hce, and the service account name as hce. Select Next.
TIP: The Cluster-wide access installation mode allows you to target resources across all the namespaces in your cluster, whereas the Specific namespace access mode restricts chaos injection to the namespace in which the delegate is installed.
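If the hce namespace and service account from the previous step do not already exist in your cluster, you can create them up front (a minimal sketch; adjust the names if you chose different values):
❯ kubectl create namespace hce
❯ kubectl create serviceaccount hce -n hce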
8. It may take some time for the delegate to be set up in the Kubernetes cluster. Navigate to Environments; once the delegate is ready, the connection status displays as CONNECTED.
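You can also confirm from the cluster side that the chaos infrastructure pods have come up (a sketch; exact component names, such as the subscriber and workflow controller, can vary by version):
❯ kubectl get pods -n hce   # expect the chaos infrastructure components (for example, subscriber, chaos operator, chaos exporter, workflow controller) in Running state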
Once you are ready to target your Kubernetes resources, you can execute the simplest fault, Pod Delete. The pod delete chaos fault deletes the pods of a Deployment, StatefulSet, DaemonSet, and so on, to validate the resiliency of a microservice application.
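Conceptually, the fault's effect resembles deleting the target pods by label and letting Kubernetes recreate them, as in the sketch below (for illustration only; HCE runs this repeatedly with a controlled blast radius and automated validation, and the app=cartservice label belongs to the demo application deployed next):
❯ kubectl delete pod -n hce -l app=cartservice   # what the pod delete fault automates in a controlled, repeatable way
Now deploy the demo application and its monitoring manifests: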
❯ kubectl apply -f https://raw.githubusercontent.com/chaosnative/harness-chaos-demo/main/boutique-app-manifests/manifest/app.yaml -n hce
❯ kubectl apply -f https://raw.githubusercontent.com/chaosnative/harness-chaos-demo/main/boutique-app-manifests/manifest/monitoring.yaml -n hce
❯ kubectl get pods -n hce
12. To list the services available in the hce namespace, execute the command below.
❯ kubectl get services -n hce
13. To access the frontend of the target application in your browser, use the frontend-external LoadBalancer service.
14. Similarly, you can access the Grafana dashboard. Log in with the default credentials (username admin, password admin) and browse the Online Boutique application dashboard. Currently, all the metrics indicate normal application behavior.
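If your cluster does not provision an external IP for LoadBalancer services, port-forwarding is an alternative way to reach both UIs (a sketch; the service names and ports are assumptions based on the demo manifests):
❯ kubectl port-forward -n hce svc/frontend 8080:80   # open http://localhost:8080 for the Online Boutique UI
❯ kubectl port-forward -n hce svc/grafana 3000:3000   # open http://localhost:3000 for the Grafana dashboard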
Step 5: Construct a chaos experiment
Since the target application has been deployed, you can now create a chaos experiment. You will target the pods of the cart microservice with the pod delete fault. Currently, the cart page is healthy and accessible from the front end, as seen in the /cart route.
16. Specify the experiment name, a description (optional), and tags (optional). Choose the target infrastructure that you created earlier, click Apply, and then click Next.
17. In the Experiment Builder, choose Templates from Chaos Hubs and select Boutique cart delete. This allows you to create a chaos experiment using a pre-defined template that already has a pod delete chaos fault configured to target the online boutique application. Select Use this template to continue.
18. Your target is the cart microservice. Hence, the appropriate hce application namespace and the app=cartservice application label have been provided here, and the application kind is deployment. You can discover these entities from within the UI using the search dropdown menu for the respective inputs.
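You can also confirm these values directly against the cluster (a quick sketch):
❯ kubectl get deployments -n hce --show-labels   # confirm which deployment carries the app=cartservice label
❯ kubectl get pods -n hce -l app=cartservice   # the pods the pod delete fault will target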
19. Choose the Tune Fault tab to view and tune the fault parameters. Set Total Chaos Duration to 30, Chaos Interval to 10, and Force to false. You can leave Pods affected perc empty for now. These values for Total Chaos Duration and Chaos Interval mean that the cart microservice pod(s) are deleted every 10 seconds over a 30-second window, that is, roughly three rounds of deletion. By default, at least one pod of the cart deployment is targeted.
20. Navigate to the Probes tab. Here, you can either create a probe or select a pre-defined probe. Click Select or Add new probes. In this tutorial, you can select a pre-defined probe and add it to your chaos fault.
21. To add a pre-defined probe to your chaos experiment, click the filter button and search for http-cartservice. This probe validates the availability of the /cart URL endpoint while you execute the pod delete fault.
22. Click Add to Fault.
NOTE: Under probe details, you can see that the URL is http://frontend/cart and the response timeout is 15 ms. As part of the probe execution, GET requests are made to the specified URL. If no HTTP response is received within 15 ms, that probe execution is considered failed; if all the probe executions pass, the probe status is considered passed. You can find other probe details in the properties field.
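To get a feel for what the probe checks, you can issue the same kind of GET request from inside the cluster (a rough manual sketch; the probe itself applies the 15 ms timeout and retry semantics configured in its properties):
❯ kubectl run probe-check -n hce --rm -it --restart=Never --image=curlimages/curl --command -- curl -s -o /dev/null -w "%{http_code}\n" http://frontend/cart   # expect 200 while the cart service is healthy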
24. This closes the Probes tab. Now click Apply changes to apply the configuration to the chaos experiment.
Step 6: Observe chaos execution
26. Once you click Run, an experiment run is scheduled. You can track the status of every step in this tab.
27. Select Recent experiment runs to view the runs of an experiment. The latest experiment is displayed in the last bar with the status as RUNNING.
28. To check the status of the cart deployment pod, execute the command below. The pod delete fault terminates the cart pod and replaces it with a new pod, for which a container is yet to be created.
❯ kubectl get pods -n hce
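To follow the fault's effect in real time, you can also watch just the targeted pods (a small sketch using the app=cartservice label configured earlier):
❯ kubectl get pods -n hce -l app=cartservice -w   # watch the cart pods being terminated and recreated during the chaos window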
29. As a consequence, if you try to access the frontend cart page, you encounter an error indicating that the application is now unreachable.
30. You can validate this behavior using the application metrics dashboard too. The probe success percentage for website availability (200 response code) decreases steeply, along with the 99th percentile (green line) queries per second (QPS) and access duration for the application microservices. The mean QPS (yellow line) also increases steeply, because no pod is available at the moment to service the query requests.
Step 7: Evaluate the experiment run
NOTE: You can see that the value expected and the value obtained don't match. Hence, the probe fails.
Congratulations on running your first chaos experiment! Want to know how to remediate the application so that the experiment run and probe checks pass? Increase the replicas of the cart deployment to at least two so that at least one pod survives the pod delete fault and keeps the application available. Try running it on your own!
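A minimal sketch of that remediation, assuming the demo deployment is named cartservice:
❯ kubectl scale deployment cartservice -n hce --replicas=2   # keep at least one replica serving /cart while the fault deletes a pod
Re-run the experiment afterwards to check whether the probe now passes.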
For users and fans of LitmusChaos, this is an opportunity to upgrade your chaos engineering journey by migrating to LitmusChaos Cloud. Sign up for free to experience the ease of resilience verification using chaos experiments. The free plan allows you to run a few chaos experiments at no charge for an unlimited time, bringing chaos engineering to the broader community.
Harness' Chaos Engineering ROI Calculator helps estimate business losses from outages and evaluates the ROI of chaos engineering practices. By simulating failures and optimizing recovery, it improves system reliability and reduces downtime, providing a clear financial benefit to organizations.