Discover why traditional end-to-end testing based on assumptions falls short and learn how data-driven testing using real user behavior can improve software quality. Explore modern testing frameworks, challenges, and solutions for creating more effective E2E tests that prevent production bugs.
A common belief in the testing community is that quality test cases can be designed by predicting the end user’s behavior with an application based on an internal understanding of how the software will be used. As a result, end-to-end tests tend to be heavily dependent on assumptions about the user’s actions. You might expect the user to act in a certain way when using your app, but the end user’s flow is almost always unpredictable. And it’s the unpredictable interactions, the ones most end-to-end tests miss, that produce the most significant bugs and customer escalations.
This article will explain how end-to-end testing is done today, common frameworks used in testing, what can go wrong when tests are based solely on internal teams’ assumptions, and how creating end-to-end tests from data about actual user behavior rather than plain assumptions can improve an organization’s overall quality efforts.
End-to-end testing, or E2E testing, simulates a user’s workflow from beginning to end. This can mean tracking and recording all possible flows and test cases from login to logout. For each flow, the input and output data are identified and verified against the expected conditions defined in the test cases.
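As a simple illustration, the sketch below expresses a login-to-logout flow as ordered steps with inputs and expected outputs. The step names, inputs, and expected outputs are hypothetical and not tied to any real application.

```typescript
// A framework-agnostic sketch of an E2E flow expressed as ordered steps.
// Step names, inputs, and expected outputs are purely illustrative.
interface FlowStep {
  action: string;      // what the simulated user does
  input?: string;      // data entered, if any
  expected: string;    // observable output the test asserts on
}

const loginToLogoutFlow: FlowStep[] = [
  { action: 'open the login page', expected: 'login form is visible' },
  { action: 'submit credentials', input: 'user@example.com', expected: 'dashboard is shown' },
  { action: 'open the account menu', expected: 'logout option is listed' },
  { action: 'click logout', expected: 'login form is visible again' },
];

// In a real suite, each step maps to a framework command (navigate, type, click, assert).
loginToLogoutFlow.forEach((step, i) =>
  console.log(`${i + 1}. ${step.action} -> expect: ${step.expected}`),
);
```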
E2E Testing Framework
A testing framework provides the structure for automating the actions performed on an application and turning them into repeatable tests that improve the product. There are many open-source testing frameworks that can be used for E2E test automation. A framework helps make test automation scripts reusable, maintainable, and stable. It is also important to have a test suite that exercises several layers of an application.
Here are some popular open-source test automation frameworks for end-to-end testing.
There are also frameworks like Citrus, OpenTest, WebDriverIO, and Galen that offer a wide range of support for different programming languages and testing needs.
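To make this concrete, here is a minimal sketch of a single E2E test written in WebDriverIO-style syntax. The URL, selectors, and credentials are hypothetical placeholders, and the exact setup depends on the project’s configuration.

```typescript
// A minimal sketch of a single E2E test in WebDriverIO-style syntax.
// The URL, selectors, and credentials are hypothetical placeholders.
import { browser, $, expect } from '@wdio/globals';

describe('login flow', () => {
  it('lets a registered user reach the dashboard', async () => {
    await browser.url('https://shop.example.com/login');

    await $('#email').setValue('user@example.com');
    await $('#password').setValue('correct-horse-battery-staple');
    await $('button[type="submit"]').click();

    // The test passes only if the post-login landing page is actually rendered.
    await expect($('[data-testid="dashboard"]')).toBeDisplayed();
  });
});
```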
E2E tests are expensive in terms of time, money, and effort. A QA engineer has to prioritize the most important flows based on experience and presumptions, but this often means users report more bugs post-production because their environment and usage patterns are very different from those of internal QA teams. For example, for an online store, QA engineers typically test login, add to cart, and checkout as separate tests. Real-world customers, on the other hand, may perform these actions in a complex sequence, such as login → add to cart → go back and search for new products → read reviews → replace previously added items with new ones → add more products to the cart → checkout. Many software issues are encountered only during such complex interactions. However, creating tests for such complex E2E flows is extremely time-consuming, and such flows are, therefore, rarely tested before release.
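For contrast with the isolated login test above, a composite journey like the one described might look roughly like the following sketch, again in WebDriverIO-style syntax. All selectors, URLs, and product names are hypothetical.

```typescript
// A sketch of the composite customer journey described above.
// All selectors, URLs, and product names are hypothetical.
import { browser, $, $$, expect } from '@wdio/globals';

describe('composite shopping journey', () => {
  it('survives a realistic, non-linear flow', async () => {
    // Login
    await browser.url('https://shop.example.com/login');
    await $('#email').setValue('user@example.com');
    await $('#password').setValue('correct-horse-battery-staple');
    await $('button[type="submit"]').click();

    // Add a first product to the cart
    await $('[data-testid="product-card"]').click();
    await $('[data-testid="add-to-cart"]').click();

    // Go back, search for something else, and read its reviews
    await browser.back();
    await $('#search').setValue('wireless headphones');
    await browser.keys('Enter');
    await $('[data-testid="product-card"]').click();
    await $('[data-testid="reviews-tab"]').click();

    // Replace the previously added item with the new one
    await $('[data-testid="add-to-cart"]').click();
    await $('[data-testid="cart"]').click();
    const removeButtons = await $$('[data-testid="remove-item"]');
    await removeButtons[0].click();

    // Checkout
    await $('[data-testid="checkout"]').click();
    await expect($('[data-testid="order-confirmation"]')).toBeDisplayed();
  });
});
```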
Textbooks and traditional testing courses have taught engineers to create E2E tests based on assumptions about the end user’s behavior; these assumptions are rooted in the engineer’s experience, the app requirements, and pure guesswork. Complex user interactions like the ones mentioned above are very hard to envision without user behavioral data. It is no surprise, then, that users find bugs in a product after it goes live, raising concerns about a team’s testing capabilities. The tech industry is heavily focused on user experience, and a positive UX is hugely important.
The gap between test coverage and the bugs reported by consumers widens when engineers predict how consumers will use the application without any solid data. Traditionally, the tester’s job has been to base test cases on the user stories and original requirements provided by the product owner or a business analyst.
There could be multiple factors behind an increase in post-production bugs:
There is an increased emphasis on real-time data because E2E testing based on engineers’ guesswork can lead to longer-term quality problems.
End-to-end tests involve multiple services that are not isolated from one another, so they often require mock services to exercise multiple flows. If the QA team does not ground its tests in user data, the result is often a break-fix loop that increases product downtime and disrupts the production cycle. This frustrates every member of the software development process, from developers to customers.
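As an illustration of the mocking piece, WebDriverIO provides a network mocking command; the sketch below stubs a hypothetical reviews endpoint so a product flow can be tested without the real service. The endpoint, payload, and selectors are assumptions, and network mocking requires a browser session that supports it (for example, Chromium).

```typescript
// A sketch of stubbing a dependent service inside an E2E test.
// The endpoint, payload, and selectors are hypothetical.
import { browser, $, expect } from '@wdio/globals';

describe('product page with a mocked reviews service', () => {
  it('renders reviews that come from a stub', async () => {
    // Intercept calls to the reviews API and return canned data.
    const reviewsMock = await browser.mock('**/api/reviews*', { method: 'get' });
    reviewsMock.respond([{ rating: 5, text: 'Stubbed review for testing' }]);

    await browser.url('https://shop.example.com/products/42');
    await $('[data-testid="reviews-tab"]').click();
    await expect($('[data-testid="review-item"]')).toBeDisplayed();
  });
});
```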
Harness AI addresses the above problems by generating E2E tests from real-time user flows, eliminating the need for guesswork. It analyzes the application from the customer’s standpoint, providing a more proactive way to detect and fix UX bugs before the customer ever sees them. This increases developer productivity as well as customer satisfaction.
In addition to real-time user sessions, Harness AI automates test runs in a CI/CD pipeline that can be viewed on a live Kanban board. Release readiness can be determined by the customer impact of broken flows and code changes that affect the overall quality score.
There are other ways of testing that have proved successful. The following methods are based on data collected from real user experience and in-app actions across various applications.
Consumer-driven contract tests, or CDC tests, are used to test individual components of a system in isolation, and they can be essential when testing microservices. Contract tests based on consumer behavior ensure that the user’s expectations are met: they verify that the requests the consumer actually makes are accepted by the provider and return the expected responses.
In this approach, the consumer drives the contract between itself and the provider (the server). The result is an API that fits the consumer’s actual requirements and handles its real payloads effectively.
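As a concrete illustration, here is a minimal consumer-side sketch using Pact, a widely used open-source CDC tool. The service names, endpoint, and payload shape are hypothetical, and the example assumes Node 18+ for the global fetch.

```typescript
// A minimal consumer-side contract test sketch using Pact (PactV3 API).
// Service names, the endpoint, and the payload shape are hypothetical.
import { PactV3, MatchersV3 } from '@pact-foundation/pact';

const { like } = MatchersV3;

const provider = new PactV3({
  consumer: 'storefront-web',
  provider: 'product-service',
});

describe('product-service contract', () => {
  it('returns a product in the shape the UI expects', () => {
    provider
      .given('a product with id 10 exists')
      .uponReceiving('a request for product 10')
      .withRequest({ method: 'GET', path: '/products/10' })
      .willRespondWith({
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: like({ id: 10, name: 'Widget', price: 9.99 }),
      });

    // Pact spins up a mock provider; the consumer is exercised against it, and the
    // recorded interaction becomes the contract the real provider must honor.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/products/10`);
      if (res.status !== 200) throw new Error('unexpected status from mock provider');
    });
  });
});
```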
In addition to collecting user data via analytics, insights, and live user testing, heatmaps—which demonstrate where and how often users click on various areas of a site—are an important way for a tester to enhance an existing E2E test suite.
Heatmaps and the data derived from them show positive and negative trends in how customers are using an application, in terms of which areas are most popular and which may need more attention. The concentration of clicks, user flow, breakpoints, scrolling behavior, and navigation trends can help create the right set of contract tests and a solid E2E test suite. Such tests rapidly decrease the number of bugs reported in production because the test coverage is derived directly from user data.
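One way to put this behavioral data to work is to replay recorded flows as tests. The sketch below assumes a hypothetical JSON export of a recorded flow (not any particular analytics product’s format) and drives the browser from it in WebDriverIO-style syntax.

```typescript
// A sketch of replaying a recorded user flow as an E2E test.
// The JSON shape and file path are hypothetical, not a real analytics export format.
import { readFileSync } from 'node:fs';
import { browser, $, expect } from '@wdio/globals';

interface RecordedStep {
  type: 'navigate' | 'click' | 'type';
  target: string;   // URL for navigate, CSS selector otherwise
  value?: string;   // text typed, if any
}

const flow: RecordedStep[] = JSON.parse(
  readFileSync('./data/top-user-flow.json', 'utf8'),
);

describe('most common recorded user flow', () => {
  it('replays the recorded steps without hitting an error state', async () => {
    for (const step of flow) {
      if (step.type === 'navigate') await browser.url(step.target);
      else if (step.type === 'click') await $(step.target).click();
      else await $(step.target).setValue(step.value ?? '');
    }

    // Hypothetical error banner selector: the flow should never end on an error page.
    await expect($('[data-testid="error-banner"]')).not.toBeExisting();
  });
});
```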
Click tracking and device-preference data are also helpful for the QA team, particularly when dealing with midsize applications.
There should be a significant decrease in bugs reported by customers once E2E tests are based on user data. Real-time user data offers a wide range of user flows that do not involve any guesswork. Harness AI, for instance, utilizes real-time data from user sessions, records the flows, and generates E2E test scripts even for overlooked areas of an application or site.
Writing and managing E2E tests properly requires less guesswork and more hard data, tracked through metrics such as test coverage, test environment availability, progress, defect status, and test case status. No matter which method is used to create test cases, the test environment needs to be stable in order to run E2E tests. Most importantly, the requirements must be continually updated based on user data.
You can improve your E2E testing with a solid testing framework and a method for measuring user behavior data. Harness AI offers a CX-driven DevOps platform that uses live user sessions, determines real user flows, and validates tests based on such flows. It generates E2E tests to verify the flows in a CI/CD pipeline, which decreases the likelihood of bugs post-production and ultimately improves the quality of the final product. Harness enables you to proactively resolve bugs and boost engineering productivity by focusing on and optimizing your testing cycles.