Testing. One word that can strike a love/hate emotion into your developers. Like the development-versus-operations silos of days gone by, there were once additional silos between development teams and quality assurance teams. In modern development teams, that adversarial relationship with QA is going away, and a solid example of that is the SDET, or software development engineer in test, role. SDETs are highly specialized engineers who provide test automation strategy, coverage, and remediation expertise to the team.
In my academic and early professional career, I used to rank development ahead of quality assurance and saw the QA team as the team that “holds up my progress,” finding all sorts of wonky bugs that would clearly never happen. The longer I am in software development, the greater respect I have for QA teams, and with the advent of pair programming, I am in awe of the skill set required to move the testing needle forward.
Like many facets of technology, testing paints with a wide brush. When developing a feature, as software engineers we are constantly executing the feature as we iterate. Once we get the feature right in our heads, the traditional approach would be to send the feature off for testing. In modern development organizations, we are constantly testing; even our behavior while developing is to consistently exercise the feature. Testing typically falls into two buckets: what you did, and the impact of what you did.
Testing can generically follow one of two buckets. The first bucket is testing what you made, e.g. feature testing. The second bucket is testing what impact you made, e.g. integration/system testing. These two buckets can be really wide; for example, environmental and performance tests can fall into either bucket depending on whether you are making application infrastructure changes or just application changes. Typically, in the journey your features make towards production, you test what you have made and then you test the impact.
Software development is certainly an interpretive exercise. We can be laser-focused on a method or two, getting our function just right in our heads. Even the best-written design document is still left to interpretation by the developer. As humans we are a sum total of our experiences, and from person to person we interpret items differently. To make sure that we are all on the same page, having the ability to objectively test the masterpiece you created is important.
One of the first measures when talking about quality is code coverage. Usually expressed as a percentage, code coverage is a measure of how much source code is executed during testing. There are several flavors, such as function or condition coverage. Code coverage usually focuses on what we write versus what we leverage; having deep code coverage on third-party open source can be difficult since we are assuming a certain level of quality and functionality.
Code coverage is an aggregation of the unit tests being performed. Unit tests focus on method/module execution, e.g. the core of what we write as software engineers. Let’s say we are working on a calculator application that already has functionality to add, and now we add a method to multiply numbers; we would write unit tests to exercise our new multiplication method.
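To make that concrete, here is a minimal sketch of what those unit tests might look like (the multiply function and module layout are assumptions for illustration):

```python
import unittest

def multiply(a, b):
    """Hypothetical new calculator method under test."""
    return a * b

class TestMultiply(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(multiply(3, 4), 12)

    def test_negative_number(self):
        self.assertEqual(multiply(-3, 4), -12)

    def test_multiply_by_zero(self):
        self.assertEqual(multiply(5, 0), 0)

if __name__ == "__main__":
    unittest.main()
```

Running a suite like this under a coverage tool, e.g. `coverage run -m unittest` or `pytest --cov`, is what produces the coverage percentages described above.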
Open source is a pillar of many of the applications we write, though this leaves us with an interesting problem: if we did not write the libraries, how do we test them, or assume a certain level of quality, back to the code coverage pillar? The adage that open source ages like milk, not like wine, proves true given the velocity at which libraries are included and contributed to.
As engineers, we take a lot of pride in what we create, and once we are satisfied with the quality of our feature, it is time to integrate the feature into the greater fold of the application/platform. A big focus of integration tests is validating and measuring the impact you made on the greater application.
Software is rarely created in a vacuum and your features have to go somewhere. Testing the impact on the broader application or platform is crucial. If we as humans are the sum total of our experiences, our systems are the sum total of all of our contributions throughout the years.
Merging and integrating your features into the fold is key to software development. Integration testing focuses on the combination of different modules and environments, though traditional testing methodologies might treat integration and system testing [e.g. taking the environment into account] as separate concerns.
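As a sketch, an integration test might exercise the calculator through a deployed service rather than calling the method directly (the endpoint and URL below are assumptions for illustration):

```python
import unittest
import requests  # third-party HTTP client: pip install requests

# Hypothetical base URL where the calculator service is deployed
BASE_URL = "http://localhost:8080"

class TestCalculatorIntegration(unittest.TestCase):
    def test_multiply_endpoint(self):
        # Exercises routing, serialization, and the multiply logic together,
        # i.e. the combination of modules rather than one method in isolation
        resp = requests.get(f"{BASE_URL}/multiply", params={"a": 3, "b": 4})
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.json()["result"], 12)

if __name__ == "__main__":
    unittest.main()
```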
Taking production load into account, soak testing is used to validate system behavior over time. Soak testing is similar to load testing in that you are placing load on the system, but a soak test runs for a much more prolonged period. Soak testing is different from a short performance test because soak tests are designed to find weaknesses that only surface under sustained load near capacity, such as memory leaks or resource exhaustion.
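A tool such as Locust can sketch the difference: the script below places a steady, production-like load, and it is the prolonged run time that makes it a soak test rather than a short load test (the endpoint is the hypothetical calculator service from earlier):

```python
from locust import HttpUser, task, between

class SoakUser(HttpUser):
    # Simulate realistic think time between requests
    wait_time = between(1, 3)

    @task
    def multiply(self):
        self.client.get("/multiply", params={"a": 3, "b": 4})
```

Running it headless for a long window, e.g. `locust -f soak.py --host http://localhost:8080 --users 50 --spawn-rate 5 --run-time 8h --headless`, keeps the load steady long enough for slow-burn weaknesses to surface.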
There is a direct correlation between performance and conversion. Performance testing is used to gauge system performance and establish baselines for SLAs. When building features, how end-to-end performance will look, taking environmental infrastructure and concerns into account, is an unknown until you get into those aspects. Like any other testing, performance tests provide feedback and the ability to tune based on the results.
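A baseline can be as simple as sampling latencies and checking a percentile against the SLA; the sketch below assumes the hypothetical calculator endpoint and an illustrative 250 ms p95 target:

```python
import statistics
import time

import requests

BASE_URL = "http://localhost:8080"  # hypothetical service under test
SLA_P95_MS = 250                    # assumed SLA target for illustration

def measure_latencies(samples=200):
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(f"{BASE_URL}/multiply", params={"a": 3, "b": 4})
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
    return latencies

if __name__ == "__main__":
    latencies = measure_latencies()
    p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
    print(f"p95 latency: {p95:.1f} ms (target: {SLA_P95_MS} ms)")
    assert p95 <= SLA_P95_MS, "p95 latency exceeds the SLA baseline"
```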
Because of the complexity of the systems we work on, a concept called the Fog of Development steps in; similar to the Fog of War, situational awareness of your changes is difficult in distributed systems. With Chaos Engineering, you inject failures to build up system resilience, especially in parts of the platform/system/application where you might not have thought failure was possible.
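At the application level, a minimal sketch of failure injection is a wrapper that randomly raises an error so you can verify the caller’s fallback behavior (the function and error choices here are illustrative; tools like Chaos Monkey or Litmus do this at the infrastructure level):

```python
import functools
import random

def inject_failure(probability=0.1):
    """Chaos-style decorator: randomly fail a call so we can observe
    whether the caller's retry/fallback handling actually works."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < probability:
                raise ConnectionError("injected failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_failure(probability=0.2)
def fetch_price(sku):
    # Hypothetical downstream call we assumed would never fail
    return 9.99

for _ in range(10):
    try:
        fetch_price("ABC-123")
    except ConnectionError:
        print("failure injected; falling back to cached price")
```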
With so many testing approaches and methodologies out there, waiting, as in the waterfall days gone by, for development to be completely over before executing or even building the test cases can drag out project timelines and delay valuable feedback. Part of breaking down the development-versus-quality-assurance silo is embracing test-driven development methodologies.
A big part of breaking down the development-versus-quality-assurance silo is testing becoming more ingrained in the DNA of development teams. Test-driven development, or TDD, is a development methodology that makes the requirements test-case centric. If we agree on how to test something before we develop it, the features being developed should more closely match the requirements. Like any pillar of computer science, we are just shifting complexity around: TDD puts a lot of emphasis on the quality of the test cases, and “building to pass tests” might limit the leeway given to the development team.
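In TDD the test comes first; a sketch for a yet-to-be-written divide method in our calculator might start with the tests below, with the implementation then written just to make them pass (names are assumptions for illustration):

```python
import unittest

class TestDivide(unittest.TestCase):
    # Written first: these encode the agreed-upon requirements
    def test_divides_evenly(self):
        self.assertEqual(divide(12, 4), 3)

    def test_divide_by_zero_raises(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

# Written second: the minimal implementation that satisfies the tests
def divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

if __name__ == "__main__":
    unittest.main()
```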
As an engineer, it might also seem that you are spending a lot of time building test cases. Some of my early challenges trying to achieve code coverage were due to mocks (mock objects). Creating almost as much test code as there were lines of code in my method seemed weird, and the mocks would go off into the ether after we finished the project. Though with the advent of pipelines, you can keep all your hard work surrounding tests to be leveraged long after your time on a project.
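For those unfamiliar, a mock stands in for a real dependency so the unit test stays fast and isolated; the sketch below mocks out an external HTTP call (the function and URL are assumptions for illustration):

```python
import unittest
from unittest.mock import Mock, patch

import requests

def get_exchange_rate(currency):
    # Hypothetical function that depends on an external service
    resp = requests.get(f"https://rates.example.com/{currency}")
    return resp.json()["rate"]

class TestExchangeRate(unittest.TestCase):
    @patch("requests.get")
    def test_rate_parsed_from_response(self, mock_get):
        # The mock replaces the real HTTP call, so the test is
        # deterministic and needs no network access
        mock_get.return_value = Mock(json=lambda: {"rate": 1.25})
        self.assertEqual(get_exchange_rate("EUR"), 1.25)
        mock_get.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```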
Orchestrating your testing/confidence-building steps is a crucial part of your software delivery pipelines. Test automation vendor Mabl talks about DevTestOps, embracing the importance of your test suite as part of a pipeline. The rationale is feedback: engineers knowing the tests they are subject to as they roll on and off projects helps shorten the learning curve and increase quality.
When organizing your tests, fine-tuning the radio dials between innovation and control is sometimes hard to get right. You don’t want to stifle innovation, but you also want to ensure quality, consistency, and controls. At Harness [especially at our Harness University], we preach a “tightening the screw” model, shown in the graphic below: as you traverse each stage/environment, you increase the rigor of the test suite.
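Expressed as a rough sketch (the stage and suite names are illustrative assumptions, not a Harness configuration), each stage runs everything the previous one ran, plus more:

```python
# "Tightening the screw": rigor increases as a feature nears production
PIPELINE_RIGOR = {
    "dev":        ["unit"],
    "qa":         ["unit", "integration"],
    "staging":    ["unit", "integration", "performance", "soak"],
    "production": ["unit", "integration", "performance", "soak", "chaos"],
}

for stage, suites in PIPELINE_RIGOR.items():
    print(f"{stage}: {', '.join(suites)}")
```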
By increasing the rigor stage by stage, innovation and remediation can occur without major roadblocks. Harness is here to partner with you to get your test suites, today and in the future, into a pipeline.
The benefit of leveraging the Harness Platform is orchestrating both the new and the old paradigms in software delivery. As new testing methodologies such as Chaos Engineering come about, you can orchestrate these confidence-building steps with the Harness Platform.
The Harness Platform, by design and convention, allows you to tie your release strategy to the outcome of your test suites. The above workflow even has the benefit of Harness’s Continuous Verification, which triggered a rollback during the production deployment. If you have not already, feel free to sign up for a Harness Trial and check out our newly minted Harness Expert section of the community for tips and tricks from the field.
Cheers,
-Ravi