Continuous Testing Strategy
May 26, 2022

A Successful Continuous Testing Strategy Is All About Planning

Continuous Testing
Automation

Year after year, Perfecto’s State of Test Automation report shows that teams continue to face challenges around sustainable test automation. But how is that possible in light of all the advancements in test automation frameworks? The main factor preventing teams from automating at scale is a flawed continuous testing strategy. 

Chart showing challenges in testing year over year
Challenges around sustainable test automation recur year over year regardless of advancements.

Our most recent State of Test Automation report found that test reporting and analysis, test coverage and prioritization, testing instability, and a lack of resources or skills are the main blockers for continuous testing at scale. None of these blockers points to a specific testing framework such as Selenium, Cypress, or Appium. Everyone understands that frameworks are just tools. Rather, the responsibility falls on practitioners and testing teams to make proper use of these frameworks and other emerging testing technologies. If teams planned for these pitfalls before every sprint or software cycle and validated them between releases, their continuous testing strategies would be far stronger.  

In this blog, I highlight a few areas that, when planned properly, can address most if not all of the above causes of degraded test automation and software quality.

Planning a Continuous Testing Strategy 

Prior to starting any kind of testing activity, both developers and SDETs must understand the scope of the release. By leveraging market data trends across supported platforms (web browsers, mobile devices) with knowledge of future changes, teams can better plan their cycles.  

For example, if you are building a mobile app, looking at which devices are most used across which geographies can help teams set up their device lab and cover the right platforms. Sites like Statcounter provide global browser and mobile usage data, which you can pair with market-driven reports such as our annual Test Coverage Guide.

Metrics showing various mobile OS market share.
Leverage global browser and mobile coverage together with market driven reports.
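To make the idea concrete, here is a minimal sketch of turning market-share data into a device-lab shortlist. The share figures below are invented for illustration; real numbers would come from a source like Statcounter or a coverage guide.

```python
# Hypothetical market-share figures (percent); illustrative only.
share = {
    "iPhone 13": 18.2,
    "Galaxy S21": 12.5,
    "Pixel 6": 4.1,
    "Moto G": 2.8,
}
coverage_target = 30.0  # add devices until cumulative share reaches 30%

lab = []
cumulative = 0.0
# Walk devices from highest to lowest market share.
for device, pct in sorted(share.items(), key=lambda kv: kv[1], reverse=True):
    lab.append(device)
    cumulative += pct
    if cumulative >= coverage_target:
        break

print(lab)  # devices to include in the test lab
```

The coverage target is a policy decision: a higher target means a larger, costlier lab, so teams typically weigh it against release risk per geography.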

Furthermore, choosing the right test cases based on the sprint scope (both manual and automated) creates a test plan that exercises the right code against the right platforms. While this sounds simple, it is harder in practice within a recurring three-week sprint: test cases and platforms change often due to product updates, new browser and mobile OS versions, and more. 

Test Flakiness and Overall Stability 

While this area may seem unrelated to test planning, it is in fact all about planning. An automated test scenario that was never properly certified and validated to run continuously and deliver the same result every time (unless it uncovers a genuine defect) is a bad test. In continuous testing especially, certifying each test case and ensuring it is properly created and provides relevant insights back to the user is critical to the stability of the entire CI/CD pipeline, as well as to keeping the team on schedule. When the pipeline is suddenly flooded with too much “noise” in the form of false negatives and false positives, it disrupts the schedule, wastes resources, and adds quality risk to the product. 

When it comes to test creation, I recommend going by the famous “three strikes and you’re out” rule. A test that does not pass three times consistently (unless, of course, it uncovers a real issue) has no place in the official test suite. 

Graphic saying three strikes and you're out.
A test that doesn’t pass three times consistently has no room in the official test suite.
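The rule above can be sketched as a small certification gate. This is a hypothetical helper, not a feature of any particular framework: a candidate test must pass three consecutive runs before it earns a place in the official suite.

```python
# Hypothetical "three strikes" certification gate for new tests.

def certify(test_fn, required_passes=3):
    """Return True only if test_fn passes required_passes runs in a row."""
    for _ in range(required_passes):
        try:
            test_fn()
        except AssertionError:
            return False  # one failure during certification rejects the test
    return True

# A deterministic test passes certification.
print(certify(lambda: None))  # True

# A flaky test that fails on its second run is rejected.
state = {"runs": 0}
def flaky():
    state["runs"] += 1
    assert state["runs"] != 2, "intermittent failure"

print(certify(flaky))  # False
```

In a real pipeline the same idea is usually applied by rerunning a new test several times in CI (some runners offer repeat or rerun options) and quarantining it until it passes consistently.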

Test planning and scoping that does not actively look to catch flaky scenarios harms the entire product team. As shown above, it starts with planning and examining test reporting and data from the earliest test development phase. 

Planning for the Right Testing Types 

In the spirit of the agile manifesto, testing should be done across teams and cover all testing types. That means shifting left and automating as many of the relevant API, unit, functional, performance, accessibility, UX, and other testing scenarios for your product as you can exercise inside the software iteration. If you do not plan and scope properly, you will disrupt the entire software quality cycle and end up finding bugs late.  

Finding an accessibility, performance, security, or functional defect outside the sprint or outside of the major CI/CD cycle can cost a lot to the team. Covering all testing types within the main test cycle as early as possible is a critical requirement. 

A list of testing priorities.
Testing should be done across teams and cover all testing types.

Test scoping of all testing types is part of a proper continuous testing strategy that is baked into the CI/CD pipeline. 

Skills and Frameworks 

Implementing all testing types requires the involvement of the entire feature team in an agile/DevOps process. The team needs to match its test frameworks and tools to its objectives and skills. 

For example, using a mix of frameworks like Selenium and Cypress for web application testing can be a great approach. One framework complements the other, and each has its unique benefits. Being able to analyze each technology and realize its value can be a huge benefit to the overall implementation of your continuous testing strategy. 

Logos of software for integration.
One framework complements the other and each has unique benefits that the other doesn’t have.

On the planning side, each practitioner should have a proper timeline that accommodates test creation, maintenance, stabilization, and execution. Having a timeline prevents the process from being rushed at the expense of quality. 

Planning, Test Data, and Mock Services 

For most application development processes, there is a great and ongoing dependency on test data. Test data can help test wider scenarios, feed mock services, and cover testing in early stages when other areas of the product are not yet available. To allow the creation, usage, and maintenance of test data throughout the software iteration (sprint), teams need the right environments and tools to support their testing goals. 

From generating synthetic data for functional and performance testing to generating test data for mock and virtual services, teams need to plan accordingly. Whether teams are looking to create data or use the data in a variety of forms, such as a CSV or database format, teams need to incorporate their plans for data into their continuous testing strategy. 

Chart showing difference between test data and service data.
Data can and should influence what’s next in your testing plans, regression, new test cases, and more.
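As a simple illustration of planned synthetic data, here is a sketch that generates a reproducible CSV of user records. The field names and values are purely hypothetical; real data shapes depend on your application and the services you mock.

```python
import csv
import io
import random

# Seed the generator so the synthetic data is reproducible between runs.
random.seed(7)

# Hypothetical record shape; illustrative only.
fields = ["user_id", "country", "device"]
countries = ["US", "DE", "IN", "BR"]
devices = ["iPhone 13", "Pixel 6", "Galaxy S21"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()
for user_id in range(100):
    writer.writerow({
        "user_id": user_id,
        "country": random.choice(countries),
        "device": random.choice(devices),
    })

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # prints "user_id,country,device"
```

Seeding the generator matters: deterministic data makes failures reproducible, which keeps synthetic-data tests from becoming a new source of flakiness.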

Another important aspect of quality planning is the historical data and incidents that your product has experienced in the past. This type of data can and should influence your upcoming testing plans, regression, new test cases, and more.  

Product Metrics and KPIs 

To continuously improve and plan, team leaders must also look at measurements based on goals and objectives. As previously mentioned, understanding test data to establish a solid and stable pipeline is great. But to improve the process, increase test automation coverage, improve team velocity, and achieve other agile-related KPIs, leaders must set measurable goals and keep raising them over time. 

List of continuous testing metrics
Certain KPIs should be part of retrospectives that teams can use to eliminate bottlenecks and enhance their productivity.

Metrics such as average test case execution duration, number of broken builds, length of a BAT (build acceptance test), and others should be part of retrospectives that teams can use to eliminate bottlenecks and enhance their productivity. 
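Computing these KPIs is straightforward once run data is collected. The record shape below is hypothetical; in practice the numbers would come from your CI server or test-reporting tool.

```python
# Hypothetical pipeline run records; shape and values are illustrative.
runs = [
    {"build": 101, "status": "passed", "duration_s": 420},
    {"build": 102, "status": "broken", "duration_s": 95},
    {"build": 103, "status": "passed", "duration_s": 380},
    {"build": 104, "status": "broken", "duration_s": 110},
]

# Two of the KPIs mentioned above: average execution duration and
# the number of broken builds in the period.
avg_duration = sum(r["duration_s"] for r in runs) / len(runs)
broken_builds = sum(1 for r in runs if r["status"] == "broken")

print(f"average run duration: {avg_duration}s")
print(f"broken builds: {broken_builds}")
```

Trending these numbers across retrospectives, rather than looking at a single sprint, is what exposes the bottlenecks worth fixing first.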

Bottom Line 

A successful continuous testing strategy depends on many different elements, including process fitness, technology, and the practitioners themselves. Orchestrating these elements, with all their underlying complexities, release after release requires proper planning with transparency and visibility into what is working. 

Tackling all of these pillars at once is not recommended. Instead, identify the gaps in each of the above areas in your own project, address them one by one, and improve gradually. 

Start the Journey to Continuous Testing With Perfecto

Start your free trial of our continuous testing platform today.

Free Trial