test impact analysis
September 15, 2020

How Test Impact Analysis Can Accelerate Release Cycles

Automation
Continuous Testing

Test impact analysis (TIA) is defined as “a modern way of speeding up the test automation phase of a build. It works by analyzing the call-graph of the source code to work out which tests should be run after a change to production code,” according to Martin Fowler.

Keep reading to learn all about test impact analysis and how it can help you release faster.

Benefits of Test Impact Analysis

The benefits of adopting and maintaining TIA include both cost savings in software development and a reduced risk of escaped defects. When properly implemented, TIA focuses regression testing, and the overall scope of a test cycle, on each code change.

As an example, consider a standard mobile banking application that runs on iOS and Android devices. Upon a change to the “check deposit” functionality, a TIA system can scope the test scenarios that exercise that functionality as well as the areas of the app it potentially impacts. That assessment then triggers a test cycle on the relevant devices, giving the developer feedback as quickly as possible.
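To make the idea concrete, here is a minimal sketch of such a scoping step. It is purely illustrative and not any particular vendor's implementation; the feature names, dependency map, test names, and device list are all invented for this example.

```java
import java.util.*;

// Hypothetical illustration: scope a test cycle from a changed feature.
// Feature names, the dependency map, and the device list are invented for this sketch.
public class ImpactScoping {

    // Tests tagged by the feature they exercise.
    static final Map<String, List<String>> TESTS_BY_FEATURE = Map.of(
        "check-deposit", List.of("DepositHappyPathTest", "DepositLimitTest"),
        "account-balance", List.of("BalanceRefreshTest"),
        "login", List.of("LoginTest"));

    // Features whose behavior may be affected by a change to the key feature.
    static final Map<String, List<String>> IMPACTED_BY = Map.of(
        "check-deposit", List.of("account-balance"));

    public static void main(String[] args) {
        String changedFeature = "check-deposit";

        // Start with the tests for the changed feature, then add tests for impacted features.
        Set<String> scope = new LinkedHashSet<>(TESTS_BY_FEATURE.get(changedFeature));
        for (String dependent : IMPACTED_BY.getOrDefault(changedFeature, List.of())) {
            scope.addAll(TESTS_BY_FEATURE.getOrDefault(dependent, List.of()));
        }

        // Fan the scoped suite out across the devices the banking app supports.
        for (String device : List.of("iPhone 12 / iOS 14", "Pixel 5 / Android 11")) {
            System.out.println("Trigger " + scope + " on " + device);
        }
    }
}
```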

How Can Test Impact Analysis Produce the Proper Outputs Upon Code Changes?

While there are various ways to implement and use TIA, the most efficient ones use either:

  • AI/ML models.
  • Simple tagging.

Tagging helps classify the tests based on features, user journeys, testing types, and more.

Among the key challenges teams have with adopting TIA are the implementation and maintenance of such tools. Modern software uses open source libraries, APIs, and other dependencies in addition to the developers’ code. Continuously understanding code impact on quality requires sophisticated analysis of the code repositories early in the cycle and upon code changes.

As if that were not complicated enough, the only available entry points are the code coverage and TIA data from previous builds, since the new build has not yet been analyzed and no new test automation code has been written for it. TIA must therefore work retroactively as well as on the current repositories. If an uncovered area is detected between software iterations, regression tests must be executed in the nearest cycle.
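A minimal sketch of that retroactive check might diff the files touched in the new build against the coverage recorded for the previous one. The file names and data structures below are illustrative only, assuming coverage and change data are already available as sets of file names.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: flag changed files that the previous iteration's
// coverage data never touched, so regression tests can be scheduled.
public class CoverageGapCheck {
    public static void main(String[] args) {
        // Coverage recorded against the last analyzed build (hypothetical data).
        Set<String> coveredFiles = Set.of("DepositService.java", "LoginController.java");

        // Files changed in the build that has not been analyzed yet.
        Set<String> changedFiles = Set.of("DepositService.java", "StatementExporter.java");

        // Changed files with no recorded coverage are the uncovered areas.
        Set<String> uncoveredChanges = new HashSet<>(changedFiles);
        uncoveredChanges.removeAll(coveredFiles);

        if (!uncoveredChanges.isEmpty()) {
            // In a real pipeline this would enqueue regression tests for the nearest cycle.
            System.out.println("Schedule regression tests for: " + uncoveredChanges);
        }
    }
}
```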

Test Impact Analysis Using Tagging

Tagging methods are extremely powerful for planning and scoping test cycles. Tags can express a virtually unlimited set of classifications and categories that, when given enough thought, can dramatically improve the productivity of an entire DevOps team: developers, testers, DevOps engineers, and executives.

Test frameworks commonly used with Selenium, such as TestNG and JUnit, already support annotations that can help classify and prioritize test scenarios. Such annotations, or tags, can be fundamental to test impact analysis.

Below are some high-level categories to consider when implementing a tagging strategy, which a triggering event (e.g. a code commit) can use to run the right tests from CI or another scheduler. Again, this approach is not driven by any ML or AI algorithms. These tags are part of the Perfecto reporting SDK and are built into a TestNG/Maven project so they run as part of Selenium/Appium scripts. A rough equivalent using plain TestNG groups is sketched below.
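The Perfecto reporting SDK annotations are not reproduced here; as a rough sketch, standard TestNG groups can carry the same kind of classification. The group names ("smoke", "check-deposit", "regression") are example categories, not a prescribed taxonomy.

```java
import org.testng.annotations.Test;

// Sketch of a tagging strategy expressed with standard TestNG groups.
// Group names are example categories chosen for this illustration.
public class DepositTests {

    @Test(groups = {"smoke", "check-deposit"})
    public void depositHappyPath() {
        // Selenium/Appium steps for a successful check deposit would go here.
    }

    @Test(groups = {"regression", "check-deposit"})
    public void depositOverDailyLimit() {
        // Steps verifying the rejection path for deposits over the daily limit.
    }
}
```

A CI job reacting to a commit that touches deposit code could then run only the matching group, for example with Maven Surefire: `mvn test -Dgroups=check-deposit`.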

Tagging Maintenance Is Key

It is imperative to maintain the tagging strategy across teams so that the approach keeps delivering its full value and escaped defects are avoided.

Tagging is also a way of driving flakiness out of the test suite. Marking a test case with a “flaky” keyword classifies it, along with others in that category, for refactoring and automatically excludes them from the next cycle.

On the other hand, classifying a test with a tag such as “regression defect” can automatically increase its priority, ensuring it stays in every cycle until the related code changes again, since such tests have proven effective at detecting critical defects in the past.
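One way to act on those tags is to assemble the next cycle so that the flaky group sits out while the regression-defect group is always included. The sketch below uses TestNG's programmatic suite API; the class and group names are examples only.

```java
import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

import java.util.List;

// Sketch: build a TestNG run that skips "flaky" tests and keeps
// "regression-defect" tests in scope. Class and group names are examples.
public class CycleBuilder {
    public static void main(String[] args) {
        XmlSuite suite = new XmlSuite();
        suite.setName("impact-scoped-cycle");

        XmlTest test = new XmlTest(suite);
        test.setName("scoped-tests");
        test.addIncludedGroup("regression-defect"); // proven defect catchers stay in
        test.addIncludedGroup("check-deposit");     // tests for the changed feature
        test.addExcludedGroup("flaky");             // candidates for refactoring sit out
        test.setXmlClasses(List.of(new XmlClass("DepositTests")));

        TestNG runner = new TestNG();
        runner.setXmlSuites(List.of(suite));
        runner.run();
    }
}
```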

This section did not cover AI or ML, yet tagging can still serve as a semi-automated productivity solution for TIA and for other data-driven decisions by DevOps leaders. The next section shows a fully AI/ML-driven solution for TIA.

Test Impact Analysis Using ML & AI

ML/AI models are increasingly used to determine which tests should run, when, and for how long. By knowing which tests to run, how long those test cases will take to execute, and how much time can be saved per software iteration, executives and DevOps managers can scope their ongoing roadmap with greater accuracy.
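As a back-of-the-envelope illustration with invented numbers: if a full regression suite takes four hours, but impact analysis selects only a quarter of its tests for a typical commit, each iteration saves roughly three hours, and that per-build figure is exactly the kind of estimate such models can surface for planning.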

There are already a few solutions out there that provide AI-based TIA, including SeaLights. Here is how SeaLights describes the impact of its TIA engine on each code change when integrated into the workflow:

“Test Impact Analysis engine correlates every test in every test suite right down to the actual method code each one is testing. This engine then automatically analyzes each build for new and changed code. This results in correlating the test and build to identify only those tests needed to verify the code changes for that particular build.”

How Test Impact Analysis Works With SeaLights

SeaLights implements TIA by providing deep insight into the alignment between tests and application code. The platform dynamically instruments the application at runtime using bytecode instrumentation, so no changes to the application are required.

As the tests are executed against the application, SeaLights ML learns the paths and “footprints” that each test step takes across the application. Combined with the Build Scanner, which identifies the code deltas for every build, this allows SeaLights to identify which tests are impacted by any code change.

By identifying these impacted tests and applying additional policies to build out the dynamic test suite (e.g. new tests, recently failed tests, mandatory tests), SeaLights executes only the tests that need to be run for each build.
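SeaLights' internals are not public here, so the following is only a generic sketch of the idea: intersect each test's recorded footprint with the build's changed files, then layer on the extra policies mentioned above. The test names, footprints, and policy lists are all illustrative.

```java
import java.util.*;

// Generic illustration of footprint-based test selection plus suite policies.
// Not SeaLights' actual engine; test names, footprints, and policies are invented.
public class ImpactedTestSelector {
    public static void main(String[] args) {
        // "Footprint": source files each test was observed to execute.
        Map<String, Set<String>> footprints = Map.of(
            "DepositHappyPathTest", Set.of("DepositService.java", "AccountDao.java"),
            "LoginTest", Set.of("LoginController.java"),
            "BalanceRefreshTest", Set.of("AccountDao.java"));

        // Files the build scanner reports as changed in this build.
        Set<String> changedFiles = Set.of("AccountDao.java");

        // Select every test whose footprint overlaps the changed files.
        Set<String> selected = new LinkedHashSet<>();
        footprints.forEach((test, files) -> {
            if (!Collections.disjoint(files, changedFiles)) {
                selected.add(test);
            }
        });

        // Policies layered on top of pure impact: new, recently failed, mandatory tests.
        selected.add("NewStatementExportTest"); // new test
        selected.add("DepositOverLimitTest");   // recently failed
        selected.add("LoginTest");              // mandatory

        System.out.println("Dynamic suite for this build: " + selected);
    }
}
```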

By applying risk scoring and organizational policies, SeaLights provides quality governance, preventing untested or risky changes from reaching production. Through the tool, clients can designate a build (e.g. a production build) as a reference build, which serves as a benchmark for comparing earlier and future builds.

Through an executive dashboard, developers and testers can drill down into the technical aspects of each build or test suite and evaluate the specific quality risks, as presented in the screenshot below. The highlighted quality risks show the file and code commit that did not undergo any type of testing: functional, regression, end-to-end, or even manual.

Additionally, teams can drill even deeper into the gap analysis and understand the bigger test coverage risks in order to make data-driven decisions.

Bottom Line

Test impact analysis is not a new method, but without AI and ML it is often not fully utilized because it requires tedious implementation and maintenance. With the rise of AI and ML, a TIA strategy can be embedded into CI/CD and help boost overall team productivity and quality.

It is also worth noting that software quality varies based on the target platform (mobile, web), the dependence on third-party code and open-source libraries, and more. Therefore, implementing code coverage and TIA using AI and ML will become the new normal in DevOps going forward.
