How ML & AI Will Shape the Future of Analytics for App Testing
Testing in the era of the waterfall methodology was relatively simple. There was a single, well-defined testing cycle, usually with one team responsible for each set of test results. There were fewer digital platforms to test. And the window for testing was much wider.
The era of DevOps, also known as “continuous everything,” requires a totally different testing pace. Testing cycles happen multiple times a day, with multiple teams and stakeholders pushing code simultaneously.
On average, test coverage metrics today include 20-50 times more platforms. And release windows vary widely, from every 3-4 hours to daily releases to every 1-3 weeks.
As you can see, that’s a lot of testing to cover in a much shorter timeframe.
New Opportunities & Risks
Continuous testing and test automation at scale are the answer for this fast-paced DevOps ecosystem. But these new methodologies also bring along a massive amount of test result data. This sea of data, like any other sea, creates both opportunities and risks.
The risks include:
- Data analysis takes too long, preventing organizations from triaging quickly and getting to the root cause in time.
- Analysis is conducted sequentially, which rules out both prioritization (fixing the most critical or impactful defect first) and correlation between issues, given the large volume of data being analyzed.
These risks prevent teams from expanding their test automation beyond the point they’re at. If you cannot analyze your test results and find defects in a reasonable time frame, what’s the point of adding more automation?
What Can Be Done?
The opportunity here is to utilize technology for an efficient way of analyzing test results.
Teams also need to ask the following questions:
- What is the most impactful problem? Does a certain set of tests share a common denominator of failure — a “failure commonality?” Can we identify a pattern? Is it related to the stability of the backend or the quality of the test code? Or is it an orchestration problem or a capacity concern in the testing lab?
- If we know that there is a certain blocker for our test automation, can we automatically fix it? Or can we at least start with a smart recommendation about what needs to be fixed?
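The “failure commonality” question above can be approached with a very simple form of pattern analysis: normalize raw failure messages into signatures so that failures with the same root cause collapse into one bucket, then rank buckets by frequency. This is a minimal sketch of the idea; the messages and the normalization rules are hypothetical examples, not output of any specific tool.

```python
import re
from collections import Counter

def failure_signature(message):
    """Normalize a raw failure message into a comparable signature.

    Strips volatile details (hex ids, numbers, quoted values) so that
    failures with the same root cause map to the same string.
    """
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    sig = re.sub(r"\d+", "<N>", sig)
    sig = re.sub(r"'[^']*'", "<VAL>", sig)
    return sig.strip()

def top_failure_commonalities(failures, n=3):
    """Rank failure signatures by frequency, most common pattern first."""
    counts = Counter(failure_signature(m) for m in failures)
    return counts.most_common(n)

# Hypothetical failure messages from one test run:
failures = [
    "Timeout after 30s waiting for element 'login-btn'",
    "Timeout after 45s waiting for element 'submit-btn'",
    "Device 0x1A2B disconnected from lab",
    "Timeout after 30s waiting for element 'login-btn'",
]
print(top_failure_commonalities(failures, n=2))
```

The top entry immediately answers “what is the most impactful problem?”: here, three of four failures share one timeout signature, pointing at lab stability rather than three separate test bugs.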
The Future of Analytics: Fail Fast, Fix Fast
The increasing level of visibility into testing allows leaders to fail fast but to also fix fast. This is where testing starts generating value based on smart analytics. With visibility, managers can measure test automation success and identify critical regression testing problems fast enough to impact quality within the cycle.
And this is all happening as machines become more helpful to human-based software testing. AI in software testing is advancing. The timing is perfect — the only thing that experts can agree on is that testing will probably get harder.
To that end, AI and machine learning (ML) offer four big promises:
1. Finding Patterns Through Smart Data Analysis
AI and ML will be able to analyze large sets of data quickly and identify patterns, applying deep learning to large data sets across multiple verticals.
In testing, one of the biggest challenges will be normalizing the data that is flowing from multiple types of testing and understanding the structure of a test.
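The normalization challenge can be made concrete: results arriving from different frameworks have different shapes, and pattern analysis only works once they are mapped into one common schema. Below is a minimal sketch; the source names (“junit”, “cloud_lab”) and their field names are hypothetical stand-ins for two differently shaped inputs, not real framework output.

```python
def normalize_result(raw, source):
    """Map a framework-specific test result into one common schema.

    The input shapes for "junit" and "cloud_lab" are hypothetical
    examples of two different result formats.
    """
    if source == "junit":
        return {"test": raw["name"],
                "passed": raw["status"] == "passed",
                "duration_s": float(raw["time"])}
    if source == "cloud_lab":
        return {"test": raw["testId"],
                "passed": raw["outcome"] == "OK",
                "duration_s": raw["elapsedMs"] / 1000.0}
    raise ValueError(f"unknown source: {source}")

# Two different shapes, one normalized record each:
a = normalize_result({"name": "login", "status": "passed", "time": "2.5"}, "junit")
b = normalize_result({"testId": "login", "outcome": "OK", "elapsedMs": 2500}, "cloud_lab")
```

Once every result, regardless of origin, carries the same `test`/`passed`/`duration_s` fields, the downstream pattern analysis never has to care where a result came from.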
2. Executing on Rule-Based Actions
We will also be able to train an engine to take action based on a set of rules: if a test failed due to “X,” mark it as “Y,” or alternatively perform action “Z” to fix it.
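The “if X, then mark Y or do Z” idea is, at its core, an ordered rule table. This sketch shows the shape of such an engine; the rule predicates, labels, and action names are hypothetical examples.

```python
# Each rule: (predicate on the failure text, label to apply, remediation action).
# All labels and action names here are illustrative, not a real API.
RULES = [
    (lambda f: "timeout" in f.lower(), "environment-issue", "retry_on_new_device"),
    (lambda f: "element not found" in f.lower(), "script-issue", "rerun_object_scan"),
]

def triage(failure_reason):
    """Return (label, action) for the first matching rule, else escalate to a human."""
    for predicate, label, action in RULES:
        if predicate(failure_reason):
            return label, action
    return "unknown", "escalate"
```

Failures the rules recognize are classified and remediated automatically; everything else falls through to a person, which is exactly the “smart recommendation first, full automation later” progression the article describes.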
3. Creating Tests Through AI
AI and ML have big potential for test creation. Traditional test creation requires a lot of time and highly skilled automation engineers. But with AI, machines can understand the structure of the page/app and identify all the objects associated with it, with no need for coding.
If machines have enough notion about the technical structure of the service — as well as defined business logic — test creation can happen at the speed of light.
4. Maintaining Tests & Self-Healing
Test flakiness is one of the biggest problems of the testing industry. Even if you created a very good automated test, chances are pretty high it’ll fail soon. This is especially true if something changes in the UI, interrupting the object identification process (as some objects will change).
Organizations spend 15-25% of their time maintaining automated test cases. That is a significant amount of time that holds innovation (such as the creation of new test cases) back. The automated ability to observe these changes and adjust the automated test (self-healing) is key.
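Self-healing, at its simplest, means trying locators in priority order and recording when a fallback succeeded so the test can be updated. The sketch below uses a toy dict as a stand-in for a rendered page; real self-healing engines score candidate elements across many attributes, so this only demonstrates the fallback-and-record idea.

```python
def find_element(page, locators):
    """Try locators in priority order; report whether a fallback 'healed' the lookup.

    ``page`` is a toy stand-in for a rendered page: a dict of locator -> element.
    """
    for i, locator in enumerate(locators):
        if locator in page:
            healed = i > 0  # a fallback matched: candidate for updating the test
            return page[locator], healed
    raise LookupError("no locator matched; flag test for maintenance")

# The UI changed: the primary locator is gone, but a fallback still matches.
page = {"css:#submit-v2": "<button>"}
element, healed = find_element(page, ["css:#submit", "css:#submit-v2"])
```

When `healed` is true, the engine can rewrite the stored locator automatically instead of failing the run, which is where that 15-25% maintenance time gets recovered.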
This is where the industry is at today. Test managers are sitting on large sets of data and asking themselves things like:
- What shall I do now?
- How can I use it to optimize my automation?
- How can I make it more reliable?
AI and ML can provide managers with more optimized and trustworthy testing. And the insights they receive from analytics can provide them with direction on what to do next.
How Test Automation Will Become Smart
So, the two areas where the combination of test automation and machine intelligence is heading are:
1. Test Design Automation
- User journey/model-based testing and RPA — Learn how the app/service is structured and associate a test scenario with every potential model/user journey.
- DSL model language for testing — By describing several domain and platform models, AI can be leveraged to improve the specification, execution, and debugging of functional tests.
- AI-based language — Combine the first two approaches by training the AI engine to record user journeys/models and use DSL to capture user interactions and intents.
- Test case optimization — The AI engine, together with the DSL created alongside the recorded journeys, learns from previous test execution results as well as changes in the page models/user journeys, and adjusts the tests accordingly.
- Impact testing — This AI-based testing methodology analyzes the impact of changes in the deployed application or service and marks the affected areas for regression testing. It also takes into consideration past failures, platform type, and other smart correlations that may mark an affected area as high risk.
2. Test Maintenance/Optimization
- Traceability — Test requirement -> test cases -> test execution: this is the basic flow of testing. All three areas are growing exponentially in DevOps and continuous testing. With thousands of tests per build today, AI-based traceability is positioned as the only way to navigate this matrix, forward to execution or backward to requirements, while meeting all enterprise-grade compliance needs.
- Code change impact analysis and version control smart correlation — This testing methodology leverages AI to calculate the diff of code changes between versions and recommend which tests should be executed to build the ultimate regression test suite (maximizing test coverage). It can also flag situations where tests are missing for newly added code.
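Diff-driven test selection can be sketched with nothing more than a coverage map from a previous instrumented run. This is a minimal illustration, not a real tool: the map from test name to the set of source files it exercises is assumed to already exist.

```python
def select_tests(changed_files, coverage_map):
    """Pick regression tests whose covered files intersect the diff.

    ``coverage_map``: test name -> set of source files it exercises
    (assumed collected from a previous instrumented run). Also returns
    changed files no test covers: a signal that tests are missing.
    """
    selected = {t for t, files in coverage_map.items() if files & changed_files}
    covered = set().union(*coverage_map.values()) if coverage_map else set()
    uncovered = changed_files - covered
    return selected, uncovered

# Hypothetical coverage map and diff:
coverage_map = {"test_login": {"auth.py", "ui.py"}, "test_cart": {"cart.py"}}
selected, uncovered = select_tests({"auth.py", "new_api.py"}, coverage_map)
```

Here the diff touches `auth.py`, so only `test_login` needs to run, while `new_api.py` surfaces as code no existing test exercises: exactly the “missing tests” situation the bullet above describes.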
Data becomes an enabler of valuable, successful continuous testing. Effective use of data should help teams quickly filter out noise (false negatives) and focus on actual failures that may represent risk to the business.
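One common noise-filtering heuristic is to look at a test’s recent pass/fail history: a result that flips back and forth is more likely flaky than a real regression, which tends to fail consistently once introduced. The window and threshold below are arbitrary illustrative values, not tuned recommendations.

```python
def is_likely_flaky(history, window=10, flip_threshold=3):
    """Heuristic noise filter: frequent pass/fail flips suggest flakiness.

    ``history`` is the test's results, oldest to newest: True = pass, False = fail.
    A real regression usually looks like a run of passes followed by a run of
    failures (few flips); noise alternates (many flips).
    """
    recent = history[-window:]
    flips = sum(1 for a, b in zip(recent, recent[1:]) if a != b)
    return flips >= flip_threshold
```

A dashboard can use this to push alternating tests into a quarantine lane and keep the fail-fast signal focused on failures that actually represent business risk.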
The role of machines in the testing space grew in the last decade — mainly by replacing manual testing with test automation. Looking forward, it seems like human beings will remain close to testing, but we may see an interesting shift in the next decade from SDET (Software Development Engineer in Test) toward DSET (Data Scientists in Test).
This decade, smart test reporting and analytics will become more critical than ever before. Try Perfecto — with smart analytics, you’ll get the fast feedback you need to move at DevOps speed. Start testing with Perfecto today for free — and see how smart analytics can take your web and mobile app testing to the next level.