AI and machine learning in testing have made great strides. Now we’re seeing them infiltrate the DevOps space with new categories of testing tools.
In this blog, we will classify smart ML and AI automation tools into five categories and explain how they solve common challenges faced by dev and testing teams. As the market evolves, more tools will emerge that fit these categories.
Prior to mapping out these tools, it is important to know that some tools classify themselves under different labels. But at the end of the day, all of these tools use some level of ML/AI to solve the most common challenges and pains that DevOps teams face.
Tools leveraging AI and ML algorithms aim to proactively and automatically identify code quality issues, regressions, security vulnerabilities, and more. This is done through code scanning, automated unit test creation, and other techniques.
If your team lacks the skills to address the above objectives, or does not have the time to continuously tackle these tasks, consider some of these options. The outcome will be faster releases, improved quality through fewer escaped defects, and better developer productivity.
Let’s look at DiffBlue as an example. DiffBlue connects to your source control repository (Git, Perforce, etc.) and automatically creates a baseline of unit tests through AI. Once a regression is found, it flags and reports the issue. DiffBlue’s motivation for creating the solution was mostly to improve code quality by helping developers who do not like to own test creation.
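To make the idea concrete, here is a toy sketch of the kind of baseline ("characterization") unit test such a tool generates: it pins down the current behavior of a function so a later change that alters that behavior fails the suite. The function, class names, and values below are hypothetical illustrations, not actual DiffBlue output.

```python
import unittest

def apply_discount(price, percent):
    """Example production code the generated tests would target (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountBaseline(unittest.TestCase):
    """Baseline tests that pin the function's current observed behavior."""

    def test_ten_percent_discount(self):
        # If a future commit changes this result, the suite flags a regression.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)
```

Running these with any unittest runner on every commit is what turns the generated baseline into a regression tripwire.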
In a similar context, Launchable automatically analyzes code upon a pull request and performs a kind of test impact analysis that adapts to the recent code changes. It then selects only the most relevant subset of your regression suite, saving time in approving the code changes and integrating them into the pipeline.
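The core of change-based test selection can be sketched in a few lines. This is a deliberately naive stand-in for what Launchable does with trained models; the file-to-test coverage map and all names in it are hypothetical.

```python
# Hypothetical mapping from source files to the tests that exercise them,
# e.g. mined from historical coverage data.
COVERAGE_MAP = {
    "checkout/cart.py": {"test_cart_totals", "test_checkout_flow"},
    "auth/login.py": {"test_login", "test_checkout_flow"},
    "search/index.py": {"test_search_ranking"},
}

def select_tests(changed_files):
    """Return only the tests relevant to the files touched by a pull request."""
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return sorted(selected)

# A change to the login module selects its own tests plus the shared flow test.
print(select_tests(["auth/login.py"]))
```

Instead of running the full regression suite on every pull request, the pipeline runs only the returned subset, which is where the time savings come from.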
Lastly, Facebook’s Infer project also enables better code quality through its AI algorithm.
The AI engine from Facebook can automatically find null pointer exceptions, memory leaks, concurrency race conditions, and more in Android and Java code. It can also find the same issues, together with coding-convention violations and unavailable APIs, in C, C++, and iOS/Objective-C code.
As opposed to the differential tools above, visual testing addresses the user experience layer and scales validation of the look and feel of a UI (user interface) across digital platforms (mostly mobile and web).
Visual AI testing tools address the pain of constant changes made to the UI layer together with an ever-growing number of platforms, screen sizes, and configurations that make testing coverage a nightmare for test engineers and developers.
Some AI/ML tools that fall into this category are:
For both Applitools and Percy, the developer and/or test engineer will need to embed an SDK or pieces of code into the test automation (Selenium, Appium, others) to establish a baseline of visuals for the web/mobile app. Upon subsequent executions across all target platforms within the test bed, the tools will highlight differences between the actual screens and the baseline, leaving it to the test owner to either report a defect or dismiss the difference.
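The baseline-comparison workflow can be illustrated with a toy pixel diff. Real visual AI tools like Applitools use perceptual comparison models rather than raw pixel equality; the sketch below just shows the flag-if-too-different idea, with screenshots represented as plain 2D lists of pixel values.

```python
def visual_diff(baseline, actual, tolerance=0.01):
    """Flag a screen if more than `tolerance` of its pixels changed."""
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_b, row_a in zip(baseline, actual)
        for pb, pa in zip(row_b, row_a)
        if pb != pa
    )
    return changed / total > tolerance

baseline = [[0, 0, 0], [0, 0, 0]]
actual   = [[0, 0, 0], [0, 255, 0]]  # one pixel out of six changed

print(visual_diff(baseline, actual))  # flagged: ~16% of pixels differ
```

A real engine would then hand the flagged screen to the test owner, who accepts the new rendering as the baseline or files a defect.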
Declarative tools have different use cases from the others but still aim to enhance test automation productivity and stability. Declarative tools that leverage ML and AI have significant abilities related to natural language processing (NLP), domain-specific languages (DSL), robotic process automation (RPA), and model-based test automation (MBTA).
The common ground between these methods is eliminating tedious, error-prone, repetitive actions through smart automation. While we list RPA in this category, that method is not solely about automating testing, but also about automating processes and tasks otherwise done manually.
Focusing on declarative testing, we can take as an example tools like:
These are only a subset of the tools available in an ever-changing market, and each of the above-mentioned tools takes a different approach to creating test automation with AI.
For example, Eggplant AI uses models that are built to mimic the application under test, and then the AI engine automatically goes through the model flows and creates test automation scenarios.
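A simplified version of that model-walking idea: the application is modeled as a graph of screens and actions, and the engine enumerates paths through the model as test scenarios. The model below is a made-up example, not Eggplant’s actual format or algorithm.

```python
# Hypothetical model of an app under test: screen -> [(action, next_screen)].
MODEL = {
    "home":    [("tap_search", "search"), ("tap_cart", "cart")],
    "search":  [("submit_query", "results")],
    "results": [("tap_item", "product")],
    "cart":    [],   # dead end: scenario complete
    "product": [],   # dead end: scenario complete
}

def generate_scenarios(model, start="home"):
    """Enumerate every action path from the start screen to a dead end."""
    scenarios = []

    def walk(screen, path):
        actions = model.get(screen, [])
        if not actions:
            scenarios.append(path)
            return
        for action, next_screen in actions:
            walk(next_screen, path + [action])

    walk(start, [])
    return scenarios

for scenario in generate_scenarios(MODEL):
    print(" -> ".join(scenario))
```

Each printed path is one candidate automated test; a production engine would additionally prioritize paths using what it learns about the app.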
Since this blog is not about comparing tools, I will not detail their pros and cons. However, even with AI, test engineers need to consider maintenance, management of test resources over time, and execution at scale. If a tool supports all of these, great; otherwise, there may be bumps along the way.
Other tools listed above, especially Functionize, leverage NLP to create test automation scripts without requiring any coding skills or development languages.
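To show what translating plain English into automation commands involves, here is a toy step parser. Codeless tools like Functionize use trained language models for this; the regex patterns and command names below are hypothetical stand-ins for the idea.

```python
import re

# Map a plain-English test step to a (command, arguments) pair.
STEP_PATTERNS = [
    (re.compile(r'click (?:on )?the "(.+)" button', re.I), "click"),
    (re.compile(r'type "(.+)" into the "(.+)" field', re.I), "type"),
    (re.compile(r'verify (?:that )?the page shows "(.+)"', re.I), "assert_text"),
]

def parse_step(step):
    """Translate one plain-English step into an automation command."""
    for pattern, command in STEP_PATTERNS:
        match = pattern.search(step)
        if match:
            return (command, match.groups())
    raise ValueError(f"Unrecognized step: {step!r}")

print(parse_step('Click the "Login" button'))
print(parse_step('Type "eran" into the "user" field'))
```

A business tester writes the left-hand English; the tool emits and executes the right-hand commands against Selenium, Appium, or its own driver.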
The major benefits of this tool type are as follows:
The downsides of such tools are:
These types of tools should solve problems for the right persona depending on the skillset available. Without proper strategy and consideration, the overall benefits mentioned above will be missed.
If we had to name one of the top reasons why AI and ML have emerged in the test automation space, it would be test automation flakiness, reliability, and maintenance.
Code-based test automation is by nature less stable. It requires constant tuning per platform or environment, and its entire foundation is the application's objects. These objects tend to change every few weeks, or worse, they are used inefficiently (e.g., a brittle XPath instead of an object ID).
For that purpose, a new era of tools has evolved where test maintenance is assisted by machine learning. In these tools, the main ML engine resides in the self-healing of the recorded scripts.
Some of the tools are as simple as a web browser plugin installation (Mabl, Testim). Some tools that assist in test maintenance with machine learning are richer in their abilities and are integrated into an end-to-end continuous testing solution (Perfecto, Tricentis).
At the heart of these tools is an ML algorithm that, upon each execution and in between runs, “learns” the website and/or application under test. It scores the element locators on each screen of the app based on their reliability and the probability that they will be found successfully.
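A bare-bones sketch of that locator-scoring and self-healing idea: each element keeps several candidate locators, ranked by observed success rate, and the runner falls back down the list when the top candidate breaks. The data structure and scoring here are illustrative, not any vendor’s actual algorithm.

```python
# Hypothetical locator history: hits/attempts observed across past runs.
LOCATORS = {
    "login_button": [
        {"strategy": "id",    "value": "btn-login",   "hits": 98, "attempts": 100},
        {"strategy": "xpath", "value": "//button[1]", "hits": 60, "attempts": 100},
    ]
}

def best_locator(element):
    """Pick the candidate locator with the highest observed success rate."""
    return max(LOCATORS[element], key=lambda c: c["hits"] / c["attempts"])

def find_element(element, lookup):
    """Try candidates in score order; 'self-heal' by falling back when one fails.

    `lookup(strategy, value)` stands in for the driver's element search and
    returns True when the locator still resolves.
    """
    ranked = sorted(
        LOCATORS[element],
        key=lambda c: c["hits"] / c["attempts"],
        reverse=True,
    )
    for candidate in ranked:
        if lookup(candidate["strategy"], candidate["value"]):
            return candidate
    return None

print(best_locator("login_button")["strategy"])  # the stable ID wins
```

In a real tool, each run also feeds results back into the history, so the scores keep adapting as the application changes.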
Regardless of which test automation framework or solution you’re using for your web and mobile apps, it should be quite clear that when scaling your software releases, you’re also scaling the test data and reports that you generate.
The test data originates from multiple sources: test automation engineers, developers, security and ops engineers, analytics, and others. Teams need to be able to make sense of all these sources and make data-driven decisions fast.
ML in reporting helps sort through the data, slice and dice it, and, in advanced cases, automatically classify the root cause of failures, boosting team productivity.
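A minimal illustration of automated failure classification: real reporting solutions train ML models on historical failures, while this keyword-based stand-in just shows the triage idea. The rules and bucket labels are hypothetical.

```python
# Hypothetical triage rules: (keyword in the failure log, root-cause bucket).
RULES = [
    ("element not found", "test issue: broken locator"),
    ("timeout", "environment issue: slow or unreachable service"),
    ("assertionerror", "product issue: possible regression"),
]

def classify_failure(log):
    """Assign a probable root-cause bucket to one failure log."""
    text = log.lower()
    for keyword, label in RULES:
        if keyword in text:
            return label
    return "unclassified"

print(classify_failure("TimeoutError: page did not load in 30s"))
```

Bucketing thousands of failures this way is what lets a team ignore environment noise and focus first on the suspected product regressions.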
By employing a reporting solution that leverages ML, teams can worry less about the size of the data and let machines sort it automatically, removing noise from the pipelines so they can release faster and with confidence.
Get started with the web and mobile automated testing platform that more than half of Fortune 500 companies trust to power high-quality apps. Start the 14-day free trial today!
The key to successful adoption of AI and ML tools lies in a few factors and considerations. These tools must not disrupt existing workflows; they need to integrate into existing processes and tools. And they should not replicate what existing tools do, but solve problems and challenges that existing tools struggle with or fail at.
To get people, processes, and technology to work seamlessly with efficient orchestration and scale, I recommend you start small by identifying key scenarios to apply AI/ML. Align tools to complement the skillsets of different personas, such as business testers and developers. And be sure to understand how you can scale the test automation suite going forward and connect to CI/CD.
DevOps Chief Evangelist & Sr. Director at Perforce Software, Perfecto
Eran Kinsbruner is a person overflowing with ideas and inspiration; beyond that, he makes them happen. He is a best-selling author, continuous testing and DevOps thought leader, patent-holding inventor (automated test exclusion mechanisms for mobile J2ME testing), international speaker, and blogger.
With a background of over 20 years of experience in development and testing, Eran empowers clients to create products that their customers love, igniting real results for their companies.