August 1, 2022

Software Testing Best Practices from QA Legends


As technological capabilities continue to expand like a supernova, it is vital to keep up with the changes to ensure the web or mobile application you are developing is the best it can be. Outdated testing strategies will hinder your app as soon as it is deployed. Luckily, we assembled a roundtable of QA industry legends, in collaboration with TechWell, to share the latest and greatest on current software testing best practices and predictions.

Michael Bolton (Lead Consultant, DevelopSense), Janna Loeffler (Director of Engineering, mParticle), James Bach (Founder and Principal Consultant, Satisfice, Inc.), and Eran Kinsbruner (DevOps Chief Evangelist and Senior Director, Perfecto) are some of the testing industry’s brightest minds and the folks who have written the books on how we define modern quality today. Collectively, they shared a few key aspects of what software testing best practices look like in 2022.

This blog will highlight some of the major software testing best practices they discussed, but be sure to watch the full webinar to hear everything they covered. 

Use NPS to Measure Success 

Measuring success is not as simple as the often-cited “Pass” or “Fail.” Rather than measuring success, consider assessing success as part of your software testing best practices. You do not simply measure a situation; you assess it to determine the next course of action. A good way to assess the success of an app is to look at the Net Promoter Score (NPS). NPS is a tool for examining customer experience and predicting business outcomes: customers rate how likely they are to recommend your product on a 0-10 scale, and the score is the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6).
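To make the arithmetic concrete, here is a minimal Python sketch of that calculation. The function name and survey responses are hypothetical; real NPS tooling handles collection and segmentation for you.

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 'how likely are you to recommend us?' ratings.

    Promoters score 9-10 and detractors 0-6; passives (7-8) count toward
    the total but cancel out. The result ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one survey response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical post-release survey responses for your app:
responses = [10, 9, 8, 7, 10, 3, 6, 9, 10, 2]
print(f"NPS: {net_promoter_score(responses):+.0f}")  # NPS: +20
```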

Tracking data through customer experience and customer service is one of the best indicators of how successful your app is for your intended audience. Hard data is certainly useful, but so is asking yourself a few subjective questions:

  • Are there hidden risks your tests have not yet uncovered? 
  • Does the business have enough information to address risk? 
  • What exactly does success mean to you? 

Keep your ear to the ground. Leverage feedback received through social media, learn from the data coming out of pre-production, separate useful testing data from the noise, and optimize your tests accordingly.

Explore the Impact of AI & ML Testing

For many in the testing industry, the idea of AI- and ML-assisted testing can be a sensitive subject. Many believe the end goal of AI and ML testing is to push human testers out of their jobs. Little do these naysayers know that AI and ML are two of the strongest tools testers have at their disposal for putting software testing best practices into action.

While there have been advances in AI and ML within the testing industry, these technologies still have a long way to go. They are excellent tools for humans to leverage, but they are just that: tools. AI and ML testing are not magic solutions for your testing strategy. Consider these instances in future tests where AI and ML can ride alongside you in the passenger seat, not the driver's seat:

  • Test Creation. Use AI and ML to help create better test cases. Allow intelligent algorithms to crawl your apps and model user journeys while creating the initial testing baseline for engineers. 

  • Test Flakiness. When tests reveal themselves to be faulty, the blame game starts: it must be the framework, it must be the engineers, etc. By using AI for data mining, you can make sense of what went wrong and where (a simple version of this idea is sketched after this list). Carry that information into streamlining test creation. You can also self-heal faulty test scenarios through object identification, error classification, automated root-cause analysis, and more. 

  • Assessing Current Standards. Frameworks are constantly improving. For example, single-codebase solutions like Flutter make maintenance a breeze, yet they raise a question: How do you test the different binaries compiled from a single codebase? Today, each platform is tested by a different test framework, so a modern, intelligent solution that can test against all platforms (web, mobile, desktop) would be extremely powerful. Single codebases will help testing scale and usher in a new wave of AI and ML testing. 
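To illustrate the data-mining idea from the test flakiness bullet, here is a minimal Python sketch that flags tests whose outcome flips between runs of the same commit, a common flakiness signal. The test names, CI history, and threshold are hypothetical, and a production tool would mine far richer signals (object identification failures, error classes, timing data).

```python
from collections import defaultdict

def find_flaky_tests(runs, threshold=0.2):
    """Flag tests whose outcome flips between runs of the same commit.

    `runs` is a list of (commit, test_name, passed) tuples. A test that
    both passes and fails on the same commit is behaving
    non-deterministically; return tests whose flip rate across commits
    exceeds `threshold`.
    """
    outcomes = defaultdict(set)            # (commit, test) -> outcomes seen
    for commit, test, passed in runs:
        outcomes[(commit, test)].add(passed)

    flips = defaultdict(int)
    totals = defaultdict(int)
    for (commit, test), seen in outcomes.items():
        totals[test] += 1
        if len(seen) > 1:                  # passed AND failed on one commit
            flips[test] += 1

    return {t for t in totals if flips[t] / totals[t] > threshold}

# Hypothetical CI history: test_checkout flips on commit "abc".
history = [
    ("abc", "test_login", True), ("abc", "test_login", True),
    ("abc", "test_checkout", True), ("abc", "test_checkout", False),
    ("def", "test_checkout", True), ("def", "test_checkout", True),
]
print(find_flaky_tests(history))           # {'test_checkout'}
```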

Like any powerful tool, AI and ML testing can (and likely will) be misused in the future. Some in the testing industry have become deluded by the power of these tools and see them as a fix-all solution. In other words, it is the illusion of a solution.

How can these advanced tools ever hope to determine what counts as a successful outcome without human supervision? The short answer is they cannot. Humans decide what the correct model is for your testing strategy. The distinction matters: AI and ML are tools within testing, not an omnipotent testing solution.

Set Benchmarks for Better Testing

One of the most asked, and most oversimplified, questions in the testing industry is: How do you test better? Better compared to what? To answer that question, you must set a standard, a benchmark, against which you can compare your testing.

Are you: 

  • Better than yourself over time? 
  • Better than the average tester? 
  • Better than lazy testers? 

No tester can promise to find every bug, but every good tester will learn from each bug that surfaces later down the line and apply that lesson to future tests. To be a better tester, to test better, you must look inward at yourself. Just like health and exercise, there is no shortcut or easy solution to becoming a better tester.

Over time, it is a helpful software testing best practice to be deliberate about practicing your tests. But what does that mean, exactly? You can affectionately refer to the practice as performing your own “test-opsy.”

Analyze the testing you just completed by recording yourself and then playing it back once you’re done. Watch what you do and what you don’t do. If you get into the habit of performing a test-opsy on your tests once a month, you will be amazed at what you can learn from yourself. 

Another aspect of testing that some testers find difficult is articulating what they are doing and why they are doing it. Just like examining the playback of the tests you perform, get in the habit of speaking aloud the finer points of your testing approach. This software testing best practice instills confidence within your team and in yourself.

Do not talk in absolutes (we are looking at you, Pass/Fail test results...do not do that). Instead, explain what problems you found. Take the problems you found and map out their path to risk. Doing this provides a much richer overview of the state of your application. 
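As a small illustration of what richer-than-Pass/Fail reporting can look like, here is a minimal Python sketch of a finding record that maps each problem to the risk it creates. The field names and example findings are hypothetical; the point is the shape of the report, not the specific schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One observed problem, mapped to the business risk it creates."""
    summary: str    # what you saw happen
    where: str      # feature or component affected
    severity: str   # e.g. "low", "medium", "high"
    risk: str       # what the problem could cost the business

findings = [
    Finding("Checkout spinner never clears on slow networks",
            where="checkout", severity="high",
            risk="abandoned carts and lost revenue"),
    Finding("Session survives logout on shared devices",
            where="auth", severity="high",
            risk="account exposure and support escalations"),
]

# A report like this tells the business what happened and why it
# matters, not just that a script returned "Fail".
for f in findings:
    print(f"[{f.severity}] {f.where}: {f.summary} -> {f.risk}")
```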

Bottom Line

Your web or mobile app is only as good as the tests you put it through. In 2022, what counts as an effective test can be radically different from what it was even a year ago. So, why not partner with an industry-leading testing platform? Perfecto partners directly with you to address your testing needs, no matter how big or small, to ensure that a high-quality, 5-star app is your end result.

Test smarter, not harder. Find out how Perfecto can help you revolutionize your testing strategy. 

Start Trial