Test reporting is essential for making sure your web or mobile app is achieving an acceptable level of quality.
Done right, test reporting and analysis can add true value to your development lifecycle by providing the right feedback at the right time.
In this blog, I’ll share what a test report is, the challenges that get in the way of good reporting, and what an effective report should include.
A test report is an organized summary of testing objectives, activities, and results. It is created and used to help stakeholders (product managers, analysts, the testing team, and developers) understand product quality and decide whether a product, feature, or defect resolution is on track for release.
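To make that definition concrete, here is a minimal sketch of a test report as a data structure. The class and field names are my own illustration, not a standard schema or any particular tool’s format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestResult:
    name: str
    status: str        # "passed", "failed", or "skipped"
    duration_s: float  # how long the test ran, in seconds

@dataclass
class TestReport:
    objective: str
    results: List[TestResult] = field(default_factory=list)

    def pass_rate(self) -> float:
        """Share of executed (non-skipped) tests that passed."""
        executed = [r for r in self.results if r.status != "skipped"]
        if not executed:
            return 0.0
        return sum(r.status == "passed" for r in executed) / len(executed)

report = TestReport(objective="Login feature regression")
report.results.append(TestResult("test_valid_login", "passed", 2.1))
report.results.append(TestResult("test_locked_account", "failed", 3.4))
print(f"Pass rate: {report.pass_rate():.0%}")  # Pass rate: 50%
```

Even a toy structure like this shows why a report is more than a pass/fail count: the objective tells the reader what was being validated, and per-test detail is what makes the summary actionable.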
Beyond product quality, a test report also provides insight into the quality of your testing and test automation activities. Organizations typically have four high-level questions about their test automation.
Finally, test reporting should help you understand the achieved value of testing. For example, are you testing anything unnecessarily? Are your tests stable? Were you able to uncover issues early in the process?
A good test reporting process gives insight into and answers for all of these important questions. With it, you can not only improve the quality of an app, but also accelerate your releases.
Agile, DevOps, CI/CD — these hallmarks of modern development have changed the requirements for a “good” test report. Below are a few issues that can get between you and a timely, accurate test report.
Traditionally, a test report was compiled and summarized (using spreadsheets!) as one of the final stages of a waterfall development process. Releases were few and far between, so there was time to compile results, create a report, and make decisions.
The fast release cadences made standard by Agile and DevOps movements have dramatically changed this. Testing needs to happen quickly. Decisions about quality need to be made not in the timeframe of months, but within weeks, days, even hours. If that feedback isn’t available in time, the release is either stalled or shipped with questionable quality.
Today’s testing teams generate mountains of data from tests. Those mountains are created, in large part, by test automation (more testing) and device proliferation (more devices, browsers, and versions).
The more data, the better, right? Yes and no.
Yes, if it’s actionable. No, if it’s not. Many organizations suffer from too much testing data. In that case, it is difficult to make sense of what is valuable and what is just noise.
Noise is created by flaky test cases, environment instability, and other issues that cause false negatives whose root cause we don’t understand. In practice, teams must still work through every failure flagged in the report, real or not.
Reporting, then, is burdened by high volumes of irrelevant information.
Another issue, particularly for larger organizations, stems from the sheer number and variety of teams, tools, and frameworks in use.
Without a uniform way to capture and sort this data across the organization, good test reporting becomes dangerously difficult.
What goes in a test report? That depends on the mix of stakeholders using it as well as the sophistication of the team.
Regardless, its contents should supply fast, actionable feedback. Everything should be described (or displayed in a test automation tool) as simply as possible — but not too simply. It needs the right granularity in the right areas to be useful.
Remember, the test report is used to analyze quality and make decisions. If it is too simplistic, important nuances can be lost and result in poor decisions. If it is too granular, you and the team will have difficulty getting a sense of the overall quality picture.
A very basic test report for a small application or organization should include, at a minimum, the following:
For a larger organization, or for an organization implementing more sophisticated testing, the minimum will not be enough.
Each test report should include sufficient artifacts, like logs, network traffic (HAR files), screenshots, video recordings, and other relevant data, to help the reviewer make data-driven decisions. Test history — including defects found by the test and problematic platforms or features in the product — can provide immense value to reviewers around next steps, test impact analysis, and test scoping for the next cycle.
When you’re releasing quickly, often, and with the help of test automation — as most modern organizations do — smarter testing and analysis is a necessity.
To start, you need to time testing activities so that reporting and analysis are delivered at the most relevant time in your development pipeline.
In the example below, you’ll see unit, smoke, and regression testing timed to align with when each is relevant to the team. Conduct unit testing too late (or get its feedback too late), for example, and you risk delaying a release. Schedule regression tests nightly, so the team can get feedback and take action the next day.
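A simple way to keep that timing explicit is to encode the stage-to-suite mapping as data your pipeline scripts read. The stage names and feedback targets below are illustrative assumptions, not a prescribed setup:

```python
# Hypothetical mapping of pipeline stages to the suites that run there
# and the feedback window each stage is expected to meet.
PIPELINE = {
    "on_commit": {"suites": ["unit"],       "feedback": "minutes"},
    "on_merge":  {"suites": ["smoke"],      "feedback": "under an hour"},
    "nightly":   {"suites": ["regression"], "feedback": "next morning"},
}

def suites_for(stage):
    """Return the test suites configured for a pipeline stage."""
    return PIPELINE.get(stage, {}).get("suites", [])

print(suites_for("on_commit"))  # ['unit']
print(suites_for("nightly"))    # ['regression']
```

Keeping this mapping in one place makes the feedback contract visible: each stage knows which suites it owes the team, and by when.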
Good test reporting is delivered to the right teams at the right time.
Aside from that, you will want a test reporting dashboard tailored to your pipeline. It should include the following:
Executive Overview — Highlighting real-time trends for testing in the Continuous Integration pipeline.
Heatmap of Focus Areas — Mapping emerging issues (risks or other areas).
Cross-Platform Visual Validation — To quickly see functional/UI defects across browsers.
Single Test Report — For detailed root cause analysis, including the artifacts mentioned above.
Report Library — For effective triaging (slicing and dicing of data).
Test reporting has become quite a bit more sophisticated than in the early days of waterfall development. But the end goal — getting actionable feedback — hasn’t changed. To find bugs faster, you need to filter out noise and false negatives. That way, you can focus on the genuine issues for a quick MTTR (mean time to resolution). An efficient test reporting platform, like the one that comes with Perfecto, helps you achieve all the above.
Join Perfecto experts as they share insights into better test reporting and analysis.
See how test reporting works with Perfecto. Sign up for a demo today.
DevOps Chief Evangelist & Sr. Director at Perforce Software, Perfecto
Eran Kinsbruner is a person overflowing with ideas and inspiration; beyond that, he makes them happen. He is a best-selling author, continuous-testing and DevOps thought leader, patent-holding inventor (test exclusion automated mechanisms for mobile J2ME testing), international speaker, and blogger.
With more than 20 years of experience in development and testing, Eran empowers clients to create products that their customers love, igniting real results for their companies.