Get ready to challenge everything you think you know about testing from those who wrote the books on modern quality.

Michael Bolton, James Bach, and Eran Kinsbruner have helped define the industry as we know it. Hear what they have to say about timely & timeless topics: AI, tooling, data, continuous testing, shift left, and much more.

Watch this panel webinar with QA industry oracles Michael Bolton, Eran Kinsbruner, and James Bach as they discuss the current state of testing and automation.  

  • What qualifies as modern testing.  
  • Criteria for supporting testers with tools. 
  • The state of agile & continuous testing. 
  • How to use data, AI, and more.  
  • What to prepare for in the future. 

Enjoy a frank discussion on timely and timeless topics in the world of testing. 

Try Perfecto Today

Try continuous testing in the cloud with Perfecto. Get started with your free trial today.

Try Perfecto

Your Good Testing Questions

Answered by Eran Kinsbruner

1: Are dedicated testers a dying field?

No, not at all. Dedicated testers are valid, and will continue to be valid. Who else will be the voice of the customer? Who else will bring unique creativity, problem-solving skills, and an exploratory testing mindset? 
 
But, like any career, testers would be wise to keep adapting and flexing to change, such as new ways of testing and releasing. They need to integrate their work into the new Agile and DevOps processes. They need to be part of the team from the early stages. And they need to use tools that closely align with their objectives, so they don't divert the team from its end goal. 
 
At the heart of this question is another, more useful, question: How do testers continue to adapt and provide value?

Read the full discussion here.  

2: How do I introduce automated testing into my company?

Want to implement test automation but aren’t sure where to begin? Here are four steps to get started.  

  1. Get management on board  

The best way to introduce automated testing to decision-makers is by creating an end goal that is relevant to the business. The goal should solve a problem, fit the organization’s profile, and be measurable. 

  2. Find the right tool 

There are a lot of tools out there, but you must choose the right one for your team. The right tool is within budget, supports the technologies used in your app, and has good reporting capabilities.  

  3. Start small 

Pick a starter project with a small number of test cases and use it as a pilot. Automate the first few tests, integrate them into CI so the team sees the value quickly, and get going. (A minimal sketch of such a starter test appears after these steps.)  

  4. Keep up 

Maintaining your automated tests is the best way to maximize the value of your tooling. You can’t just implement automated testing and walk away. It’s an ongoing investment. It requires maintenance, debugging when tests fail, and adopting the new capabilities the automation framework introduces over time.
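
To make step 3 concrete, here is a minimal sketch of what a first automated check might look like, assuming pytest and Selenium WebDriver with Chrome; the URL and expected title are placeholders for your own application under test.

```python
# A minimal first automated check: pytest + Selenium WebDriver.
# The URL and expected title below are placeholders -- swap in your own app.
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Headless Chrome keeps the test runnable inside a CI container.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    browser = webdriver.Chrome(options=options)
    yield browser
    browser.quit()


def test_home_page_loads(driver):
    driver.get("https://example.com")          # placeholder URL
    assert "Example Domain" in driver.title    # smoke-level assertion
```

Running this with `pytest` on every commit in your CI pipeline creates the kind of small, visible feedback loop that builds momentum for the pilot.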

Read the full discussion here. 

3: How will AI and traditional QA go together?

Let’s be clear: AI will not replace software testers, but it can make us better and make our jobs easier.   

AI is a great, advanced option for solving unique and challenging problems. These might include reducing the time it takes to scan through massive logs and test data, or self-maintaining tests by automatically fixing issues like element locator changes. AI makes it much easier to scale and achieve these kinds of goals.  

At the same time, AI has its limits. 

While AI has improved testing, there are many problems that must be solved with the human mind. Autonomous processes still need human oversight, for example. 

Testers should always be suspicious and curious about the platform that is under test. They should challenge the AI components that are part of the scope for testing.  

Remember, AI depends on high-quality, maintainable data. If the data is not solid, it puts the decision-making process at risk. And as AI merges into the testing space, we will be introduced to new types of defects, such as bias, ethical issues, and clustering errors.  

In the end, traditional testing will continue to lead the testing effort. AI will simply add some advanced capabilities to support QA teams; that’s it. 

Read the full discussion here

4: How can we be certain that a tool is right for our problem?

Finding the right tool is key to any project’s success. To get it right, there has to be a strategy.  

A team should prep a list of requirements they want in a testing tool, including essentials and luxuries. Some of these requirements might include:  

1. Development languages supported by the tool 
2. Ability to automate certain advanced scenarios 
3. Reporting capabilities 
4. Scaling and parallel testing 
5. The support community behind the tool 

Based on their list, the team should decide if a tool meets part or all of their “must-haves.” 
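
One lightweight way to make that decision explicit is a simple weighted scoring sheet. The sketch below is purely illustrative: the criteria mirror the list above, but the weights, tools, and scores are hypothetical.

```python
# Hypothetical scoring sheet: weight each requirement, score each candidate
# tool from 0-5, and compare totals. All numbers are made up for illustration.
WEIGHTS = {
    "language_support": 5,    # must-have
    "advanced_scenarios": 4,
    "reporting": 3,
    "scaling_parallel": 3,
    "community": 2,           # nice-to-have
}

candidates = {
    "Tool A": {"language_support": 5, "advanced_scenarios": 3, "reporting": 4, "scaling_parallel": 2, "community": 5},
    "Tool B": {"language_support": 4, "advanced_scenarios": 5, "reporting": 3, "scaling_parallel": 5, "community": 3},
}

for name, scores in candidates.items():
    total = sum(weight * scores[criterion] for criterion, weight in WEIGHTS.items())
    print(f"{name}: {total} / {5 * sum(WEIGHTS.values())}")
```

A scorecard like this also gives you a baseline to re-run later, which supports the continuous validation of tools described below.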

Keep in mind that the criteria may change as the product matures and the requirements for testing advance. That’s why continuous validation of tools ensures your team gets the maximum value from the available technology. 

Be curious, learn more, and stay open to new testing tools. 

Read the full discussion here. 

5: What is the best ratio for testers to developers?

It depends.  

The average market number is about 1-2 testers per 5 developers.

But this number won’t fit every team.  

The number isn’t the driver here; the project is.  

Hiring and training new skilled testers isn’t easy. It takes time and should be based on the product roadmap and quality requirements.
  
To ensure a product team has enough testers, organizations need to figure out their:  

1. Objectives 
2. Budget  
3. Automation coverage goals 
4. Timeline (release velocity) 
5. Skillset and application types 
 
I recommend starting with 1-2 testers for a new project. You can always grow as needed.  
 
Shift your focus from the developer-to-tester ratio to test automation. If your testers are efficient, then you’ll know when you need to add more.  
 
Is this a way to cut back on staff? No.  
 
It’s a way to make your team more productive and your product more reliable.  

Read the full discussion here. 

6: In this age of test automation, why isn't there dialogue about who accepts the risk of faulty software? Isn't testing about risk mitigation?

It’s not about pointing fingers. It doesn’t matter if you’re a tester or a developer.  

In the age of Agile and DevOps, quality is the responsibility of the entire team.  

Every individual is responsible for the product in a unique way.  

Yes. Testers must challenge the product so they can ensure it behaves as expected.  

Testing identifies software faults and mitigates risks to the business.

However, this isn’t the main objective of testing. 

As Michael Bolton writes, “The point of a demonstration is to confirm, to validate, to reassure. The point of a test is to challenge the product and discover things; to reveal problems, bugs, errors, limitations, inconsistencies, misunderstandings, missing features; to invalidate unwarranted assumptions.”  

The point being, testing is more than just confirming the product works and reducing risks.  

It’s about discovering new pitfalls in the software and exploring.  

So be curious, learn more, and stay open to new testing tools. 

Read the full discussion here. 

7: What are your thoughts on incorporating testing in agile development?

Testing is already part of Agile.  
  
Through methods like BDD, TDD, and ATDD, testing is done within the sprints themselves, and sometimes even earlier, in the design phases.  
  
Making testing continuous and reliable for each iteration is a big challenge. Testers must ensure the relevance of their test suite and close the gap between the last cycle and the current one. This way they can make new discoveries with each iteration and increase overall software quality.  
  
Automation within agile supports continuous testing and faster feedback.  
  
Also, remember that the types of testing are not just unit, API, or functional, but a mix of all. With every code change, coverage becomes better and the findings more meaningful.  

Read the full discussion here. 

8: What do you say to companies that haven't embraced test automation?

Test automation isn't a matter of good or bad. It’s a matter of software speed.  

Organizations that invest in automated testing want to speed up their release processes. Test automation allows for faster releases. It also adds a layer of robustness, scalability, and reliability, since manual testing can be error prone.  

I would turn the question around and ask instead, “What are the organization’s goals for testing, release cadence, and test coverage?”  

Over the past 2 decades, I’ve learned that there are 3 things driving testing and test automation: time, cost, and quality criteria. 

An organization needs to define its top priority and go from there.  

For example, if the cost of testing is too high because it is 100% manual, then automating those cases could save the cost of manual labor, not to mention ease some of the time pressure and shorten feedback loops. 
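
As a back-of-the-envelope illustration of that trade-off, consider the sketch below; the numbers are entirely hypothetical, so plug in your own cycle length, suite size, and rates.

```python
# Back-of-the-envelope cost comparison with purely hypothetical numbers.
manual_hours_per_cycle = 40     # hours to run the regression suite by hand
cycles_per_year = 24            # e.g., a release every two weeks
hourly_cost = 50                # fully loaded tester cost (hypothetical)

automation_build_hours = 300    # one-time effort to automate the suite
automation_upkeep_hours = 100   # yearly maintenance of the automated tests

manual_cost = manual_hours_per_cycle * cycles_per_year * hourly_cost
automated_cost_year_one = (automation_build_hours + automation_upkeep_hours) * hourly_cost

print(f"Manual regression: ${manual_cost:,} per year")
print(f"Automated regression: ${automated_cost_year_one:,} in year one, less after")
```

Even rough numbers like these make the time, cost, and quality conversation with management much more concrete.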

Read the full discussion here. 

9: Is testing considered a science?

In my opinion, software development and testing aren’t sciences. Yes, they both depend on science and technology. But on their own, neither one is a science.  

Read the full discussion here

10: A team applies a post-release hotfix to resolve an issue. But the hotfix results in more bugs, so management asks the testers, "What can we (dev and QA) do to improve and prevent this from happening again?" How would a good tester respond?

A good tester’s answer should:  

  • Be based on data.  
  • Suggest ways to close the gaps within the testing process and coverage. 
  • Show that the team is learning from this incident.  
  • Give management confidence that the team is ready for the future.  

Hotfixes happen, and bugs constantly escape to production.  

Some are more critical and impactful than others. That’s just the reality in software development.  

Developers and testers must cover as much ground as they can within the sprint. Then they must report their discoveries to a central repository. This way there’s a learning system for the future, as well as a data-driven decision-making process. Based on this, the team can decide whether or not to release the new version.   

Regressions are not unique to just hotfix releases. They happen for many other reasons too.   

A good tester can learn from these incidents, identify the root cause, close gaps, and create patches for the future.  

Read the full discussion here. 

12: What are the best tools for ADA testing?

Testing accessibility is becoming more and more important for web and mobile apps. 

For web testing, I recommend AXE by Deque, as it provides great coverage. It’s the leading solution, and it’s open source. AXE integrates fully with the Selenium and Cypress frameworks.  
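
For teams already on Selenium, one way to try this is with the community axe-selenium-python bindings, which inject the axe-core script into the page under test. The sketch below is illustrative and assumes that package and a local Chrome driver are installed; the URL is a placeholder.

```python
# Illustrative accessibility scan: inject axe-core into a page via Selenium.
# Assumes `pip install selenium axe-selenium-python` and a local Chrome driver.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")   # placeholder URL
    axe = Axe(driver)
    axe.inject()                        # add the axe-core script to the page
    results = axe.run()                 # run the accessibility rule set
    violations = results["violations"]
    assert not violations, f"Found {len(violations)} accessibility violations"
finally:
    driver.quit()
```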

For mobile testing, there are native mobile app scanners for iOS and Android. Perfecto supports these scanners and allows them to run on top of Appium and the BDD open-source Quantum framework. 


Read the full discussion here

13: What is the difference between an SDET and a Tester? 

As we said in the webinar, testing is testing.  

It’s supposed to highlight new insights and risks within a company. 

Each tester has a different skill set. So, there are different “titles” associated with these personas.  

There are testers (SDETs) who test apps by writing test scenarios in code, such as Java. 

And there are testers who create and execute their tests manually, combined with exploratory testing (testers or business testers).  

It’s tempting to lump SDETs and Testers together, but they have different skills. Where are you in the world of SDETs and Testers? 

Let us know here

14: What are some of the challenges testers will face in the future? How can we overcome them and stay relevant and in the job market?

There is no doubt that testers are relevant today and will be in the future. But to stay relevant, testers should adopt new strategies and tools.

In the future, testers will face a variety of challenges. There will be more complex environments, more test scenarios, and more test data.

These challenges require better collaboration between testers and developers. 

As applications become more complex, so does the testing associated with them. Testers should break down the barriers between their practices and developers' practices to adapt to Agile and DevOps cycles. 

And as organizations mature and scale, the time per cycle shrinks too. Testers should perform functional testing alongside other methods, like performance and security testing. Doing this expedites feedback and identifies issues as early as possible. 

In short, testers must be more efficient, test earlier, and join development earlier in the cycle.

What's the biggest challenge testers will face in 2022? Let us know what you think here.  

15: Why aren’t there any college courses or degrees for testing like there are for programming? 

There are some great online courses and universities for software testers. Here’s my list:  

BlazeMeter University: https://lnkd.in/gC_GabK

Perfecto University: https://lnkd.in/euQ69Aqb 

Test Automation University by Applitools: https://lnkd.in/ed6PQrY 

I believe that courses like these must be updated frequently to stand the test of time.  Technologies evolve and the people getting these certifications should be trained with the most up-to-date material.  

What are your favorite resources for testing? Let us know here. 

16: What are a QA manager’s considerations for a good test case/scenario?

A good test:  

1. Provides stability and reliability across multiple runs and platforms.   
2. Covers current business flows and finds issues (e.g., regressions).  
3. Is fully optimized, following best practices and design patterns. 
4. Has a proper test environment and current test data to avoid false negatives.  
5. Is easy to understand and debug in case of failure or inconsistent results (e.g., produces proper logs and output messages).  
6. Keeps the end user in mind, so it stays as close as possible to the real user experience.   

In short, good test cases are predictable, reusable, and maintainable, and they produce clear results. They create a trustworthy testing environment and, ultimately, better collaboration between testers and developers. 
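
To make a few of those criteria concrete, here is a small, purely illustrative pytest sketch; the cart data and names are hypothetical, but it shows reusable setup, deterministic test data, and a failure message that is easy to debug (criteria 1, 4, and 5).

```python
# Illustrative only: reusable setup, deterministic data, and descriptive
# failure messages. The cart structure and values are hypothetical.
import pytest


@pytest.fixture
def checkout_cart():
    # Fresh, deterministic test data per test helps avoid false negatives.
    return {"items": [{"sku": "ABC-123", "qty": 2, "unit_price": 9.99}]}


def cart_total(cart):
    return sum(item["qty"] * item["unit_price"] for item in cart["items"])


def test_cart_total_matches_line_items(checkout_cart):
    expected = 19.98
    actual = round(cart_total(checkout_cart), 2)
    # A clear message makes failures and inconsistent results easy to debug.
    assert actual == expected, f"Cart total {actual} did not match expected {expected}"
```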
 
What else makes a good test case? Let me know here

17: Thoughts on no-code or low-code testing?

The short answer: If you have a good working code-based test suite, stick with it, and grow it. 

But if you’re frustrated with flaky and unreliable test automation, some of these new smart tools will give you a good head start.

The longer answer: The market is trending towards intelligent testing techniques. These trends include AI/ML in testing and low/no-code solutions. 

Open-source frameworks like Selenium, Cypress, and Playwright already have low/no-code versions of their tools, and this trend will continue to evolve. 

The idea behind these advancements is not to replace test engineers and business testers. It’s to help them ramp up testing and shorten their learning curve. 

These AI/ML and low/no-code tools also impact the stability and reliability of testing scenarios. 

Healenium, for example, brings self-healing abilities to Selenium test code. Other smart tools, like Perfecto Scriptless, are maturing to complement teams. 
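
To illustrate the idea behind self-healing (this is not Healenium’s actual API, just a minimal sketch of the concept), a test can try fallback locators when the primary one breaks after a UI change and log which one worked:

```python
# Minimal sketch of the self-healing idea: try fallback locators when the
# primary one no longer matches. Illustrative only; not Healenium's API.
from selenium.common.exceptions import NoSuchElementException


def find_with_fallbacks(driver, locators):
    """Try each (by, value) locator in order and return the first element found."""
    for by, value in locators:
        try:
            element = driver.find_element(by, value)
            print(f"Located element using {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")


# Hypothetical usage:
# from selenium.webdriver.common.by import By
# login_button = find_with_fallbacks(driver, [
#     (By.ID, "login-btn"),
#     (By.CSS_SELECTOR, "button[data-test='login']"),
#     (By.XPATH, "//button[contains(., 'Log in')]"),
# ])
```

Tools like Healenium take this further by reusing locator information from previous successful runs rather than relying on a hand-written fallback list.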

Remember! Adopting these smart tools should be done on a needs basis, not just because they’re trending. 

Here's more on that

18: Sometimes, management focuses too much on regression testing at the expense of exploratory testing. Developers test to prove it works. QA teams test to prove the product is broken. This distinction is being lost. What do you think about this?

As recommended by the old and famous testing pyramid, software testing teams must embrace all testing types to be successful.  
  
Functional, non-functional, API, exploratory, accessibility, and other testing types must be done earlier in software development through automation.  
  
If you can include the key test cases from each testing type and form a golden regression suite, you will be in a good place. You’ll be able to identify new bugs and expedite the overall release cycle through faster MTTD/MTTR (mean time to detect and repair).  
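
One simple way to carve out such a golden suite, shown here as an illustrative pytest sketch with a hypothetical custom marker, is to tag the key cases and run only those in the fast regression job:

```python
# Illustrative: tag key business flows with a custom "golden" marker and run
# only those in the fast regression job with `pytest -m golden`.
# (Register the marker, e.g. in pytest.ini: markers = golden: key regression cases)
import pytest


@pytest.mark.golden
def test_checkout_happy_path():
    ...  # key business flow kept in the golden regression suite


def test_rare_discount_edge_case():
    ...  # still valuable, but runs in the full nightly suite instead
```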
  
Combining test automation with expert-based exploratory testing is KEY for releasing good software. There is no replacement for the human eye.  
  
Remember, not all tests can or should be automated. A mix of automated regression tests, smart manual tests, and exploratory tests is perfect for great testing coverage.  
  
The key to blending exploratory and manual testing into the cycle? It starts with proper planning, communication, and trust. This enables efficient processes, so feedback reaches developers at the same time as automated testing runs. 

Last thing: Maintain your GOLDEN test suite to realize the value from your testing investment.  

See the discussion on social here

Related Reading >> What is Product Roadmapping? How to Make a Product Roadmap that Keeps Your Team on Track

Presenters

Eran Kinsbruner

DevOps Chief Evangelist & Sr. Director at Perforce Software, Perfecto

Eran Kinsbruner is a person overflowing with ideas and inspiration; beyond that, he makes them happen. He is a best-selling author, continuous-testing and DevOps thought leader, patent-holding inventor (test exclusion automated mechanisms for mobile J2ME testing), international speaker, and blogger.

With over 20 years of experience in development and testing, Eran empowers clients to create products that their customers love, igniting real results for their companies.

Michael Bolton

Lead Consultant, DevelopSense

Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn’t realize they could solve. In 2006, he became co-author (with James Bach) of Rapid Software Testing (RST), a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Since then, he has flown over a million miles to teach RST in 35 countries on six continents.

Michael has over 30 years of experience testing, developing, managing, and writing about software. For over 20 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company’s flagship products and directed project and testing teams both in-house and around the world.

Contact Michael at [email protected], on Twitter @michaelbolton, or through his website, http://www.developsense.com

James Bach

Founder & Principal Consultant, Satisfice, Inc.

James Bach is founder and principal consultant of Satisfice, Inc., a software testing and quality assurance company. In the 1980s, James cut his teeth as a programmer, tester, and SQA manager in Silicon Valley in the world of market-driven software development. For nearly fifteen years, he has traveled the world teaching rapid software testing skills and serving as an expert witness on court cases involving software testing. James is the author of Lessons Learned in Software Testing and Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success.