What not to test in your two-week sprint

Mary Thorn, President, Mary Thorn Consulting, LLC

Sometimes testers allow themselves to drown in work. During retrospectives, they complain that they just don't have enough time to test everything, so they work overtime during the last week of a sprint to ensure that they accomplish the definition of "done."

In one case, a tester told me she worked 20 hours the first week of a two-week sprint and 60 hours the second. This is the wrong approach. Risk-based testing starts with the realization that you can't test everything—ever!

Always test the most critical areas for the business, including technically complicated items, in each sprint. Then be transparent as a tester. Tell your team what you will be testing—and what you will not be testing—and why.

Then ask if everyone is comfortable with that. If they think the risk is too high for some ideas you're not testing, then the team needs to assign other members to test those things.

So what should you not test? The best risk-prevention technique is test automation. However, automation skills are sometimes hard to find, and it takes a while both to teach them and to convince developers that they can pick up these tasks.

So, until your team either learns those skills or opens itself up to the mindset that the entire team owns quality, here are three risk-mitigation techniques to teach your team to test smarter, not harder.


1. Just-in-time testing

I was trained by Rob Sabourin, who specializes in just-in-time testing (JIT)—risk-based testing that happens just when it's needed. This technique is as easy to use as an Excel spreadsheet.

List all the test ideas for your sprint's user stories. Then discuss the risk factor for each idea with the product owner and developer. Based on that risk factor, test the highest-priority ideas that you have capacity for; do not test the lower- and medium-risk ideas.
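That spreadsheet can be sketched in code. In this minimal example (the story names, risk scores, hour estimates, and 36-hour capacity are all hypothetical, not from the article), each idea is scored as likelihood × impact, and the available testing hours are filled with the riskiest ideas first:

```python
# Just-in-time (JIT) risk prioritization: a minimal sketch.
# All names, scores, and the capacity figure are made-up examples.

def prioritize(ideas, capacity_hours):
    """Rank test ideas by risk (likelihood * impact) and keep the
    highest-risk ideas that fit within the available testing hours."""
    ranked = sorted(ideas, key=lambda i: i["likelihood"] * i["impact"],
                    reverse=True)
    selected, hours_left = [], capacity_hours
    for idea in ranked:
        if idea["hours"] <= hours_left:
            selected.append(idea["name"])
            hours_left -= idea["hours"]
    return selected

ideas = [
    {"name": "login flow",      "likelihood": 5, "impact": 5, "hours": 16},
    {"name": "report export",   "likelihood": 2, "impact": 3, "hours": 12},
    {"name": "payment posting", "likelihood": 4, "impact": 5, "hours": 20},
    {"name": "tooltip styling", "likelihood": 1, "impact": 1, "hours": 4},
]

print(prioritize(ideas, capacity_hours=36))
# → ['login flow', 'payment posting']
```

The ideas that don't make the cut are exactly what you tell the team you will not be testing—and why.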

2. Pareto testing

I use Pareto testing to decide what not to test. According to Wikipedia, "The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes."

This holds true for most software applications, where 80% of the users use only 20% of the application, or where 80% of the defects occur in the same 20% of the code. Using this rule, you prioritize the highest-risk areas and skip the areas that are rarely used.
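A quick way to apply the rule is to rank modules by defect count and find the "vital few" that account for 80% of the defects. In this sketch the module names and defect counts are invented for illustration:

```python
# Pareto analysis of defect data: a minimal sketch with made-up numbers.

def vital_few(defects_by_module, threshold=0.80):
    """Return the smallest set of modules, highest defect counts first,
    that accounts for at least `threshold` of all recorded defects."""
    total = sum(defects_by_module.values())
    ranked = sorted(defects_by_module.items(), key=lambda kv: kv[1],
                    reverse=True)
    chosen, covered = [], 0
    for module, count in ranked:
        if covered / total >= threshold:
            break
        chosen.append(module)
        covered += count
    return chosen

defects = {"billing": 60, "auth": 25, "search": 4, "reports": 3,
           "settings": 3, "help": 2, "about": 2, "tooltips": 1}

print(vital_few(defects))
# → ['billing', 'auth']  (2 of 8 modules hold 85% of the defects)
```

Those two modules are where your test effort goes; the long tail is what you consciously choose not to test.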


3. All-pairs testing

All-pairs, or pairwise, testing is a combinatorial method of software testing. For each pair of input parameters to a system—typically, a software algorithm—the technique tests all possible discrete combinations of those parameters.

When I tested financial software, with all the complicated business logic it involves, I used all-pairs to help me reduce the possible testing combinations to a set that was logical and that I could complete in a two-week sprint.
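One common way to build such a reduced set is a greedy pairwise generator: repeatedly pick the candidate test that covers the most not-yet-covered parameter-value pairs. The parameters below (account type, currency, posting method) are hypothetical stand-ins for financial-software inputs, not from any real system:

```python
# Greedy all-pairs generation: a minimal sketch with invented parameters.
from itertools import combinations, product

def pairwise_suite(parameters):
    """Pick tests from the full cartesian product, each time taking the
    candidate that covers the most still-uncovered value pairs."""
    names = list(parameters)
    # Every (param, value) pair that must co-occur in at least one test.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    candidates = [dict(zip(names, values))
                  for values in product(*parameters.values())]

    def pairs_of(test):
        return {((a, test[a]), (b, test[b]))
                for a, b in combinations(names, 2)}

    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

params = {
    "account":  ["checking", "savings", "brokerage"],
    "currency": ["USD", "EUR"],
    "posting":  ["batch", "realtime"],
}
suite = pairwise_suite(params)
print(f"{len(suite)} tests instead of {3 * 2 * 2} full combinations")
```

The greedy approach is not guaranteed to be minimal, but it reliably covers every value pair in far fewer tests than the full cartesian product—which is exactly the reduction that makes a two-week sprint feasible.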

Use risk-based techniques now, automate later

So why do testers enable the bad behavior of "scrummerfall," or fail to push for whole-team ownership of quality? They do it because no one has taught them another way.

As a test leader and agile coach, one of the first things I do is review the flow of work that testers have in their sprints. To do this, I use a cumulative flow diagram, a tool that helps you see scrummerfall behavior and the bottlenecks in your process.

Most of the time, testing tasks at the end of a sprint are the bottleneck. This is by definition scrummerfall—mini-waterfall cycles in a sprint, where the developers hand off their stories for testing halfway through or at the end of a sprint.

At the end of the day, the entire team—not just the testers—is responsible for the quality of the potentially shippable increment. You will have to implement some risk-based techniques until your test automation strategy rolls out.

So pull out these "old-fashioned" risk-based mitigation techniques and apply them to your two-week sprints. They will at least help you tread water until you can start to swim.

Mary Thorn presented "Help! I am Drowning In 2 Week Sprints....Please Tell Me What NOT to Test!" on October 3, 2018, at the software testing conference STARWEST.