
STAREAST preview: 3 testing trends to watch

Matthew Heusser, Managing Consultant, Excelon Development

Alan Page, who wrote How We Test Software at Microsoft and hosts a podcast about testing, once famously observed that testing, as a field, seemed stagnant. He proved his point by reviewing the sessions from a conference and showing that they still held up today—even though that conference had taken place 10 years earlier.

Today, things in the software testing field are moving along quickly. And those major changes are evident at the upcoming STAREAST 2019.

Emerging themes this year include the rise of artificial intelligence (AI), self-service test environments, and a shift from testers as a role to testing as an activity. If you can't make STAREAST but want to know the takeaways, or if you just want to plan which sessions to attend, this conference preview is for you.


Smarter test tools, smarter testing

The smartest testing strategy might just be getting the software to test itself for you. While that may sound like magical thinking, Jason Arbon, CEO of test.ai, doesn't think so. His argument: The combination of technology change, investments in AI, and the reality that AI abstracts and builds on itself make the coming test singularity a question of when, not if.

Right now Arbon’s company is teaching software to do things like recognize a shopping cart icon, learn general flows, and adapt the flow of testing as the app changes. "This is the year that AI hits the streets," Arbon said. If you’d like to hear more from him on AI, you can listen to his interview on the Testing Show Podcast.

Angie Jones, a senior developer advocate at Applitools who specializes in test automation strategies and techniques, is running three sessions at STAREAST. These include a half-day session on having the computer do visual validation: smarter visual comparison of portions of a screen. The tool vendor incorporates machine learning so that only changes perceptible to a human are flagged, while invisible rendering, size, and position differences are ignored. You can combine this with a human who gives a quick up-or-down vote on whether each change was a defect or just a change.

Also known as "cyborg testing," this approach allows the computer to do more while keeping human judgment in the driver's seat. While that isn't a fully self-testing system, it does leverage AI, and it exists today. Speaking of AI, Jones recently published a deep-dive case study on how to test a recommendation engine.
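The core of that kind of perceptual comparison can be sketched in a few lines. This is not Applitools' actual algorithm—it's a minimal illustration of the idea, with images reduced to plain 2D grids of grayscale values so the example has no dependencies: flag a change only when enough pixels differ by more than a small tolerance, so invisible rendering noise is ignored.

```python
def images_differ(baseline, candidate, pixel_tolerance=8, max_changed_ratio=0.01):
    """Return True when the change would likely be perceptible to a human."""
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_a, row_b in zip(baseline, candidate)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > pixel_tolerance  # ignore sub-perceptual differences
    )
    return (changed / total) > max_changed_ratio

# Anti-aliasing jitter of +/-2 on every pixel is invisible, so no flag:
base = [[100] * 10 for _ in range(10)]
jitter = [[102] * 10 for _ in range(10)]
print(images_differ(base, jitter))   # False

# A 3x3 block flipping to white is a real visual change:
changed_img = [row[:] for row in base]
for r in range(3):
    for c in range(3):
        changed_img[r][c] = 255
print(images_differ(base, changed_img))  # True
```

The "cyborg" part is what happens after the flag: a human reviews only the perceptible changes and votes defect or intentional.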

[ Angie Jones: How to tidy up your test code, Marie Kondo style ]

On the theme of making testing smarter, Paul Grizzafi, a principal automation architect at Magenic, will be speaking on adding randomization to automated testing. He'll start by explaining fuzzing, a way of throwing random data at applications, often used for security.
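The fuzzing loop itself is simple; the sketch below is my own toy illustration (the parser and its bug are invented), not Grizzafi's material. Random inputs are thrown at a function, and any input that raises an exception is recorded as a potential bug report.

```python
import random
import string

def parse_quantity(text):
    """Example system under test: parse strings like '12 kg'."""
    number, unit = text.split(" ")      # crashes on input without exactly one space
    return int(number), unit

def fuzz(target, runs=1000, seed=42):
    rng = random.Random(seed)           # fixed seed: failures are reproducible
    failures = []
    for _ in range(runs):
        length = rng.randint(0, 12)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except ValueError:
            failures.append(data)       # a crashing input becomes a bug report
    return failures

crashes = fuzz(parse_quantity)
print(f"{len(crashes)} of 1000 random inputs crashed the parser")
```

Production fuzzers such as AFL or libFuzzer add coverage guidance and input mutation, but this loop is the essence of the technique.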

Grizzafi is also covering high-volume automated testing, which is a family of testing techniques that allows a tester to execute and evaluate arbitrarily many tests. This can help find memory leaks or hidden issues with application state.
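A high-volume test can be as simple as running one operation thousands of times and checking an invariant after each run; bugs that only appear from accumulated state surface this way even though any single iteration looks fine. The scenario below is invented for illustration:

```python
class SessionPool:
    """Toy system under test: hands out sessions and recycles them on close."""
    def __init__(self):
        self.active = []
        self.closes = 0

    def open_session(self):
        self.active.append(object())

    def close_session(self):
        self.closes += 1
        if self.closes % 100 != 0:      # bug: every 100th close leaks a session
            self.active.pop()

pool = SessionPool()
leak_detected_at = None
for i in range(10_000):
    pool.open_session()
    pool.close_session()
    if len(pool.active) > 50:           # invariant: active sessions stay bounded
        leak_detected_at = i
        break

print(f"leak detected after iteration {leak_detected_at}")
```

A functional test running each operation once would pass every time; only volume exposes the slow leak.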

Finally, Grizzafi will introduce his link-clicker concept. This is a tiny application that works through a website, clicking on random links over an extended period of time. A massive number of random walks through the application is likely to discover an "unplanned route": an unexpected path that the planned tests never exercise and that may hide defects.

If links lead outside the site's domain, the checker can still make sure they are valid, reporting when a destination goes down and a link now leads to a 404 or 500 error. Grizzafi will explain the pseudo-code, making it possible for anyone to write the tool in a few hundred lines of Python or Ruby.
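A random walker along those lines might look like the sketch below. The details are my own, not Grizzafi's code: the page fetcher is injected so the walker can be exercised here against a fake in-memory site, and swapping in a real HTTP fetch pointed at a test environment yields the production version.

```python
import random

def random_walk(fetch, start, clicks=500, seed=7):
    """Follow random links from `start`, logging any link that returns an error."""
    rng = random.Random(seed)
    broken, url = [], start
    for _ in range(clicks):
        status, links = fetch(url)
        if status >= 400:
            broken.append((url, status))   # 404/500: report, restart at home
            url = start
            continue
        if not links:                      # dead end: restart the walk
            url = start
            continue
        url = rng.choice(links)            # "click" a random link
    return broken

# Fake site: each page maps to (HTTP status, outgoing links).
SITE = {
    "/":      (200, ["/shop", "/about"]),
    "/shop":  (200, ["/", "/cart", "/gone"]),
    "/about": (200, ["/"]),
    "/cart":  (200, ["/shop"]),
    "/gone":  (404, []),                   # the "unplanned route"
}
broken = random_walk(lambda u: SITE[u], "/")
print(len(broken), "broken-link hits found")
```

Left running overnight against a real site, the same loop accumulates exactly the kind of unplanned-route report Grizzafi describes.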

[ Paul Grizzafi: Kill more bugs! Add randomization to your web testing ]

True self-service test environments

Ten years ago (and at some companies, even today), testers might spend half their time waiting for a specific feature to be added to "the test environment." Much of the time, the build wouldn't have the feature, or would have the wrong version of it.

Virtual machines and containers changed all that. Suddenly every build could have a test environment. Or two, or 10, all in exchange for a small amount of computing power. Melissa Benua, a senior technical lead at mParticle, said that this ability can reduce cycle time (the cost of a regression test cycle) by an order of magnitude. That's roughly a 90% decrease in time and effort.

Her presentation on continuous testing using containers will teach you how to use containers to accelerate testing even on legacy projects, where isolating code into a container could be challenging.

[ Melissa Benua: Doing continuous testing? Use containers. ]

That also means that every build can run a battery of checks before it is explored by a human. Benua suggests that the selection of tests running in that pipeline and how they are designed need to change to make them effective. Her other half-day tutorial, on creating a test design for a fully automated build architecture, will show you how to do it.

"Grab an OS image, grab your code—whatever you need to deploy—put your bits on it and see what happens. It can be difficult to get done in production, but to replace all your test environments is easy...er."
Melissa Benua, senior technical lead, mParticle

Once those containers move to the cloud, they need something to run on. Kubernetes, a container orchestration platform, can provide that foundation. Glenn Buckholz, a technical manager at Coveros, is presenting on leveraging Kubernetes for testing.

[ Glenn Buckholz: A tester's guide to leveraging Kubernetes ]

He'll focus on using Kubernetes to spin up the environments you want—operating system, browser, device simulator, and so on—then having your tooling run against them at the same time, in parallel. Once they're implemented, rerunning the combinations is a matter of running a script or making a few mouse clicks.
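The fan-out pattern behind "run every combination in parallel" can be sketched briefly. The environment names and the runner below are invented for illustration: in a real Kubernetes setup each call would target a freshly provisioned pod, while here the runner is a stub so the pattern itself stays runnable.

```python
from concurrent.futures import ThreadPoolExecutor

# Define the OS/browser matrix once; rerunning it is then just rerunning the script.
COMBINATIONS = [
    {"os": "linux", "browser": "chrome"},
    {"os": "linux", "browser": "firefox"},
    {"os": "windows", "browser": "edge"},
]

def run_suite(env):
    # Stub: a real runner would point the test tooling at a container or pod
    # spun up for this combination and return its actual pass/fail result.
    return {"env": env, "passed": True}

# Fan out: the same suite runs against every combination at the same time.
with ThreadPoolExecutor(max_workers=len(COMBINATIONS)) as pool:
    results = list(pool.map(run_suite, COMBINATIONS))

for r in results:
    print(r["env"]["os"], r["env"]["browser"], "->", "PASS" if r["passed"] else "FAIL")
```

Adding a new OS or browser becomes a one-line change to the matrix rather than a new manual test cycle.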


The move away from the tester role

Alberto Savoia's famous 2011 "Testing Is Dead" presentation is closing in on the decade mark. Despite that common refrain, attendance at STAREAST, the world's largest software testing event, continues to rise. Not only is the conference growing, but a host of smaller competitor events are cropping up, including Agile Testing Days USA, Test Bash San Francisco, the DevOps Days conferences, and many small, regional events.

Several software vendors also hold their own events. The challenge today is not finding a testing conference but deciding which to attend.

The second surprise is the titles of the speakers. At the first STAREAST conference in 2004, the job titles of the keynote speakers were test architect, director of quality, professor of software engineering, and testing consultant. This year's keynotes include Jeffrey Payne, CEO of Coveros, Jason Arbon, CEO of Test.ai, and Tania Katan, creator of the #ItWasNeverADress campaign.

Speakers Lisa Crispin and Angie Jones are no longer super-testers but have crossed over, working as developer evangelists for test-tooling companies.

Testing is more alive than ever; what's changing is who does it. Whole-team testing changes the game such that everyone on the team does a little bit of testing every day. That makes it an activity of interest for senior management.

If you attend STAREAST this month, you'll learn about self-service test environments, AI testing, and whole-team testing. Given the expanded interest in testing, the person in the seat next to you might just be a CEO.

This year's STAREAST software testing conference takes place in Orlando, Florida, from April 28 to May 3. TechBeacon readers save $200 on registration fees by using promo code SECM. Can't make it? Register for free for STAREAST Virtual to stream select presentations.
