Beware of the Lotus Eaters: Exploratory Testing

Drawing on the Greek myth of the lotus eaters, Anne-Marie Charrett warns testers to be wary of savoring early success too much after finding high-impact bugs.


As in many professions, the lure of “low-hanging fruit” is strong in software testing. In this case, the fruit is finding high-impact bugs quickly.
Exploratory Testing (ET) combined with Risk-Based Testing is a useful strategy for finding these bugs. Of course, ET can deliver a whole lot more than that, but the opportunity to demonstrate immediate impact is often a compelling reason to use it.

Testers are not the only ones who like this fruit; the rest of the team, project managers in particular, welcome it too. Bugs found early in the testing phase are, in general, good news for everyone.

Consequently, exploratory testers often tailor their strategies to find and deliver these types of bugs. This means cheap and easy tests are run first. There is nothing wrong with this; it’s a good and effective strategy. There’s nothing like a few serious bugs at the start of testing to shake a manager out of a false sense of security.
It becomes a problem, though, when this is the only model used to perform testing. Testers eat the lotus flower of early success, are seduced and satiated by its bewitching nectar, and stop testing. Any inner doubts are squashed with arguments such as “there is no such thing as 100% coverage” or “exhaustive testing is expensive and impractical, so why try?”

This is especially true when testers face complex, hard-to-understand software. It can be a big problem, because Exploratory Testing is tester-centric, putting the tester in control of how much testing to perform. In general, few team members will object to a decision to stop testing and jump ship.
So, testing is declared “Done Enough.” “Done Enough” testing is the poor cousin of “Good Enough” testing: it stops when the tester has had enough. Some bugs have been found, and any further testing requires imagination, technical expertise, or plain hard work on the tester’s part. “Done Enough” testing is bad, folks. It reflects poorly on test teams and does little to educate the wider community on the benefits of Exploratory Testing.
Exploratory testers (well, any tester for that matter) need to be aware of the lotus flower and continue to test. And while testers may not be able to perform exhaustive testing, that shouldn’t preclude extensive testing.

Extensive testing goes beyond early success. Testers don’t dwell on the satisfaction of finding bugs early; they test beyond that. Project managers might be reasonably content (yes, they may have been seduced by the lotus flower, too), but testers need to wake up and keep testing until testing is “Good Enough.”
It’s more expensive to continue testing, and harder to achieve at a reasonable price. This is where automation and tools come in. A good toolsmith, applying tools effectively, can reduce the cost of testing and allow more testing to take place. The result is a better standard of “Good Enough.”
I had firsthand experience of the importance of tools in exploratory testing.

Eusebiu Blindu, a Czech tester, recently came up with an interesting online challenge: a testing exercise in finding bugs and patterns. The application drew a polygon whose number of sides was determined by a number entered into a field. You can see it here: http://www.testalways.com/2010/07/05/find-bugs-and-patterns/

There was no indication of the limit of numbers this field could take. Since testing every number seemed beyond my capabilities, I decided to test using a variety of them: 0 through 10, then a jump to 15, a few random numbers up to 99, then 100, 999, and 1000, and finally a couple of negative numbers to see what would happen. I noticed some relationships in the colors and a few irregularities, but beyond that nothing stood out.
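For illustration only, here is roughly what that hand-picked sample looks like as a quick Python sketch. The random picks stand in for the “few random numbers” above; the exact values I used are not recorded, so treat them as placeholders.

    # A rough reconstruction of the sampling strategy described above:
    # small values, one jump, a few random mid-range values,
    # round-number boundaries, and a couple of negatives.
    import random

    sample = list(range(0, 11))                # 0 through 10
    sample.append(15)                          # the jump to 15
    sample += random.sample(range(16, 99), 3)  # a few random numbers below 99
    sample += [99, 100, 999, 1000]             # round-number boundaries
    sample += [-1, -7]                         # a couple of negatives
    print(sample)

A couple of dozen inputs out of an unbounded range: a perfectly reasonable start, but, as it turned out, nowhere near enough.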

So what did I do? Did I read books to find out more? Did I seek outside knowledge, ask someone, or question my approach? Did I try to change my model? Did I examine my assumptions?

No. Instead, I limited my testing. I put the puzzle aside. I decided I had “Done Enough.”

Until I saw the following tweet:

@jamesmarcusbach “A good exercise for tools. RT @charrett: Eusebiu sets a challenge. Anyone care to answer the call?”

I was surprised; I didn’t think this puzzle merited a tool. It was a polygon puzzle with only one input. What was the benefit of automating one field? And anyway, how could you automate the testing of something like this?

I contacted James Bach on Skype and asked him what he meant by the tweet. We had a coaching session on the topic. Incidentally, James and I do a lot of IM coaching. In fact, we’re in the process of writing a book on the subject.

James explained to me that he had tested 1,200 inputs. I was impressed; regardless of the time it took to enter so many numbers, how could he remember the differences between each of the polygons?

It turned out his solution was to write a Perl script that automatically entered a number, took a snapshot of the result, and saved it to a directory. This was followed by some deft blink testing. I followed his approach, and very quickly patterns and bugs became apparent.
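James’s script was in Perl; I haven’t reproduced it here. As a minimal sketch of the same snapshot-per-input idea, here is a version in Python using Selenium WebDriver. Treat it as an assumption-laden illustration, not his implementation: the element locator (“sides”) and the way the page accepts input are guesses that would need checking against the real puzzle page.

    # A minimal sketch of the snapshot-per-input idea, assuming Selenium
    # WebDriver is installed (pip install selenium) and geckodriver is on
    # the PATH. The By.NAME "sides" locator is hypothetical.
    from pathlib import Path
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    URL = "http://www.testalways.com/2010/07/05/find-bugs-and-patterns/"
    out_dir = Path("snapshots")
    out_dir.mkdir(exist_ok=True)

    driver = webdriver.Firefox()
    try:
        for n in range(1, 1201):  # the ~1,200 inputs mentioned above
            driver.get(URL)
            field = driver.find_element(By.NAME, "sides")  # hypothetical locator
            field.clear()
            field.send_keys(str(n))
            field.submit()
            # One image per input. Flipping rapidly through the saved
            # images afterwards is the "blink testing" step: differences
            # between consecutive polygons pop out to the eye.
            driver.save_screenshot(str(out_dir / f"polygon_{n:04d}.png"))
    finally:
        driver.quit()

The point isn’t the particular tool; it’s that a few lines of scripting turn 1,200 tedious manual inputs into a directory of images you can blink through in minutes.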

I realised how limiting my approach to testing was in this scenario. When faced with a problem beyond my immediate capability, instead of figuring out how to fix the problem, I narrowed my model.

I allowed myself to be seduced by the deadly combination of early success and fear of complexity.

I reduced the scope because entering a large number of inputs seemed too complex. Rather than solve that problem, I decided to test less. This was a major flaw in my testing strategy.

The lesson I learned was not to be limited by what seems impossible or hard. If something is not possible (or too tedious) to achieve through manual means, then find another way of doing it. In this case, automate it!

Testers need to suspend their belief in the impossible and put aside skewed views of what can and cannot be done. They need to shine torches not only where others don’t want to look, but also where we fear to tread.

Beware the lotus eaters!



Anne-Marie Charrett

Anne-Marie Charrett is a test consultant and runs her own company, Testing Times. An electronic engineer by trade, software testing chose her when, in 1990, she started conformance testing against European standards. She was hooked and has been testing ever since. Anne-Marie blogs at http://mavericktester.com and her Twitter ID is @charrett.

