Jungle Testing

As I write this article I am sitting at a table at StarEast, one of the major testing conferences. As you would expect from a testing conference, much of the talk and discussion is about bugs and how to find them. What I have noticed in some of these discussions, however, is a lack of differentiation between types of bugs, a differentiation that I think is essential for testing success.

To set the stage I would like to distinguish between different categories of bugs (for a more extensive differentiation of bugs, see the very interesting presentation by Giri Vijayaraghavan and Cem Kaner, “Bug taxonomies: Use them to generate better tests”). The main categories of bugs I would like to introduce are:

  1. Coding bugs – things that were implemented differently than intended or specified.
  2. “Jungle bugs” – unexpected situations, not anticipated in the specifications and therefore not handled well in the code.

Coding bugs are commonly the more straightforward ones to find and fix. Good unit testing should catch many of them, and many tests can be designed directly from the available requirements or specifications.

The unexpected situations are harder. If they were not hard, the situations would not be “unexpected”. Examples in this category are unanticipated user actions (“a user would never do this”), a failing environment (an interrupted TCP/IP connection), unexpected data, and so on. Some of these can also be malicious, like the common buffer overflow trick, in which a hacker deliberately sends an extremely long value that overflows an internal buffer and, by overwriting a return address, redirects a function call to malicious code.
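
To illustrate the testing side of this, here is a minimal sketch in Python. Python itself is memory-safe, so this does not reproduce a real buffer overflow; it only shows the test idea: feed a deliberately extreme value and check for graceful rejection. The handler parse_username is a hypothetical stand-in for whatever input handler is under test.

    # Hypothetical handler: accepts names up to 64 characters.
    def parse_username(raw: str) -> str:
        if len(raw) > 64:
            raise ValueError("invalid data")
        return raw.strip()

    def test_oversized_input_is_rejected_gracefully():
        oversized = "A" * 100_000  # the deliberately extreme value
        try:
            parse_username(oversized)
        except ValueError:
            pass  # a clean rejection is the expected outcome
        else:
            raise AssertionError("oversized input was silently accepted")

    test_oversized_input_is_rejected_gracefully()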

A special category of coding bugs is what we could call “indirect bugs”: one part of a system has a coding bug that leaves a bad value in a table or a variable, and that value later causes a failure or crash in another part of the system. Even though the root cause is a coding bug, to the affected part of the system it is an unexpected situation. Whether this counts as a jungle bug would, in my view, depend on how the unexpected value or situation is handled. If it causes a crash I would consider it a jungle bug: it is better for code never to crash, regardless of what data is accidentally fed into it.
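
A minimal sketch of such an indirect bug, with hypothetical names: part A has a coding bug that lets a bad value into shared state, and part B either crashes on it (making it a jungle bug) or handles it defensively.

    orders = {}  # shared state between two parts of the system

    def record_order(order_id, quantity):
        # Part A: a coding bug lets a None quantity slip through unchecked.
        orders[order_id] = quantity

    def total_items_naive():
        # Part B, naive: raises TypeError if any quantity is None --
        # the unexpected value is not handled, so this part crashes.
        return sum(orders.values())

    def total_items_defensive():
        # Part B, defensive: never crashes, regardless of what data
        # was accidentally left in the shared table.
        return sum(q for q in orders.values() if isinstance(q, int))

    record_order(1, 3)
    record_order(2, None)           # the bad value enters here
    print(total_items_defensive())  # 3, no crash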

Can thinking about bugs this way help in finding and preventing them? My answer would be “yes”. It gives an extra dimension to your test design: not only would the designs for the two categories differ, but the steps you take to find the bugs can also differ.

The first suite of tests would aim at coding bugs. I would like to call these “functional tests”, and they would include most of the unit testing. Such tests can be designed in a straightforward manner, directly related to the system requirements and/or functional specifications. For example, one or more test cases are defined for each requirement, and a tester who knows the subject matter well should be able to produce most of them without involving others. For this category it can also make sense to measure code coverage, for which good tools are available.
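
A minimal sketch of such requirement-driven functional tests, using a hypothetical requirement: “orders over 100.00 get a 10% discount”. Each test case maps directly onto one clause of the requirement.

    def apply_discount(total):
        # Hypothetical implementation of the requirement.
        return round(total * 0.9, 2) if total > 100.00 else total

    def test_discount_above_threshold():
        assert apply_discount(200.00) == 180.00

    def test_no_discount_at_threshold():
        assert apply_discount(100.00) == 100.00

    def test_no_discount_below_threshold():
        assert apply_discount(50.00) == 50.00

    test_discount_above_threshold()
    test_no_discount_at_threshold()
    test_no_discount_below_threshold()

Running tests like these under a coverage tool (for Python, for example, coverage.py) then shows which parts of the code the requirement-based tests actually exercise.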

Another suite of tests should look for jungle bugs. To name the kind of testing that caters to jungle bugs we could use the term “jungle testing”. For a test designer this is an ambitious task. You will have to look for potentially unexpected conditions or combinations of conditions. Some of the things that you could do include:

  • Focus more on the business than on the requirements, trying to find out which unusual events and circumstances can happen.
  • Talk to people who know the business or system under test well, like end users and business analysts.
  • Work together with other testers and discuss ideas in meetings.
  • Go for “depth” over “breadth”, looking for hidden bugs in specific areas rather than aiming for broad coverage (broad coverage is more of an objective for the first category of tests, the ones that look for coding bugs).
  • Ask and discuss “what if” questions, like what if a user enters letters instead of numbers (a sketch of such a test follows this list).
  • Apply risk analysis to determine what should never happen with the system under test, and what conditions could cause such a result.
  • Use “exploratory testing”, a technique that lets testers, in pairs, work with a system interactively (see “What is Exploratory Testing?” by James Bach). Since we are dealing with “jungles”, any technique with “exploration” in the name should be a good fit…
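
Here is the sketch promised above: a minimal “what if” jungle test in Python, built around a hypothetical quantity parser. What if a user enters letters, blanks, or a negative number where a quantity is expected?

    def parse_quantity(raw):
        # Hypothetical handler: should reject odd input, never crash.
        try:
            value = int(str(raw).strip())
        except ValueError:
            raise ValueError("invalid data")
        if value < 0:
            raise ValueError("invalid data")
        return value

    def test_jungle_inputs_never_crash():
        for raw in ["abc", "", "   ", "-1", "3.5", None]:
            try:
                parse_quantity(raw)
            except ValueError:
                pass  # a clear rejection is acceptable
            # Any other exception escapes and fails the test,
            # signaling a jungle bug.

    test_jungle_inputs_never_crash()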

When designing jungle tests, work with the designers to determine the kinds of situations the tests are going to address. This allows the design to be updated to cover those situations, or may even reveal that handling a certain situation has no priority and does not have to be tested. It is also good to have a generic requirement for unexpected situations: how resistant should the system be, and how should it typically respond? For most systems I can imagine the requirement for unusual and invalid input being: (1) don’t crash, and (2) give the user a message, which can be as simple as “invalid data”.
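
That generic requirement can even be expressed as a reusable check. The sketch below, with hypothetical names, asserts both parts for any entry point and any list of unusual inputs.

    def check_generic_requirement(entry_point, unusual_inputs):
        for raw in unusual_inputs:
            try:
                entry_point(raw)
            except ValueError as err:
                # Requirement (2): the rejection carries a clear message.
                assert "invalid data" in str(err)
            # Requirement (1): any other exception type propagates and
            # fails the check, exposing a crash-style jungle bug.

    def parse_quantity(raw):
        # Hypothetical entry point, as in the earlier sketch.
        try:
            return int(str(raw).strip())
        except ValueError:
            raise ValueError("invalid data")

    check_generic_requirement(parse_quantity, ["abc", "", None, "\x00"])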

I feel that treating unexpected-situation testing (“jungle testing”) as a separate category in test development can give additional focus, going beyond the aggressiveness that is usually achieved with regular requirement-based test cases. Per situation, a decision can be made whether, and to what extent, this is worth the effort.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.

