I once consulted for a company to give a week-long course on testing and QA. It was a survey course covering a wide range of topics. I was setting up and chatting with students in the room. One man came over to me and said: “I have been testing for 6 months and I am completely bored. I plan on getting a different job in software, either in the company or outside—but it won’t be in testing. I know testing is important—very important—but it’s so boring. It’s not for me. This is my last chance: I hope I can learn something from this class that makes testing more interesting or challenging.”
This exchange is atypical—although I have met people in testing who find no challenge in what we do, meeting someone at that breaking point is rare.
The problem he faced was multi-faceted. The company was seriously underusing its testers: they were restricted to simple happy-path validation checks rather than tasks aimed at quality improvement and the customer experience. The more technically interesting automation work was given "in-your-spare-time" priority, which meant it never got done. Where the company was most deficient, however, was in encouraging its test team to find interesting bugs and design issues. What it needed to do was embrace the test team as software development engineers who test.
The team knew nothing about the efficiency of data-driven testing. Tests were not optimized by parameterizing inputs and expected results, which lets a minimal number of tests drive far more data through the product and achieve greater coverage. The team was also unaware of Soap Opera testing, its unique goals, and its effectiveness, beyond that of other types of tests, at finding certain classes of issues such as race conditions and other concurrency problems.
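To illustrate the parameterization idea, here is a minimal sketch of data-driven testing in Python. The function under test (`apply_discount`) and the case table are hypothetical, invented purely for illustration; the point is that a single test body, driven by a table of inputs and expected results, covers many scenarios that would otherwise each need a hand-written test.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Each row parameterizes both the inputs and the expected result,
# so one test body exercises many scenarios, including boundaries.
CASES = [
    (100.00, 0,   100.00),  # no discount
    (100.00, 10,  90.00),   # typical case
    (100.00, 100, 0.00),    # full-discount boundary
    (80.00,  25,  60.00),   # non-round percentage
    (50.00,  20,  40.00),   # another ordinary case
]

def run_data_driven_tests(cases):
    """Run every case through the same check; return the rows that failed."""
    failures = []
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures
```

Adding coverage then becomes a matter of adding rows, not writing new test code; test frameworks such as pytest offer the same pattern natively via parameterized tests.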
The company was deficient in training and also failed to budget the time that training requires. It must be stressed, though, that the individuals on the team bear responsibility as well, for not knowing our craft well enough to advocate for smarter, higher-quality testing, for greater responsibility, and for the time needed to improve quality.
Blame perspectives on quality, the intrinsic value of the test team, time budgeting, or care for customers; all that aside, this team needed an entire course devoted to test design. They had almost no idea what test case design and test development involve.
Simply put, test design is the engineering of a test to accomplish your quality goal. It takes knowledge, intelligence, understanding, vision, business acumen, and a mental sharpness that most people do not associate with testing.
If someone finds engineering "boring," then perhaps they are not cut out for a career in testing. Consider this: testing leads directly to customer satisfaction. Test teams are increasingly collaborating on design, UI, UX, product capabilities, and time estimates. All of these factors are crucial to product success! Testing can be interesting and exciting in a variety of ways, and a key factor in this is test design. Test design is central to effective, efficient testing and the engineering of tests, and it is a crucial element of every successful test automation project.
In this issue, I have written "Making the Case for Better Test Design," and our blogger of the month, Julian Harty, talks about pushing the boundaries of test automation. Justin Hunter has written a great review of Elizabeth Hendrickson's book, Explore It! Randy Rice's article, "TestStorming™: A Collaborative Approach to Software Test Design," delves into the nuances of test design techniques, and Hans Schaefer's "Are Test Design Techniques Useful or Not?" focuses on the importance of black-box and white-box testing. I'm pleased to announce that we also have a new feature series, TestArchitect Corner, which explores different ways to use our flagship product.
We hope you find this issue of LogiGear Magazine useful and a great reference over time for excellent test design.