Use Case Risks in Test Automation

People who know me and my work are probably familiar with my emphasis on good test design as a key to successful test automation. I have written about this in “Key Success Factors for Keyword Driven Testing.” In the Action Based Testing (ABT) method that I have pioneered over the years, good test design is an essential element for success. However, agreeing with me in a workshop and actually applying the principles in a project quite often turn out to be two different things. Apart from my own possible limitations as a teacher, I see at least one more reason: the way testing is involved in development projects.

A typical development project starts with a global understanding of what the system needs to do, which is then detailed further, for example into use cases. These use cases have proven helpful in implementation and in various testing efforts, but I am increasingly getting the impression that they can also pose a risk to good test design when they are the only source of information for the test team. There are two reasons:

1. They tend to have a high level of detail
2. They usually follow the end-user perspective

Re 1: The level of detail in use cases is primarily aimed at the developers and the information they need. More often than not it is implicitly assumed that this is also a good level of information for testers to develop test cases from (for the sake of simplicity I will not discuss the usefulness of techniques like exploratory testing; I will simply assume that test cases are created and their execution is automated).

Re 2: In many tests what matters is how the system handles transactions and whether it provides the correct and complete follow-up actions and information. The end user interacting with the UI is then not always relevant, and I would not like to see it explicitly specified as part of the test cases (in ABT the UI specifics would be hidden in the ‘actions’). However, having UI-oriented use cases as the primary source of information makes it hard to focus on transaction handling and other aspects of the system.
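To illustrate that last point, here is a minimal sketch of what hiding UI specifics in actions can look like. The example is hypothetical: the funds-transfer actions, the field names, and the stub UI driver are invented for illustration and are not part of any actual ABT library or TestArchitect API. The test lines at the bottom read as the business transaction; only the action implementations know about screens and controls.

# Illustrative sketch only: the UiDriver stub, the action names, and the
# account numbers are hypothetical, invented for this example.

class UiDriver:
    """Stand-in for whatever UI automation layer the project actually uses."""
    def enter(self, field, value):
        print(f"enter {value!r} into field {field!r}")
    def click(self, control):
        print(f"click {control!r}")
    def read(self, field):
        return "42.50"  # stubbed value so the example is self-contained

ui = UiDriver()

# Action layer: the only place that knows about screens, fields, and buttons.
def transfer_funds(from_account, to_account, amount):
    ui.enter("From account", from_account)
    ui.enter("To account", to_account)
    ui.enter("Amount", amount)
    ui.click("Submit")

def check_balance(account, expected):
    actual = ui.read(f"Balance of account {account}")
    assert actual == expected, f"{account}: expected {expected}, got {actual}"

# Test level: reads as the business transaction, with no UI details in sight.
transfer_funds("1001", "2002", "42.50")
check_balance("2002", "42.50")

If the UI changes, only the action implementations need maintenance; the tests themselves, which express the transactions under test, stay untouched.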

My message is this: do not start creating test cases from use cases, or from similar developer-oriented sources of information, before you have had a chance to create a high-level test design in which you specify which test products you are going to create and what level of detail each of them needs to have.
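As a rough illustration of what such a high-level test design might capture, consider the sketch below. The module names, objectives, and detail levels are hypothetical examples, not a prescribed template.

# Hypothetical sketch of a high-level test design: the planned test modules,
# each with its own objective and level of detail, decided before writing
# any individual test cases. Module names and levels are invented examples.

planned_modules = [
    {"module": "Screen and field checks",
     "objective": "validate individual screens, controls, and input rules",
     "detail": "UI level, using low-level interaction actions"},
    {"module": "Transaction processing",
     "objective": "verify correct and complete handling of transactions",
     "detail": "business level, UI specifics hidden in actions"},
    {"module": "End-to-end scenarios",
     "objective": "cover realistic multi-step business flows",
     "detail": "high level, aggregated business actions"},
]

for m in planned_modules:
    print(f"{m['module']}: {m['objective']} ({m['detail']})")

The point is not the notation but the decision: which test products will exist, what each is meant to cover, and at what level of detail, before any use case is translated into test cases.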

Article by Hans Buwalda, CTO, LogiGear Corporation


Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.


