Effective Management of Test Automation Failures

In recent years, much attention has been paid to setting up Test Automation frameworks that are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often overlook one of the most critical considerations of Test Automation: What do we do when the Test Automation doesn’t work correctly?

Testing teams need a practical way to determine who is accountable for analyzing Test Automation failures, and to ensure that the right processes and skills exist to do that analysis effectively.

There are three primary reasons why your Test Automation may not work correctly (a short sketch follows the list):

  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The Automation has uncovered a bug in the AUT
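To make this classification concrete, here is a minimal sketch in Python (purely illustrative; the names are our own, not part of any framework) of how a team might record the outcome of analyzing each failure:

    from enum import Enum, auto

    class FailureCause(Enum):
        """The three primary causes of an automated test failure."""
        TEST_ERROR = auto()  # 1. an error in the automated test itself
        AUT_CHANGE = auto()  # 2. the application under test (AUT) has changed
        AUT_BUG = auto()     # 3. the Automation uncovered a bug in the AUT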

Whenever a test fails in Test Automation, the first step is to figure out which of these happened. So who should be doing this analysis?

Too often in testing organizations, as soon as a Test Engineer runs into a problem with the Test Automation, they simply tell the Automation Engineer, “Hey, the Test Automation isn’t working!” The job of analysis then falls to the Automation Engineer, who is already overburdened with implementing new Test Automation and maintaining the existing suite.

How can we push this analysis ‘upstream’ to the Test Engineers who execute the Test Automation? To do this, we must first look at why Test Engineers feel they cannot, or should not, analyze these issues.

In a typical ‘scripting approach’ to Test Automation, the Test Engineer first writes a verbose test case, typically in Word, Excel, or some sort of in-house or third-party test case management tool. Once that task is completed, the Test Engineer effectively “throws it over the wall” to the Automation Engineer. The Automation Engineer then creates a scripted version of the test case and “throws it back over the wall” to the Test Engineer, who then executes the automated test.

More often than not, the Test Engineer will not understand the scripted test very well. If something is broken, they rely on the Automation Engineer to figure out what went wrong. This situation undermines the four fundamental tasks that an experienced Test Engineer must be able to perform:

  1. Design/write tests.
  2. Execute tests and identify/seek out failure.
  3. Analyze a failure for reproducibility and ideas to incorporate into new tests.
  4. Report a failure and/or bug.

At a minimum, the Test Engineer should be able to analyze the results of the automated tests and figure out whether a failure is due to an actual bug in the AUT. If there is no apparent bug, the Test Engineer should then determine whether a change occurred in the application. Finally, if there is no apparent bug or change in the AUT, they may confidently conclude that the issue was caused by an error in the Automation.
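As a hedged sketch of that analysis order (the Failure fields below are hypothetical stand-ins for the evidence a Test Engineer would actually gather), the triage logic reads:

    from dataclasses import dataclass

    @dataclass
    class Failure:
        """Evidence gathered about one failed automated test."""
        reproduces_manually: bool    # does the same step fail when performed by hand?
        aut_behavior_changed: bool   # did the AUT intentionally change at this point?

    def triage(failure: Failure) -> str:
        """Apply the order described above: bug first, then change, then automation error."""
        if failure.reproduces_manually:
            return "bug in the AUT: report it"
        if failure.aut_behavior_changed:
            return "change in the AUT: update the test"
        return "error in the Automation: escalate to the Automation Engineer"

    # A failure that also reproduces manually points to a product bug.
    print(triage(Failure(reproduces_manually=True, aut_behavior_changed=False)))

Note that the order matters: only after ruling out a bug and a change does the failure land on the Automation Engineer’s desk.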

So how can you empower the Test Engineer to analyze Test Automation failures? It’s simple, really: if your Test Engineers can create automated tests themselves, then they will be empowered to analyze those tests when they don’t work. In our experience, a Keyword-Driven Test Automation framework is the best way to enable your Test Engineers to effectively own the analysis of Test Automation failures.
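To see why, consider a minimal sketch of the keyword-driven idea in Python (illustrative only, not the API of any particular framework). The Automation Engineer implements each keyword once; the test itself is a table of action words that a Test Engineer can read, write, and re-run without programming:

    # Keyword implementations, written once by the Automation Engineer.
    def enter_text(field, value):
        print(f"typing {value!r} into the {field} field")

    def click(element):
        print(f"clicking the {element}")

    def verify_text(element, expected):
        print(f"checking that the {element} shows {expected!r}")

    KEYWORDS = {"enter text": enter_text, "click": click, "verify text": verify_text}

    # The test reads like a test case, not like code; a Test Engineer can
    # see exactly which row failed.
    login_test = [
        ("enter text", "username", "demo_user"),
        ("enter text", "password", "s3cret"),
        ("click", "login button"),
        ("verify text", "welcome banner", "Welcome, demo_user"),
    ]

    for keyword, *args in login_test:
        KEYWORDS[keyword](*args)

Because the failing row is expressed in the Test Engineer’s own vocabulary, the first two analysis questions below can be answered without reading any framework code.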

With a properly implemented Keyword-Driven Test Automation framework, the analysis of a Test Automation failure consists of the following steps:

  1. Did the Test Automation uncover a bug in the AUT? (Done by a Test Engineer)
  2. Was the failure caused by a change in the AUT? (Done by a Test Engineer and/or Automation Engineer)
  3. Was the failure caused by an error in the Automation itself? (Done by an Automation Engineer)

With Keyword-Driven Test Automation, scripting is kept to a minimum, so most of your failures will stem from bugs or changes in the AUT. Test Engineers should be able to do most of the failure analysis, freeing your Automation Engineers to focus on creating new automated tests, and allowing you to further increase test coverage, reduce testing time, decrease maintenance, and, most importantly, create higher-quality products!

Hung Nguyen

Hung Nguyen co-founded LogiGear in 1994, and is responsible for the company’s strategic direction and executive business management. His passion and relentless focus on execution and results have been the drivers of the company’s innovative approach to software testing, test automation, testing tool solutions, and testing education programs.

Hung is co-author of the top-selling book in the software testing field, “Testing Computer Software,” (Wiley, 2nd ed. 1993) and other publications including, “Testing Applications on the Web,” (Wiley, 1st ed. 2001, 2nd ed. 2003), and “Global Software Test Automation,” (HappyAbout Publishing, 2006). His experience prior to LogiGear includes leadership roles in software development, quality, product and business management at Spinnaker, PowerUp, Electronic Arts and Palm Computing.

Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and completed a Stanford Graduate School of Business Executive Program.


