Effective Management of Test Automation Failures

In recent years, much attention has been paid to setting up Test Automation frameworks which are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often leave out one of the most critical considerations of Test Automation: What do we do when the Test Automation doesn’t work correctly?

Testing teams need to develop a practical solution for determining who’s accountable for analyzing Test Automation failures, and ensure that the right processes and skills exist to effectively do the analysis.

There are three primary reasons why your Test Automation may not work correctly:

  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The Automation has uncovered a bug in the AUT

Whenever an automated test fails, the first step is to figure out what happened. So who should be doing this?

Too often in testing organizations, as soon as a Test Engineer runs into a problem with the Test Automation, they simply tell the Automation Engineer, “Hey, the Test Automation isn’t working!” The job of analysis then falls to the Automation Engineer, who is already overburdened with implementing and maintaining new and existing Test Automation.

How can we push this analysis ‘upstream’ to the Test Engineers who execute the Test Automation? In order to do this, we must first look at why the Test Engineers don’t feel that they can or should analyze the issues.

In a typical ‘scripting approach’ to Test Automation, the Test Engineer first writes a verbose test case, typically in Word, Excel, or some sort of in-house or third-party test case management tool. Once that task is completed, the Test Engineer effectively “throws it over the wall” to the Automation Engineer. The Automation Engineer then creates a scripted version of the test case and “throws it back over the wall” to the Test Engineer, who then executes the automated test.
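To make this concrete, here is a hypothetical example of what such a scripted test might look like. This is only a sketch, assuming a Selenium-based framework; the URL, element IDs, and credentials are illustrative inventions, not part of any real application.

```python
# Hypothetical scripted login test, as an Automation Engineer might write it.
# The URL, element IDs, and credentials below are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "login-button").click()
    # Encodes the expected outcome from the verbose test case.
    assert "Dashboard" in driver.title, "Login did not land on the dashboard"
finally:
    driver.quit()
```

A Test Engineer who didn’t write this script must read WebDriver calls and locator strategies just to understand what the test does; when it fails, the natural reflex is to hand it straight back to its author.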

More often than not, the Test Engineer will not understand the scripted test very well. If something breaks, they rely on the Automation Engineer to figure out what went wrong. This situation undermines the four fundamental tasks that an experienced Test Engineer must be able to perform:

  1. Design/write tests.
  2. Execute tests and identify/seek out failure.
  3. Analyze a failure for reproducibility and ideas to incorporate into new tests.
  4. Report a failure and/or bug.

At a minimum, the Test Engineer should be able to analyze the results of the automated tests and determine whether a failure is due to an actual bug in the AUT. If there is no apparent bug, the Test Engineer should then determine whether a change occurred in the application. Finally, if there is no apparent bug or change in the AUT, they can confidently conclude that the failure was caused by an error in the Automation itself.

So how can you empower the Test Engineer to analyze Test Automation failures? It’s simple, really: if your Test Engineers can create automated tests themselves, they will be equipped to analyze those tests when they fail. In our experience, a Keyword-Driven Test Automation framework is the best way to enable your Test Engineers to effectively own the analysis of Test Automation failures.
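To illustrate the idea, below is a minimal sketch of what a keyword-driven test can look like, written in Python. The keyword names, handler functions, and test steps are all assumptions made for this example; in a real framework the handlers would drive the AUT’s user interface instead of printing.

```python
# Minimal keyword-driven sketch. Each test step is a row of plain-language
# keywords plus arguments; a tiny interpreter maps keywords to actions.
# Keyword names and handlers here are illustrative assumptions.

def enter_text(field, value):
    print(f"Entering '{value}' into '{field}'")  # a real handler drives the UI

def click(control):
    print(f"Clicking '{control}'")

def check_title(expected):
    actual = "Dashboard"  # a real handler would read this from the AUT
    assert expected in actual, f"Expected title '{expected}', got '{actual}'"

KEYWORDS = {
    "enter text": enter_text,
    "click": click,
    "check title": check_title,
}

# The test itself is readable data that a Test Engineer can write and debug.
login_test = [
    ("enter text", "username", "testuser"),
    ("enter text", "password", "s3cret"),
    ("click", "login-button"),
    ("check title", "Dashboard"),
]

for keyword, *args in login_test:
    KEYWORDS[keyword](*args)
```

Because the test reads as a table of actions rather than as WebDriver code, the Test Engineer can see at a glance which step failed and start the analysis without waiting for the Automation Engineer.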

With a properly implemented Keyword-Driven Test Automation framework, the analysis of a Test Automation failure consists of the following steps (a sketch after this list shows the triage order in code):

  1. Did the Test Automation uncover a bug in the AUT? (Done by a Test Engineer)
  2. Was the failure caused by a change in the AUT? (Done by a Test Engineer and/or Automation Engineer)
  3. Was the failure caused by an error in the Automation itself? (Done by an Automation Engineer)
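The order of these questions matters, because each answer rules out the previous cause. As a purely illustrative sketch, the helper below (with invented predicate names) encodes that triage order; in practice each check is a human judgment backed by logs, screenshots, and the keyword-level test report.

```python
# Hypothetical triage helper encoding the three analysis steps in order.
# The boolean inputs stand in for human judgments, not automated checks.

def triage_failure(is_aut_bug, is_aut_change):
    """Return (root cause, owner) following the three-step analysis."""
    if is_aut_bug:
        return ("Bug in the AUT", "Test Engineer reports the bug")
    if is_aut_change:
        return ("Change in the AUT",
                "Test Engineer updates the test, with Automation Engineer "
                "help if keywords must change")
    return ("Error in the Automation",
            "Automation Engineer fixes the keyword or framework")

print(triage_failure(is_aut_bug=False, is_aut_change=True))
```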

With Keyword-Driven Test Automation, scripting is kept to a minimum, so most of your failures will occur due to bugs or changes in the AUT. Test Engineers should be able to do most of the failure analysis, freeing your Automation Engineers to focus more on creating new automated tests, and allowing you to further increase your test coverage, reduce testing time, decrease maintenance, and most importantly, create higher quality products!

Hung Nguyen

Hung Nguyen co-founded LogiGear in 1994, and is responsible for the company’s strategic direction and executive business management. His passion and relentless focus on execution and results have been the driving force behind the company’s innovative approach to software testing, test automation, testing tool solutions, and testing education programs.

Hung is co-author of the top-selling book in the software testing field, “Testing Computer Software,” (Wiley, 2nd ed. 1993) and other publications including, “Testing Applications on the Web,” (Wiley, 1st ed. 2001, 2nd ed. 2003), and “Global Software Test Automation,” (HappyAbout Publishing, 2006). His experience prior to LogiGear includes leadership roles in software development, quality, product and business management at Spinnaker, PowerUp, Electronic Arts and Palm Computing.

Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and completed a Stanford Graduate School of Business Executive Program.
