Is Test Automation the Same as Programming Tests?

Introduction

A common issue that I come across in projects is the relationship between test automation and programming. In this article I want to highlight some of the differences that I feel exist between the two.

After many decades of slow and often hard-fought progress, software engineering is now a well-established profession, with recognized methods and practices. Automated testing, however, is still a relatively new and often misunderstood phenomenon. Software engineers frequently get involved in software testing and try to apply their programming knowledge and experience. For many aspects of software testing this is a good thing, but over the years I have come to believe that automated testing has its own distinct properties and challenges, and that a practitioner should try to understand them and work with them.

This article will explore the differences between software engineering and designing automated tests.

The Role and Importance of Test Design

A first big item is the role and importance of test design. The “logic” of an automated test should be in the test, not in the software automating it. This notion is the core of LogiGear’s Action Based Testing (ABT) method, where test cases are created by testers in a format that is as friendly to them as possible, using action words in spreadsheets. In an earlier article I discussed whether ABT is an automation technique (see: Is Action Based Testing an Automation Technique?). In my view that is not the case. I view it primarily as a test design technique. This illustrates that the focus in automated testing is placed on the test design, not the automation technology.
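To give a feel for what "keeping the logic in the test" looks like, here is a minimal sketch of how action-word test lines might be interpreted. The action names and the dispatch mechanism are my own illustration, not ABT's actual vocabulary or TestArchitect's implementation; the point is that the tester writes readable rows, while the automation is a thin layer underneath.

```python
# Illustrative sketch of keyword ("action word") dispatch.
# The actions below are hypothetical examples, not a real ABT vocabulary.

def enter(field, value, state):
    """Action: enter a value into a named field."""
    state[field] = value

def check(field, expected, state):
    """Action: verify that a named field holds the expected value."""
    actual = state.get(field)
    assert actual == expected, f"{field}: {actual!r} != {expected!r}"

ACTIONS = {"enter": enter, "check": check}

# Test lines as a tester might write them in spreadsheet rows:
test_lines = [
    ("enter", "first name", "John"),
    ("enter", "last name", "Doe"),
    ("check", "first name", "John"),
]

state = {}
for action, *args in test_lines:
    ACTIONS[action](*args, state)
```

The test design lives entirely in `test_lines`; the automation code only knows how to execute individual actions.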

The Role of Test Cases

A second aspect is the role of test cases, and how they relate to one another. In a regular software system, the many functions, to which test cases could be compared, all work together and are interdependent: changes in one part can have consequences for another. In test cases, such relationships should be avoided. A test case should have a specific scope that is well differentiated from all other test cases. In an ABT test, it is common to let one test case leave behind a well-defined situation for the next one, but that is as far as relationships should go. Test cases are best seen as “satellites” hanging around the IT system, each probing a specific aspect of it, independent from the others. The consequence is that test design effort focuses on the functional aspects of the test cases needed, and should preferably stay away from the technical architecture of the automation. Test automation is a separate effort, focusing on the actions, not the test cases.

Test Case Maintainability

Maybe one of the most obvious properties of automated tests is their maintainability, which hardly relates to their technical structure but depends almost completely on their sensitivity to changes in the system under test. A well-designed automated test suite can still be regarded as weak if small changes to the system under test have a large impact on the tests. This property is probably the single most important aspect that gives test automation its own unique dynamic.

Test Case Readability

A further major criterion for me is readability. When looking at test cases I want to be able to understand them quickly and assess their effectiveness and completeness. Readability of test cases in itself also helps their maintainability. For me this includes, for example, explicit values, both input values and expected outcomes. In ABT these show up as the arguments of the actions in the spreadsheet. In programming, however, it is common practice not to “hard code” values, and rightfully so, since doing so would jeopardize maintainability. In ABT we also have the possibility of using variables instead of literal values, but I encourage test designers to use them as little as possible: only when a value is genuinely variable and is reused in multiple test cases should a variable be used. Examples are the IP address of a server to contact as part of a test, or a sales tax percentage in an order management system. For people with a software engineering background this is quite a hard notion to overcome, and it often takes quite a bit of persuasion to get them to use explicit values.
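The distinction can be sketched as follows. The action names, the 8% tax rate, and the order values below are all hypothetical illustrations: the inputs and expected outcome appear as explicit values in the test line, while the tax rate, which is reused across many test cases, earns a variable.

```python
# Hypothetical actions for an order-management test; not a real ABT library.

SALES_TAX = 0.08  # reused across many test cases, so a variable is justified

def enter_order(item: str, quantity: int, unit_price: float) -> float:
    """Action: place an order line and return its subtotal."""
    return quantity * unit_price

def check_total(subtotal: float, expected: float) -> bool:
    """Action: verify the order total including sales tax."""
    return abs(subtotal * (1 + SALES_TAX) - expected) < 0.01

# The test line itself uses explicit, readable values for input and outcome:
subtotal = enter_order("widget", 3, 10.00)
assert check_total(subtotal, 32.40)
```

A reader can verify the arithmetic of the test line at a glance (3 × 10.00, plus 8% tax, gives 32.40), which would be lost if the inputs and expected total were hidden behind variables.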

Designing Test Cases is Not a Programming Challenge

There are more examples where one could consider a software background “harmful” for a designer of automated tests. A very noticeable one is when an engineer tackles a test case as a programming challenge, and comes up with a complex, contrived solution that might be smart, but does not help readability and obfuscates the intention and logic of the test case. A recent case I saw in a project was the use of a data table to test a number of links in a web page. The test case looped through the links following the table to test for the expected link captions. The result may have qualified as sophisticated programming, but it was hard to understand what was going on. A much simpler solution, in ABT, is to define an action “check link caption” and apply it for each link to be checked.
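A rough sketch of the simpler approach: the `check_link_caption` action below and the page model it works against are my own stand-ins, not TestArchitect code. Each link gets its own explicit, readable test line instead of a loop over a data table.

```python
# Hypothetical "check link caption" action against a stand-in page model.

from dataclasses import dataclass, field

@dataclass
class Page:
    """Stand-in for a real web page object; maps link ids to captions."""
    links: dict = field(default_factory=dict)

def check_link_caption(page: Page, link_id: str, expected: str) -> bool:
    """Action: verify that one link shows the expected caption."""
    return page.links.get(link_id) == expected

# One explicit test line per link, instead of a table-driven loop:
page = Page(links={"home": "Home", "contact": "Contact Us"})
assert check_link_caption(page, "home", "Home")
assert check_link_caption(page, "contact", "Contact Us")
```

The intent of each check is visible in the test itself, and a failing link is immediately identifiable from the failing line.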

There Should be Little or No Debugging of Tests

Last but not least is debugging. When I hear from a project that automated tests are being debugged, it immediately raises the question for me of how well the test design was done. My criterion is simple: test values can be off, but the automation itself should always work. If it doesn’t, lower-level tests should have been created and run first to make sure all is well with the navigation in the system under test. Also, all the actions should be verified apart from the test cases: the automation engineer who is responsible for the actions should make his or her own test cases to test the actions before “releasing” them for use by the testers. The result should be that once a higher-level, more functionally oriented test is run, it works without problems. In our product we have now released a “debugger” to debug tests, but in fact I encourage everybody not to use it, and rather to turn an eye to the test design the moment it turns out to be hard to run a test.

Conclusion

Automated testing and programming have a lot in common, and none of the differences described here are absolute, but I hope I was able to illustrate in this article that automation is a profession in itself with its own specifics and challenges. Understanding these can be an important contribution to automation success.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.


