By Hans Buwalda, CTO, LogiGear Corporation
A common issue that I come across in projects is the relationship between test automation and programming. In this article I want to highlight some of the differences that I feel exist between the two.
After many decades of slow and hard-fought progress, software engineering is now a well-established profession with recognized methods and practices. Automated testing, however, is still a relatively new and often misunderstood phenomenon. Software engineers frequently get involved in software testing and try to apply their programming knowledge and experience. For many aspects of software testing this is a good thing, but over the years I have come to believe that automated testing presents its own distinct properties and challenges, and that a practitioner should try to understand them and work with them.
This article will explore the differences between software engineering and designing automated tests.
The Role and Importance of Test Design
A first big item is the role and importance of test design. The “logic” of an automated test should be in the test, not in the software automating it. This notion is the core of LogiGear’s Action Based Testing (ABT) method, where test cases are created by testers in a format that is as friendly to them as possible, using action words in spreadsheets. In an earlier article I discussed whether ABT is an automation technique (see: Is Action Based Testing an Automation Technique?). In my view that is not the case. I view it primarily as a test design technique. This illustrates that the focus in automated testing is placed on the test design, not the automation technology.
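To make the idea concrete, here is a minimal sketch of what an action-word-driven test might look like. The action names ("enter", "check") and the dispatch mechanism are illustrative assumptions for this article, not LogiGear's actual ABT implementation; the point is that the test itself reads like the spreadsheet rows a tester would write, while the automation code stays behind the scenes.

```python
# Registry mapping action words to the functions that automate them.
ACTIONS = {}

def action(name):
    """Register a function as the implementation of an action word."""
    def register(func):
        ACTIONS[name] = func
        return func
    return register

# Simulated system under test: a simple form, modeled as a dict.
state = {}

@action("enter")
def enter(field, value):
    state[field] = value

@action("check")
def check(field, expected):
    assert state[field] == expected, f"{field}: {state[field]!r} != {expected!r}"

def run_test(rows):
    """Each row is like a spreadsheet line: an action word followed by arguments."""
    for word, *args in rows:
        ACTIONS[word](*args)

# The test case itself contains the logic, expressed in the tester's terms:
run_test([
    ("enter", "first name", "John"),
    ("enter", "last name",  "Doe"),
    ("check", "first name", "John"),
])
```

The "logic" of the test lives entirely in the rows passed to `run_test`; the automation layer only interprets them.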
The Role of Test Cases
A second aspect is the role of test cases, and how they relate to one another. In a regular software system, the many functions (to which test cases could be compared) all work together and are interdependent: changes in one part can have consequences for another. In test cases, such relationships should be avoided. A test case should have a specific scope that is well-differentiated from all other test cases. In an ABT test, it is common to let one test case leave behind a well-defined situation for the next one, but that is as far as relationships go. Test cases are best seen as "satellites" hanging around the IT system, each probing a specific aspect of it, independent of each other. The consequence is that test design effort focuses on the functional aspects of the test cases needed, and should preferably stay away from the technical architecture of the automation. Test automation is a separate effort, focusing on the actions, not the test cases.
Test Case Maintainability
Maybe the most distinctive property of automated tests is their maintainability, which has little to do with their technical structure and depends almost entirely on their sensitivity to changes in the system under test. A well-designed automated test suite can still be regarded as weak if small changes to the system under test have a large impact on the tests. This property is probably the single most important factor that gives test automation its own unique dynamic.
Test Case Readability
A further major criterion for me is readability. When looking at test cases, I want to be able to understand them quickly and assess their effectiveness and completeness. Readability of test cases also helps their maintainability. For me this includes, for example, explicit values, both input values and expected outcomes. In ABT these show up as the arguments of the action in the spreadsheet. In programming, however, it is common practice not to "hard code" values, and rightfully so, since doing so would jeopardize maintainability. In ABT we also allow variables to be used instead of hard values, but I encourage test designers to use them as little as possible. Only when a value is genuinely variable and is reused in multiple test cases should a variable be used. Examples are the IP address of a server contacted as part of a test, or a sales tax percentage in an order management system. For people with a software engineering background this is quite a hard notion to accept, and it often takes a good deal of persuasion to get them to use explicit values.
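A small sketch of the distinction, using an invented `order_total` function (the names and the tax figure are assumptions for illustration): the test assertion states both the input and the expected outcome explicitly, while a variable is reserved for a value that genuinely varies and is reused across many test cases.

```python
# Stand-in for behavior in the system under test.
def order_total(subtotal, tax_rate):
    """Return the subtotal with sales tax applied, rounded to cents."""
    return round(subtotal * (1 + tax_rate), 2)

# Preferred in a test case: explicit input values AND explicit expected
# outcome, so a reader can verify the expectation at a glance.
assert order_total(100.00, 0.08) == 108.00

# A variable is justified only for a value that is reused across many
# test cases and may change, such as a tax percentage or a server address.
SALES_TAX = 0.08
SERVER_IP = "10.0.0.12"  # e.g. contacted by many tests (illustrative)
assert order_total(250.00, SALES_TAX) == 270.00
```

Note that even where `SALES_TAX` is used, the expected outcome (270.00) remains an explicit value; computing the expectation from the same formula as the implementation would only restate the code instead of checking it.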
Designing Test Cases is Not a Programming Challenge
There are more examples where one could consider a software background "harmful" for a designer of automated tests. A very noticeable one is when an engineer tackles a test case as a programming challenge and comes up with a complex, contrived solution that might be clever but does not help readability and obscures the intention and logic of the test case. A recent case I saw in a project was the use of a data table to test a number of links in a web page. The test case looped through the links following the table to verify the expected link captions. The result may have qualified as sophisticated programming, but it was hard to understand what was going on. A much simpler solution, in ABT, is to define an action "check link caption" and apply it to each link to be checked.
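Here is a sketch of what the simpler approach could look like. The page is simulated with a dictionary, and the action name and link identifiers are assumptions for illustration; the point is that every expectation is stated once, explicitly, in the test designer's own terms rather than hidden inside a loop over a data table.

```python
# Simulated web page: link identifier -> caption (illustrative stand-in
# for what an automation layer would read from the real page).
links = {"home": "Home", "about": "About Us", "contact": "Contact"}

def check_link_caption(link_id, expected):
    """Action: verify that one link shows the expected caption."""
    actual = links.get(link_id)
    assert actual == expected, f"link {link_id!r}: {actual!r} != {expected!r}"

# Each expectation is explicit and readable on its own line:
check_link_caption("home",    "Home")
check_link_caption("about",   "About Us")
check_link_caption("contact", "Contact")
```

If a caption changes, the failing line points directly at the affected expectation, instead of a loop index into a table.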
There Should be Little or No Debugging of Tests
Last but not least is debugging. When I hear from a project that automated tests are being debugged, it immediately raises the question of how well the test design was done. My criterion is simple: test values can be off, but the automation itself should always work. If it doesn't, lower-level tests should have been created and run first to make sure all is well in the navigation of the system under test. Also, all the actions should be verified apart from the test cases: the automation engineer responsible for the actions should create his or her own test cases to exercise the actions before "releasing" them for use by the testers. The result should be that once a higher-level, more functionally oriented test is run, it works without problems. In our product we have now released a "debugger" for tests, but in fact I encourage everybody not to use it, and instead to turn an eye to the test design the moment a test turns out to be hard to run.
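The idea of verifying an action apart from the test cases that use it can be sketched as follows. The `enter_text` action and its fake screen are invented for illustration; the low-level test is the kind the automation engineer would own and run before releasing the action to testers.

```python
def enter_text(screen, field, value):
    """Action: type a value into a field (the screen is simulated as a dict)."""
    if field not in screen:
        raise KeyError(f"no such field: {field}")
    screen[field] = value

def test_enter_text_action():
    """Low-level test owned by the automation engineer, run before the
    action is released for use in functional test cases."""
    screen = {"name": ""}
    enter_text(screen, "name", "John")
    assert screen["name"] == "John"          # the happy path works
    try:
        enter_text(screen, "missing", "x")
    except KeyError:
        pass                                 # action correctly rejects an unknown field
    else:
        raise AssertionError("action accepted an unknown field")

test_enter_text_action()
```

Once actions pass this kind of check in isolation, a failing functional test points at the test values or the test design, not at the plumbing.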
Automated testing and programming have a lot in common, and none of the differences described here are absolute, but I hope this article illustrates that test automation is a profession in itself, with its own specifics and challenges. Understanding these can be an important contribution to automation success.