To address the challenges and fears of implementing Automation in Agile projects, LogiGear CTO Hans Buwalda presents Action Based Testing as the answer.
How can automated functional testing fit into agile projects? That is a question we encounter a lot nowadays. Agile has become more common, but functional testing often remains a manual process because during agile iterations/sprints, there is simply not enough time to automate it. This is unlike unit testing, which is routinely automated effectively. The short answer is:
- A well-planned and organized test design and automation architecture
- Organizing test design and automation into their own, separate life cycles
In this article I will show how the Action Based Testing method can help you to do just that. Let me first introduce Action Based Testing, followed by discussing how it can make both test design and Test Automation fit the demands of agile projects.
There are various sources where you can read more about Action Based Testing. Let me summarize the key principles here that are at the core of the method:
1. Not One, But Three Life Cycles
It is common to position testing and automation activities as part of a single system development life cycle, regardless of whether that follows a waterfall or an agile approach. ABT, however, distinguishes three life cycles. Even though they have dependencies on each other, in an ABT project they are planned and managed as separate entities:
- System Development: follows any SDLC, traditional or agile model.
- Test Development: includes test design, test execution, test result follow up, and test maintenance.
- Automation: focuses solely on the action keywords, interpreting actions, matching user or non-user interfaces, researching technology challenges, etc.
2. Test Design
The most important property of ABT is the position of test design. It is seen as the single most enabling factor for automation success, much more so than the actual automation technology. In ABT it is considered crucial to have a good “high level test design” in which so-called “test modules” are defined. Each test module should have a clear scope, distinct from that of the other modules, and is developed as a separate “mini project.”
A test module consists of test objectives and action lines. The test objectives break the scope of the test module down into individual verbal statements defining what needs to be tested in the module.
The tests in the test module (which looks like a spreadsheet) are defined by a series of “action lines,” often further organized in one or more test cases. Every action line defines an “action” and consists of an “action word” defining the action, and arguments defining the data for the action, including input values and expected results.
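To make the action-line idea concrete, here is a minimal sketch of how a keyword-driven framework might interpret such lines. The action names, the registry, and the dispatch mechanism are illustrative assumptions, not TestArchitect's actual implementation:

```python
# Minimal sketch of a keyword-driven interpreter (illustrative only, not
# TestArchitect internals). Each action line is an action word plus its
# arguments; the framework dispatches the line to the matching function.

ACTIONS = {}

def action(name):
    """Register a function as the implementation of an action word."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("enter")
def enter(field, value):
    # A real action would drive the UI; here we just report the step.
    print(f"entering {value!r} into {field!r}")

@action("check")
def check(field, expected):
    actual = expected  # placeholder: a real action would read the UI
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

def run_test_module(lines):
    """Execute a sequence of action lines: (action word, arguments...)."""
    for action_word, *args in lines:
        ACTIONS[action_word](*args)

run_test_module([
    ("enter", "first name", "John"),
    ("enter", "last name", "Doe"),
    ("check", "full name", "John Doe"),
])
```

The point of the pattern is that testers write only the action lines, while the functions behind the action words live in a separate automation layer.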
Note that in ABT the test case does not figure as centrally as it does in some other methods. We feel the test case is too small and too isolated a unit to give good direction to test development. Rather than having a predefined list of test cases to be developed, we like to make a list of test modules, and let the test cases in them be the result of the test design, not the input to it.
This keeps test development a creative process: the test cases can vary and grow significantly as insight develops. Also, each test case can leave behind the preconditions for the next one, resulting in a good flow of the test execution.
In ABT the automation activity is separated from the test development. Test design and automation require very different skill sets and interests. There may be people who are interested in doing both, which is fine, but in my experience that is not very common. The separation also assigns clear ownership for “getting the tests to work.”
In ABT the automation engineers will concentrate on automation of actions and making “interface definitions” to manage the interaction with the interfaces (user or non-user) of the system under test. This type of automation activity requires advanced skills and experience.
Agile Test Development
In using ABT with its separate life cycles for test development and Test Automation, there are in fact two topics to address when fitting automated testing into agile projects:
- Test design and development
- Test Automation
As explained earlier, I see testing and Test Automation as entitled to their own life cycles in addition to the system development life cycle. Regardless of how agile the main project is, testing and Test Automation progress individually.
Having said that and using a scrum project with sprints, testing activities in an agile project fall into three timelines:
- Testing in regular system development sprints
- Test development prior to development sprints
- Testing after development has finished
1. Testing in Regular Sprints
The most common practice is, and will remain, to develop and execute tests as part of sprints. In a sprint, functionality is progressively understood from user stories and conversations until it becomes clear enough for testers to test it. This can be done with developed tests similar to ABT test modules, as well as with exploratory and interactive testing. It can also be good practice to capture at least some of the “interesting” interactive tests in test modules for future use.
Unit tests are an invaluable asset, but in the ABT approach one would like to consider options to re-use them and extend their reach beyond addressing single functions.
By defining test modules for unit tests and assigning them to actions, they can be strung together more easily to test with a wider variety of values and include other parts of the system under test, either during a sprint or later on.
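As a sketch of that idea, a unit-level check can be wrapped as an action so a test module can drive it with many value combinations. The function names and figures below are invented for illustration; this is not TestArchitect syntax:

```python
# Illustrative sketch: exposing a unit-level check as a reusable action.

def monthly_payment(principal, annual_rate, months):
    """System-under-test function normally covered by one unit test."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def check_monthly_payment(principal, annual_rate, months, expected, tol=0.01):
    """Action wrapper: one keyword, usable with any argument combination."""
    actual = monthly_payment(principal, annual_rate, months)
    assert abs(actual - expected) < tol, f"expected {expected}, got {actual:.2f}"

# An ABT-style test module can now string the action together with a
# variety of values, instead of the single case hard-coded in a unit test:
for args in [
    (100_000, 0.06, 360, 599.55),
    (10_000, 0.05, 60, 188.71),
]:
    check_monthly_payment(*args)
```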
2. Test Development Prior to Development Sprints
In the ABT method the use of actions, in particular high-level business actions, allows for the development of tests with a non-technical focus on business functionality, often simply called “high-level tests.” Such tests stay away from details in the UI and focus on business transactions, like requesting a home loan or renting a car.
Such “higher-level” tests can be developed early in a project. They don’t have to wait for a system development sprint, in which there would be limited time to carefully understand business functionalities and create appropriate tests for them.
Whether, and how many, “business-level” tests can be made early depends on the individual situation. In general, I would recommend the following:
- Have as many business-level tests as possible, as they add great value to overall depth and quality, and are resilient against system changes that do not pertain to them.
- Use the high level test design step in ABT (where the test modules are identified) to determine what can be done early on in business level tests, and what needs to be completed in detail tests as part of development sprints.
3. Testing After Sprints
Once sprints for individual system parts have finished and these parts come together, normally more testing will be needed to ensure quality and compliance of the entire system. Also, tests may be needed to retest parts of systems that were not touched by system changes and confirm the new system parts integrate well with the old ones. This could for example happen in regression or “hardening” sprints.
In my view, this “after-testing” is a key area where it can pay off most to have, in advance, well developed test modules and fully automated actions resulting in valuable time savings, particularly if a release date is getting close. The test development and automation planning should address this use in final testing as a main objective, and identify and plan test module development accordingly.
Agile Test Automation
The term often used for Test Automation in agile projects that best describes what is needed is “just in time automation.” When ABT is applied, the term broadens to “just in time test development.” Independent of that, a high level of automation can make an invaluable contribution to the productivity and speed of sprints.
To get the automation in place quickly and on time, a number of rules should be applied:
- Build the base early
- Make automation resilient
- Address testability of the system under test
- Test the automation
1. Build the Base Early
A successful automation architecture should start with a solid base on which further actions can be developed. This includes items like the ability to perform all necessary operations on all UI control classes, access to APIs, the ability to query databases, compiling and parsing messages in a message protocol, etc.
Although much technical functionality is available in LogiGear’s TestArchitect tool, most of our projects will start with R&D efforts to address customer specific technical challenges, e.g. emulating devices in a point of sale system, working with moving 3D graphics for oil exploration, testing mobile devices, accessing embedded software in diagnostic equipment, etc.
This technical base is something to address as early and as comprehensively as possible: identify all technical challenges and resolve them. This typically results in implementations for low-level actions, which in turn can be used for higher-level actions, for example in development sprints. Addressing the technical base early also limits risk.
2. Make Automation Resilient
The essence of agile projects is that many details of the system under test only become clear when they are being implemented, as part of iterations like the sprints in Scrum. This holds in particular for areas that automation tends to rely on heavily, like the UI. Those details can change quite easily as the creative process moves along. The automation should in such cases not be the bottleneck. Flexibility is essential.
The action model by nature can give such flexibility as it allows details to be hidden in individual actions, which can then be quickly adjusted if necessary. However, there are some additional items to take care of as well. The most common in our projects has turned out to be “timing.” Often automation has to wait for a system under test to respond to an operation and get ready for the next one.
What we found is that the automation engineer should use “active timing” as much as possible. In active timing you find a criterion in the system under test to wait for, and wait for it up to a preset, generous maximum. As soon as the criterion is met, the automation moves on without further delay. Paying attention to these and similar measures will make the automation solid and flexible.
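Active timing can be sketched as a small polling loop. The function name, timeout values, and the status-bar criterion in the usage comment are illustrative assumptions:

```python
import time

# Sketch of "active timing": poll a readiness criterion up to a generous
# maximum, but move on the moment it is met, instead of using a fixed
# sleep that is either too short (flaky) or too long (wasteful).

def wait_until(criterion, timeout=30.0, poll_interval=0.2):
    """Wait until criterion() is truthy, or fail after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if criterion():
            return  # system under test is ready: continue without delay
        time.sleep(poll_interval)
    raise TimeoutError("system under test did not become ready in time")

# Usage: wait for a (hypothetical) status indicator rather than sleeping:
# wait_until(lambda: status_bar_text() == "Ready", timeout=60)
```

Because the maximum is generous but rarely reached, runs stay fast in the normal case while tolerating a slow environment.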
3. Address Testability of the System Under Test
When preparing automation, system developers should identify items that the system under test should provide to give automation easy access. When these items are identified early on and formulated as requirements, the teams can easily incorporate them in the sprints.
A good example is the provision of values for certain identifying properties that are available in various platforms for screen controls or HTML elements, properties that are not visible to a user, but can be seen by automation tools. Providing such values will allow automation to address the controls or elements easily, and in a way that is usually not sensitive to changes in the design.
In fact if such values are defined early on in a project, a tool like TestArchitect allows for the creation of “interface definitions” to take advantage of them before the system under test is even built.
Examples of such useful properties are the “id” attribute in HTML elements, the “name” in Java/Swing, and the “accessibility name” in .NET and WPF. None of these influence the user experience, but all can be seen by the tools. Using them also solves issues of localization: an OK button can be found even if its caption is in another language.
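The idea can be sketched as a mapping from logical names to stable identifying properties, so tests address controls by meaning rather than by caption. The element data, property names, and lookup function below are all invented for illustration; they are not TestArchitect's interface-definition format:

```python
# Illustrative sketch of an "interface definition": logical names mapped
# to stable, invisible identifying properties (HTML id, Swing name, etc.).

interface_definition = {
    "ok_button":  {"id": "btnOk"},
    "name_field": {"id": "txtName"},
}

# Stand-in for the controls the automation tool sees on a screen:
page_elements = [
    {"id": "btnOk",   "caption": "Aceptar"},  # localized caption
    {"id": "txtName", "caption": "Nombre"},
]

def find(logical_name):
    """Locate a control via its stable id, ignoring the visible caption."""
    wanted = interface_definition[logical_name]["id"]
    for element in page_elements:
        if element["id"] == wanted:
            return element
    raise LookupError(f"no element with id {wanted!r}")

# The OK button is found even though its caption is localized to Spanish:
print(find("ok_button")["caption"])  # prints "Aceptar"
```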
4. Test the Automation
Automation should be tested. In ABT this means that the actions and interface definitions must be tested. They are like a product that the automation engineers deliver to the testers, and a product in which high quality is required. We require each testing project to have at least one folder (in the TestArchitect test tree) with test modules that test the actions and interface definitions, not necessarily the system under test.
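A sketch of what such a self-test looks like: an action helper is exercised directly, with known inputs, so a defect in the action itself is caught before testers rely on it against the system under test. The helper and its format are hypothetical examples:

```python
# Sketch of a "test the automation" module: the action helper itself is
# the subject under test, not the application.

def normalize_date(text):
    """Hypothetical action helper: turn 'DD/MM/YYYY' into ISO format."""
    day, month, year = text.split("/")
    return f"{year}-{int(month):02d}-{int(day):02d}"

# Self-test lines, analogous to an ABT test module for actions:
assert normalize_date("01/02/2015") == "2015-02-01"
assert normalize_date("9/12/1999") == "1999-12-09"
print("action self-test passed")
```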
Just like the test development, the automation activities must be well planned and organized, with a number of experienced people involved. If that is the case, the combination of careful test development planning and automation planning should be able to meet the demands of agile projects quite easily.
Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.
Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.
Hans holds a Master of Science in Computer Science from Free University, Amsterdam.