By Hans Buwalda
There is no one recipe to make big testing a big success. It takes planning and careful execution of the various aspects, like test design, infrastructure and organization – a mix that can be different for each situation in which you may find yourself.
In writing about big testing, the first question that comes up is, “What is ‘big’?” In this article, “big” means, well, a big test.
Now, that does not necessarily mean a big system under test. There are, in fact, a number of factors that can make for a substantial testing effort. They include:
- Big volume of tests – for example, targeting a large system under test
- Concurrent variation of code branches
- Complex functionalities, needing many test situations or values
- Big variation in target platforms and configurations
- Big maintenance: rapid and frequent changes to the SUT
- Big or complex hardware infrastructure needed to support testing
“Big” is also a relative measure. It can mean different things for different organizations. For one company, 1000 test cases could be big, while another company may be looking at 1000 weeks of testing. Testing on ten machines at the same time can seem big, while another setup may involve a data center of ten acres. One definition of “big” could be that the size of the tests is not trivial: you need to think about how to organize and manage them. In other words: to make big testing a big success, it takes more than just making test cases.
In this overview I will describe three factors that I feel are key to big testing:
- Test design and automation
- Infrastructure
- Organization
Test Design and Automation
When I have to deal with large or complex testing projects, I always look at overall test design first, since that is where the biggest potential gains lie. Organizing your tests can keep their sizes more manageable, and leads to a much better automation result. I will discuss automation first, then look at a way to best structure and design your tests.
For almost all big tests, at least the test execution will be automated. Automation is not a must: to be sure, human testing can have distinct advantages over automated tests. This is particularly the case if tests needn't be repeated much (for more on this, see the literature on exploratory testing).
Several techniques have been developed for test automation. The three most common models are record & playback, scripting and keywords.
Record & playback can be useful to get to know your test tool and UI under test, and to quickly capture key steps in the automation. Note that this technique should not be used on its own for large scale test development; rather, recordings should be reused in functions or actions.
Scripting is often based on frameworks, like libraries developed in-house, or public ones like Selenium, a popular open source framework.
Scripting is an effective way to build automation, but difficult to scale up: scripts require technical expertise to build, and when scaled they tend to become bulky, inaccessible pieces of software.
Of the three more common automation techniques, the keywords technique, in my experience, is the way to go for larger scale testing. For one thing, this system allows more people to get involved. By encapsulating elements of the automation in keywords, the technique offers a natural way to keep unneeded details hidden from a test design, which in turn keeps large tests more maintainable.
However, keywords alone do not constitute a method, and by themselves are not sufficient to achieve success. One method that builds on keywords is Action Based Testing (ABT). ABT places the focus on how tests can best be organized and designed to accommodate their effective automation. In ABT, tests are kept as a series of actions in a test module. A test module looks essentially like a spreadsheet. Each action is written as a line in the sheet, starting with an action keyword followed by arguments.
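To make the idea concrete, here is a minimal sketch of a keyword-driven runner in Python. This is only an illustration of the general technique, not TestArchitect or any specific ABT implementation; the action names, the `ACTIONS` table, and the `run` function are all hypothetical. Each row of the "test module" starts with an action keyword followed by its arguments, just as it would in a spreadsheet row.

```python
# Minimal keyword-driven runner sketch. All names here are hypothetical;
# a real tool would dispatch actions to a UI driver instead of a dict.

state = {}  # stands in for the application under test

def enter(field, value):
    """Action: enter a value into a (simulated) field."""
    state[field] = value

def check(field, expected):
    """Action: verify a (simulated) field holds the expected value."""
    actual = state.get(field)
    assert actual == expected, f"{field}: {actual!r} != {expected!r}"

# Keyword -> implementation; higher-level actions would be added here.
ACTIONS = {"enter": enter, "check": check}

# A test module: each line is an action keyword followed by arguments.
test_module = [
    ["enter", "first name", "Mary"],
    ["enter", "last name", "Jones"],
    ["check", "first name", "Mary"],
]

def run(module):
    for keyword, *args in module:
        ACTIONS[keyword](*args)

run(test_module)
```

Because the test module is plain data, non-programmers can read and write it, while the action implementations stay in the hands of the automation engineers.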
A crucial step to get big testing under control with ABT is a good plan for the test modules, which takes an outline form, much like the table of contents in a book. Each test module should have a clear, unambiguous scope that is well-differentiated from the other test modules. Once the test modules have been identified, they can be developed one by one over time. The scope of each test module defines what kinds of actions to use and what kinds of checks to perform.
Of particular importance is that one avoid mixing different kinds of tests within the confines of one test module. A simple example can be taken from a recent project I reviewed. Take a set of tests on a table that presents data within a dialog. Some of the tests may involve manipulating the table, like sorting on columns or cutting and pasting rows. Other tests might address the data in the table: does it have the right values? If you were to place both kinds of tests in the same test module, or test case, your test becomes hard to read. Moreover, it has to change when either the table navigation changes or the value calculations change. In smaller test projects, you may get away with this. But if the tests number in the thousands, or more, you will soon find that this sort of organization makes it hard to achieve smooth and stable automation, especially across multiple versions of the application under test.
The scope of a test module also defines how much detail you want to see in the tests, and therefore which actions you will use. In a project I looked at a few years ago, a system for bank tellers was tested. In that system the daily teller cycle would begin with obtaining an initial transaction number, which itself took several UI steps to complete. The QA manager initially insisted on having this procedure spelled out and verified step by step as the first part of each and every test module, "just to make sure", in his words. As a consequence, even a small change in the application necessitated many adjustments to the tests. Only later was the procedure encapsulated in a high-level action called, aptly enough, "get initial transaction number". With that change, a large share of the maintenance problems evaporated.
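The encapsulation step in that anecdote can be sketched as follows. This is a hypothetical Python rendering, not the actual teller system: the UI steps, the logged driver calls, and the canned transaction number are all stand-ins. The point is that the several UI steps live in one place, so a UI change touches one action instead of every test module.

```python
# Hypothetical sketch: hiding a multi-step UI procedure behind one
# high-level action. The teller screens and values are illustrative only.

log = []  # records the low-level driver calls for this sketch

def click(element):
    # Stand-in for a real UI-driver click.
    log.append(f"click {element}")

def read(element):
    # Stand-in for reading a field; returns a canned value here.
    log.append(f"read {element}")
    return "TX-0001"

def get_initial_transaction_number():
    """High-level action: the several UI steps are encapsulated here,
    so test modules just call this one action."""
    click("Teller menu")
    click("Start of day")
    return read("Transaction number field")

tx = get_initial_transaction_number()
```

If the start-of-day dialog changes, only the body of `get_initial_transaction_number` needs maintenance; the test modules that use it stay untouched.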
The automation process focuses on automating actions. Some actions may already be predefined in the tool, while higher level actions need to be recorded or developed. In the automation there are a number of things you can do to speed development and, even more effective, make the automation stable. Your developers can help by making UI elements, such as windows, controls or HTML elements, easy to identify. They can do this by assigning values to certain properties, like the “id” attribute in an HTML element, which cannot be seen by a user but can be seen by an automation tool. Mapping a UI then becomes something that can be performed manually and rapidly, without the need for “spy” tools.
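A hand-written interface map built on such stable "id" values might look like the sketch below. The logical names, locators, and the fake page standing in for the real DOM are all assumptions for illustration; a real implementation would pass the locator to a driver call such as Selenium's find-element-by-id instead of a dictionary lookup.

```python
# Sketch of a manually maintained interface map: developers assign
# stable "id" attributes, so testers can map logical names to locators
# by hand, without a "spy" tool. All names here are hypothetical.

# Logical name -> (strategy, locator value), kept in one place.
INTERFACE_MAP = {
    "login button": ("id", "btn-login"),
    "user name":    ("id", "inp-username"),
}

# A fake "page" standing in for the real DOM in this sketch.
FAKE_PAGE = {"btn-login": "<button>", "inp-username": "<input>"}

def find(logical_name):
    """Resolve a logical name to a UI element via the interface map."""
    strategy, value = INTERFACE_MAP[logical_name]
    assert strategy == "id"
    # A real driver would call something like find_element(By.ID, value).
    return FAKE_PAGE[value]
```

Tests and actions then refer only to logical names like "login button"; if a locator changes, the map is updated in one place.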
Additionally, timing needs your attention. In large test runs on large systems, responses can vary wildly, and you certainly don’t want a timeout to derail a big test run.
Conversely, you don’t want to slow things down due to needlessly long wait times in many places in your test. Make sure you always have something your tooling can detect and wait for. And test your actions with their own test modules, before you run the regular test modules.
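The advice above, to wait for something detectable rather than sleeping for a fixed (and necessarily pessimistic) interval, can be sketched as a simple polling wait. The function name and parameters below are hypothetical; most automation tools ship an equivalent, such as Selenium's explicit waits.

```python
# Sketch of a polling wait: poll for a condition the tooling can detect,
# up to a timeout, instead of a fixed sleep. Names are illustrative.

import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Return as soon as condition() is true; raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Example: wait for a (simulated) dialog that appears after 0.3 seconds.
appears_at = time.monotonic() + 0.3
result = wait_until(lambda: time.monotonic() >= appears_at, timeout=2.0)
```

The test proceeds the moment the condition holds, so fast runs are not slowed down, while a generous timeout still protects slow runs from derailing.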
Infrastructure
Once your tests and automation are well-designed and stable, your infrastructure basically defines how much you can do in a given amount of time. This is an area where organizations and projects can differ greatly from one another. Some companies may have substantial infrastructure in place; in others, projects may have to be built up from scratch.
As a rule of thumb, you should avoid using the machines of the test developers to also run the tests, since this may inhibit their productivity. You can either give each tester a second machine, or – probably better – run the tests on dedicated servers.
Note that although you may work with dedicated machines as servers, you may not necessarily need server hardware. Most testing is performed against user interfaces, and requires client machines with client operating systems. In particular, blade server solutions tend to be expensive, and often come with features, like load balancing for web servers, that you do not need for your playback automation.
One good way to organize execution is virtualization – the use of virtual machines. A virtual machine is a tester’s dream. It can be set up to mimic a variety of configurations much more readily than can a physical machine. You can set up a virtual machine image with a specific operating system, a version of the application under test, and even some base data for that application already defined. Then run one or more instances of that image every time you want to test against that configuration.
One physical machine can often run multiple virtual machines in parallel, expediting your test cycle. However, when planning your infrastructure, keep in mind that automated test runs tend to place a higher load on virtual machines than does human usage. Automated runs perform operations continuously, while humans tend to be “interval-intensive”. Still, it is not uncommon to have five or more virtual machines running comfortably on an inexpensive physical machine. Hardware properties that are commonly recommended for such a physical machine are: gobs of memory, multiple processors and/or multi-core processors, hardware virtualization support, and a second (physical) disk dedicated to the virtual machines it hosts.
Cloud-like infrastructures also come to mind for handling large test executions. If your company has such an infrastructure, getting a substantial part of it allocated to testing is largely a matter of making a business case for it. There are also “public cloud” vendors to consider. These can be of particular help if the need for a large test execution is only for a limited period of time. For continuous use, public clouds tend to be more costly than an in-house virtualization solution.
Organization
Regardless of infrastructure, the success of a big testing project may hinge on the team or teams involved. They have to develop the tests, automate them, manage their execution and follow up on results.
A big testing project has two facets that the teams must be able to handle:
- a test design and development side, with a focus on creativity, effectiveness, and efficient automation; and
- a production side, with a focus on planning, volume and timelines. This demands an industrial “get it done” attitude from the team, which is challenging in its own way.
In a scrum organization, I expect that agile teams can handle most, if not all, of the big testing needs. It helps to have test execution and system development in the same team, so that any issues that emerge during large tests can be addressed swiftly. However, it is important that the team also include the requisite skills and focus with regard to the managerial aspects of planning and conducting large tests.
In my experience, testing is a profession. There exists a wide variety of techniques that experienced testers can draw from to make tests lean and mean. Sadly, it is not uncommon to see organizations assume that anybody, in particular a developer, can be a tester without much training. From what I’ve seen, however, this mindset results in tests that are simplistic and uninteresting. This leads to shallow testing, and also to large, cluttered test sets for which stable automation is hard to achieve. It behooves any organization not to underestimate a profession that has been developing for many years, with many publications and conferences to show for it.
Organizations that face a need for large scale testing or large scale automation will often look at off-shoring as a way to scale up and scale down more easily based on project needs. This can be especially helpful if a large effort is needed for a relatively short period of time. However, it needs to be done with care! Big teams may be able to make big tests, but those are not always good tests. Be sure to plan and organize the tests well before starting, and to pay continuous attention to effective design and automation architecture.
There is no one recipe to make big testing a big success. It takes planning and careful execution of the various aspects, like test design, infrastructure and organization – a mix that can be different for each situation in which you may find yourself. ■
Hans Buwalda leads LogiGear’s research and development of test automation solutions, and oversees the delivery of advanced test automation consulting and engineering services. The original architect of the key-word framework for software testing organizations, he assists clients in strategic implementation of Action Based Testing™ throughout their testing organizations, and he is lead developer of LogiGear’s TestArchitect™, the keyword-based toolset for software test design, automation and management.
Prior to joining LogiGear, Mr. Buwalda served as project director at CMG (now Logica) in the Netherlands. During his seventeen years with that firm he assisted clients in nine countries develop and deploy software testing solutions. He is an internationally recognized expert specializing in test automation, test development, and testing technology management, and speaks frequently at international conferences on concepts such as Action Based Testing, the Three Holy Grails of Test Development, Soap Opera Testing, and Testing in the Cold.
He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001) and holds a Master of Science in Computer Science from the Free University, Amsterdam.