Test Design Focused on Expediting Functional Test Automation

Test organizations continue to undergo rapid transformation as demands grow for testing efficiencies. Functional test automation is often seen as a way to increase the overall efficiency of functional and system tests. How can a test organization stage itself for functional test automation before an investment in test automation has even been made? Further, how can you continue to harvest the returns from your test design paradigm once the test automation investment has been made? In this article we will discuss the factors in selecting a test design paradigm that expedites functional test automation. We will recommend a test design paradigm and illustrate how this could be applied to both commercial and open-source automation solutions. Finally, we will discuss how to leverage the appropriate test design paradigm once automation has been implemented in both an agile (adaptive) and waterfall (predictive) system development lifecycle (SDLC).

Test design – selection criteria

The test design selection criteria should be grounded in the fundamental goals of any functional automation initiative. Let us assume the selected test automation tool shall enable end-users to author, maintain, and execute automated test cases in a web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation, and execution “best practices” as defined by the test organization. To harvest the maximum return from both test design and test automation, the test design paradigm must support:

  • Manual test case design, execution and reporting
  • Automated test case design, execution and reporting
  • Data-driven manual and automated test cases
  • Reuse of test case “steps” or “components”
  • Efficient maintenance of manual and automated test cases

Test design – recommended paradigm

One paradigm that has been gaining momentum under several guises in the last few years is keyword-based test design. I have stated in previous articles that “The keyword concept is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter value pairs (arguments). By designing keywords to describe discrete functional business events the testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case).”

The keyword concept is not a silver bullet, but it does present a design medium that leads to both effective test case design and ease of automation. Keywords present the opportunity to design test cases in a fashion that supports the test design selection criteria above. This does not guarantee that the test cases will be effective, but it certainly presents the greatest opportunity for success. Leveraging a test design paradigm that is modular and reusable paves the way for long-term automation; moreover, it moves most of the maintenance to a higher level of abstraction: the keyword. The keyword name should be a shorthand description of what actions the keyword performs: it should begin with the action being performed, followed by the functional entity, followed by descriptive text (if required). Here are several common examples:

  • Logon User
  • Enter Customer Name
  • Enter Customer Address
  • Validate Customer Name
  • Select Customer Record
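The keyword-plus-arguments idea above can be sketched in code: each discrete business event becomes a named keyword backed by a function, and a test case is just a sequence of keyword/argument pairs. This is a minimal illustrative sketch; the keyword names follow the article's examples, but the function names, arguments, and return values are hypothetical.

```python
# Minimal sketch of the keyword concept: each discrete business event is a
# keyword (short text description) with parameter/value pairs (arguments).
# Function names, arguments, and return values are hypothetical examples.

def logon_user(user, password):
    """Keyword: Logon User."""
    return f"logged on as {user}"

def enter_customer_name(first, last):
    """Keyword: Enter Customer Name."""
    return f"entered customer {first} {last}"

# The keyword library maps each short text description to its implementation.
KEYWORDS = {
    "Logon User": logon_user,
    "Enter Customer Name": enter_customer_name,
}

# A keyword test case is simply a sequence of (keyword, arguments) pairs --
# a language describing a sequence of events within the application.
test_case = [
    ("Logon User", {"user": "jsmith", "password": "secret"}),
    ("Enter Customer Name", {"first": "Jane", "last": "Smith"}),
]

def run(test_case):
    """Execute each keyword step in order and collect the results."""
    return [KEYWORDS[name](**args) for name, args in test_case]

print(run(test_case))
```

Note that maintenance concentrates at the keyword level: if the logon screen changes, only the `logon_user` implementation is updated, and every test case that uses the keyword picks up the fix.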

Test design – keyword application

Keyword test case design begins as an itemized list of the test cases to be constructed–usually as a set of named test cases. The internal structure of each test case is then constructed using existing (or new) keywords. Once the design is complete, the appropriate test data (input and results) can be added. Testing the keyword test case design involves executing the test case against the application or applications being tested.
At first glance this does not appear to be any different from any other method of test case design, but there are significant differences between keyword test case design and any freehand/textual approach. Keyword test case designs are:

  • Consistent – the same keyword is used to describe the business event every time
  • Data Driven – the keyword contains the data required to perform the test step
  • Self Documenting – the keyword description contains the designers’ intent
  • Maintainable – with consistency comes maintainability
  • Automatable – supports automation with little or no design transformation (rewrite)
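The data-driven property above can be sketched as follows: the keyword sequence stays constant while the test data (input and expected results) varies per row. The keyword follows the article's "Validate Customer Name" example; the data rows and pass/fail convention are hypothetical.

```python
# Sketch of data-driven keyword test design: one keyword, many data rows.
# The keyword carries the data required to perform the test step, so the
# same design executes once per row of input and expected results.
# Data values and the PASS/FAIL convention are hypothetical.

def validate_customer_name(actual, expected):
    """Keyword: Validate Customer Name - compare actual vs. expected."""
    return "PASS" if actual == expected else "FAIL"

# Each data row drives one execution of the same keyword test step.
data_rows = [
    {"entered": "Jane Smith", "expected": "Jane Smith"},
    {"entered": "Jane Smith", "expected": "John Smith"},
]

results = [validate_customer_name(row["entered"], row["expected"])
           for row in data_rows]
print(results)
```

Because the expected results live in the data rows rather than in the test step itself, the same keyword test case documents the designer's intent while covering as many data variations as the rows supply.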

Test design – adaptation based on development/testing paradigm

There are two primary development and testing approaches in use by development organizations today: adaptive (agile) and predictive (waterfall/cascade). Both approaches certainly have their proponents, though adaptive (agile) system development lifecycles are increasingly gaining precedence. The question becomes: how does this affect the test design paradigm? The answer appears to be that it does not affect the paradigm itself, but it does affect the timing.

Predictive (waterfall/cascade) development lifecycles can be supported by a straightforward design, build, execute, and maintain test design paradigm that may later support automation. Eventually, one would expect the predictive testing team to design, build, execute, maintain, and automate their test case inventory. This could be accomplished using either Tier 1 commercial automation tools or open-source automation tools. As long as the automation tool supports modular design (functions) and data-driven testing (test data sources), keyword-based automation can be supported; the most significant difference is the time and effort required to implement the testing framework.

Adaptive (agile) development lifecycles come in several flavors; some support immediate keyword-based functional test design and automation while others do not. Agile test-driven development (TDD) using FitNesse™, a testing framework that requires instrumentation by and collaboration with the development team, certainly supports keyword-based test case design and automation. Other agile paradigms support instrumentation only at the unit test level, or not at all; in these cases, a separate keyword-based test case design and automation toolset must be used.

The challenge for non-TDD agile becomes designing, building, executing, and maintaining functional tests within the context of a two- to four-week sprint. The solution is a combination of technique and timing. For the immediate changes in the current sprint, consider using exploratory testers and an itemized list of test cases with little (if any) content – basically a high-level checklist. Once the software for a sprint has migrated to production and existed there for at least one sprint, a traditional set of regression test cases can be constructed using keywords. This separates the challenge into sprint-related testing and regression testing.
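The requirement that a tool support modular design (functions) plus a test data source can be illustrated with a small sketch in which the "data source" is a block of CSV text feeding a keyword library. The file layout, column names, and keyword are hypothetical examples, not any particular tool's format.

```python
# Sketch: any framework combining modular functions with an external test
# data source can drive keyword-based automation. Here the data source is
# inline CSV text; columns, values, and the keyword are hypothetical.
import csv
import io

def select_customer_record(customer_id):
    """Keyword: Select Customer Record."""
    return f"selected {customer_id}"

KEYWORDS = {"Select Customer Record": select_customer_record}

# An external data source (e.g., a CSV file or spreadsheet) listing
# keyword steps and their arguments, one step per row.
csv_source = """keyword,argument
Select Customer Record,C-100
Select Customer Record,C-200
"""

def run_from_source(text):
    """Read keyword steps from the data source and execute each one."""
    reader = csv.DictReader(io.StringIO(text))
    return [KEYWORDS[row["keyword"]](row["argument"]) for row in reader]

print(run_from_source(csv_source))
```

Whether the runner is a commercial tool or a few lines of glue code like this, the keyword test cases themselves remain unchanged, which is what preserves the design investment across toolsets.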

David W. Johnson

David W. Johnson, “DJ,” is a Senior Test Architect with over 25 years of experience in Information Technology across several business verticals, and has played key roles in business analysis, software design, software development, testing, disaster recovery, and post-implementation support. Over the past 20 years, he has developed specific expertise in testing and in leading QA/Test team transformations, delivering test architectures, strategies, plans, management, functional automation, performance automation, mentoring programs, and organizational assessments.

