Key Success Factors for Keyword-Driven Testing

Introduction

Keyword-driven testing is a software testing technique that separates much of the programming work of test automation from the actual test design. This allows tests to be developed earlier and makes them easier to maintain. Some key concepts in keyword-driven testing, illustrated in the example that follows this list, include:

  • Keywords, which are typically base level and describe generalized UI operations such as “click”, “enter”, and “select”
  • Business templates, which are typically high level, such as “login” or “enter transaction”
  • Action words, or “actions” for short, which can be either base level or high level; in their most general form, previously defined action words can be used to define higher-level action words
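
To make these concepts concrete, here is a hypothetical test fragment; the action words, field names, and values are purely illustrative and not taken from any particular tool. The first lines use high-level action words, and below them “login” is itself defined in terms of base-level keywords:

    login              user: jdoe         password: secret
    enter transaction  account: savings   amount: 500.00
    check balance      account: savings   expected: 1,500.00

    definition of “login” (user, password):
        enter    field: user name    value: user
        enter    field: password     value: password
        click    button: log in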

Keyword-driven testing is a powerful technique that helps organizations do more automated testing earlier in the testing process and maintain their tests more easily over time. As with any complex undertaking, there are “success factors” that can determine whether a testing effort will be successful. This paper outlines key success factors for keyword-driven testing, including base requirements, the vision for Automation, success factors for Automation, and how to measure success.

Base Requirements

There are numerous requirements that I consider to be “base requirements” for success with keyword-driven testing. These include:

  • Test development and Automation must be fully separated – It is very important to separate test development from test automation, because the two disciplines require very different skills. Fundamentally, testers are not and should not be programmers. Testers must be adept at defining test cases independent of the underlying technology used to implement them. Technically skilled individuals, the “Automation people” (Automation engineers), will implement the action words and then test them.
  • Test cases must have a clear and differentiated scope – Each test case should have a clearly differentiated scope, verifying one well-defined aspect of the system, and it should not deviate from that scope.
  • The tests must be written at the right level of abstraction – Tests must be written at the right level of abstraction, whether the higher business level, the lower user-interface level, or a mix of both; the test tools should be flexible enough to support whichever level is appropriate. The sketch after this list shows one way the two levels can relate.
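
As a minimal sketch of how this separation and layering can look on the Automation side, here is a small Python example. FakeUI and its find_field/find_button methods are hypothetical stand-ins for whatever GUI driver or test tool is actually in use:

    # "FakeUI" is a hypothetical stand-in for a real GUI driver; an actual
    # implementation would talk to the application under test.
    class FakeUI:
        def find_field(self, name):
            print(f"locating field '{name}'")
            return self

        def set_text(self, value):
            print(f"  typing '{value}'")

        def find_button(self, name):
            print(f"locating button '{name}'")
            return self

        def press(self):
            print("  clicking")

    # Base-level keywords: thin wrappers around the UI technology,
    # implemented and maintained by the Automation engineers.
    def enter(ui, field, value):
        ui.find_field(field).set_text(value)

    def click(ui, button):
        ui.find_button(button).press()

    # A high-level action word composed from base-level keywords, so that
    # testers can express tests in business terms such as "login".
    def login(ui, user, password):
        enter(ui, "user name", user)
        enter(ui, "password", password)
        click(ui, "log in")

    login(FakeUI(), "jdoe", "secret")

The testers never need to see code like this; they only use the “login” action word in their test cases, while the Automation engineers own and maintain the implementations underneath.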

Vision for Automation

It is also important to have a clear vision for Automation. Such a vision should address items such as:

  • Having a good methodology – It is important to have a good, integrated methodology for testing and Automation that places testers in the driver’s seat. It is also important to employ the best technology that supports the methodology, maximizes flexibility, minimizes technical effort, and maximizes maintainability.
  • Having the right tool – Any tool that is employed should be specifically designed for keyword-based testing. It should be flexible enough to allow the right mix of high- and low-level testing. It should let testers build keyword tests quickly and easily, and it should not be overly complicated for Automation engineers to use when implementing the Automation.
  • Succeeding in the three “success factors for Automation” – There are three critical success factors for Automation that the vision should account for. They are:
    • Test design
    • Automation solution
    • Organization

Success Factors for Automation

Test Design

Test design is more important than the Automation technology, yet design is the most underestimated part of testing. It is my belief that test design, not Automation or a tool, is the single most important factor for Automation success. I have discussed test design at greater length in previous articles.

Comprehensive Automation Architecture

An Automation architecture should emphasize methodology over technology, as well as manageability and maintainability. The methodology should control and drive the technology, so that the technology supports the methodology rather than the other way around, and so that manageability and maintainability are never compromised. A minimal sketch of such an architecture follows.
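
To illustrate the principle, the core of a keyword-driven architecture can be as small as an interpreter that reads test lines and dispatches them to registered action-word implementations. This Python sketch is a simplified illustration under assumed conventions (tab-separated test lines, print statements standing in for real GUI calls), not the design of any particular product:

    # A minimal keyword interpreter: test lines (the methodology side)
    # drive registered action-word implementations (the technology side).
    ACTIONS = {}

    def action(name):
        """Register a function as the implementation of an action word."""
        def register(func):
            ACTIONS[name] = func
            return func
        return register

    @action("enter")
    def enter(field, value):
        # A real implementation would drive the GUI here.
        print(f"typing '{value}' into field '{field}'")

    @action("click")
    def click(button):
        print(f"clicking button '{button}'")

    def run_test(lines):
        """Execute tab-separated test lines: an action word, then its arguments."""
        for line in lines:
            word, *args = line.split("\t")
            ACTIONS[word](*args)

    run_test([
        "enter\tuser name\tjdoe",
        "enter\tpassword\tsecret",
        "click\tlog in",
    ])

Because the test lines are plain data, testers can keep writing and maintaining them without touching the implementation code underneath.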

Organization and Management

Organization and management are also very important. Success is highly dependent on how well you organize the process, including:

  • Management of the test process
  • Management of the tests
  • Efficient and effective involvement of stakeholders, users, and auditors

A plan of approach should be written for test development and automation. It should include items such as:

  • Scope, assumptions, risks
  • Methods, best practices
  • Tools, technologies, architecture
  • Stakeholders, including roles and processes for input and approvals
    …and more.

The “right” team must also be assembled. This team should include:

  • Test management, which is responsible for managing the test process.
  • Test development, which is responsible for the production of tests. Test development should include test leads, test developers, end users, subject matter experts, and business analysts.
  • Automation engineering, which is responsible for creating the Automation scheme for automated execution. Members of this team include a lead engineer as well as one or more Automation support engineers.
  • Support functions, providing methods, techniques, know-how, training, tools, and environments.

Within the team there should be a clear division of tasks and responsibilities, along with well-defined processes for decision making and communication.

Some Tips to Get Stable Automation

  • Make the system under test automation-friendly. While developers are not always motivated to do this, it pays off. In particular, ask development to add identifying property values to the GUI controls for Automated identification, such as “accessible name” in .NET and Java controls, or “id” in Web controls.
  • Pay attention to timing. In particular, use “active timing”, driven by the state of the system under test, rather than fixed amounts of “sleep” (see the sketch after this list).
  • Test your Automation. Develop a separate test set to verify that the actions work, and make separate people responsible for the Automation.
  • Use Automation to identify differences between versions of the system under test.
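
As an example of the first two tips combined, here is a sketch using Selenium WebDriver’s Python bindings. The URL and the “submit” id are hypothetical; the point is to wait actively on an id-based locator instead of sleeping for a fixed time:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical application under test

    # Active timing: wait (up to 10 seconds) until the control is actually
    # ready, instead of a fixed sleep that is either too short or too long.
    # The "submit" id is the kind of property development can add to help.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit"))
    )
    button.click()
    driver.quit()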

How to Measure Success

As with any major undertaking, it is important to define and measure “success”. There are two important areas of measurement for success – progress and quality.

Progress

You should measure test development against the test development plan. If goals are not being reached, act quickly to find the problems. Is the subject matter clear? Are stakeholders providing enough input? Is it clear what to test (overall and per module)? Is the team right (enough people, with the right mix of skills)?

You should measure Automation progress, looking at things such as implemented keywords (actions) and interface definitions (defined dialogs, pages, etc.).

You should measure test execution, looking at things such as how many modules have been executed and how many executed correctly (without errors).

Quality

Some of the key quality metrics include:

  • Coverage of system and requirements
  • Assessments by peers, test leads, and stakeholders (recommended)
  • Effectiveness
    • Are you finding bugs?
    • Are you missing bugs?
    • Can you find known bugs (or seeded bugs)?
    • After the system is released, what bugs still come up? You should consider calculating the “Defect Detection Percentage” (Dorothy Graham and Mark Fewster); see the example after this list
  • Mine your bug base for additional insights
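
The Defect Detection Percentage compares the defects found by your testing with the total number that eventually surfaced:

    DDP = defects found by testing / (defects found by testing + defects found later)

For example, if testing found 90 defects and 10 more surfaced after release, DDP = 90 / (90 + 10) = 90%.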

Conclusion

It is important to understand that keywords are not magic, but they can serve you well. What is more important is to take the effort seriously and “do it right”. Doing it right means that test design is essential, both global test design and the design of individual test cases. Automation should be done but it should not dominate the process. Automation should flow from the overall strategy, methodology, and architecture. It is also very important to pay attention to organization – the process, team, and project environment.

Following the success factors outlined in this paper can lead to a successful implementation of keyword-driven testing.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
