How to Turn Your Software Testing Team into a High-Performance Organization

This article was adapted from the presentation “How to Turn Your Testing Team Into a High-Performance Organization,” to be delivered by Michael Hackett, LogiGear Vice President of Business Strategy and Operations, at the Software Test & Performance Conference 2006 at the Hyatt Regency in Cambridge, Massachusetts (November 7–9, 2006).

Introduction

Testing is often viewed as an unmanageable, unpredictable, and unorganized practice with little structure. It is common to hear questions and complaints from development such as:

  • What is testing doing?
  • Testing takes too long
  • Testers have negative attitudes

Testers know that these complaints and questions are often unfair and untrue. Setting the development/testing debate aside, there is always room for improvement. The first step in improving strategy and turning a test team into a higher-performance organization is getting a grasp on where you are now. You want to know:

  • What testing is effective?
  • Are we testing the right things at the right time?
  • Do we need a staffing upgrade?
  • What training does our team need?
  • How does the product team value the test effort?

In this article we provide a framework for assessing your team, including: how to plan for an assessment, how to execute the assessment and judge your current performance, what to do with the information, and how to chart an improvement plan toward higher performance.

The Test Process Assessment

The goal of a test process assessment is to get a clear picture of what is going on in testing: the good things, the problems, and possible paths to improvement. Fundamentally, a test assessment is a data-gathering process. To make effective decisions we need data about the current test process. If done properly, the assessment will probably cross many organizational and management boundaries.

It is important to note when embarking upon such an assessment that the effort is much larger than the test team alone. Issues will arise over who owns quality and over what the goal of testing is. It is also important to note that, as a result of the assessment, work may actually increase. There may be:

  • More demands for documentation
  • More metrics
  • More responsibility for communication and visibility into testing

For such an assessment process to succeed, you need:

  • Executive sponsorship
  • A measurement program
  • Tools to support change
  • An acceptance of some level of risk
  • Avoidance of blaming testing for project-wide failures
  • Commitment about the goal of testing
  • An understanding of testing or quality assurance across the product team
  • Responsibility for quality

Components of a Test Strategy – SP3

A test strategy has three components that must work together to produce an effective test effort. We have developed a model called SP3, based on a framework developed by Mitchell Levy of the Value Framework Institute. The strategy (S) consists of three components:

  1. People (P1) – everyone on your team
  2. Process (P2) – the software development and test process
  3. Practice (P3) – the methods and tools your team employs to accomplish the testing task

The assessment itself proceeds in six phases.

Phase 1 – Pre-Assessment Planning: The goals for this phase are to set expectations, plan the project, set a timeline, and obtain executive sponsorship. The actions in phase 1 include meeting with the management of various groups, laying out expectations for the results of the process, describing the plan, and establishing a timeline. The intended result is agreement on expectations, buy-in on the assessment process, and a follow-up commitment to improvement. The phase 1 deliverables are a schedule and a project plan.

In phase 1 it is important to:

  • Get executive buy-in
  • Make a schedule and stick to it
  • Give a presentation on what you are doing, why, and what you hope to get out of it
  • Make a statement of goals or outline of work as a commitment
  • Make a scope document a pre-approval/budget deliverable

It is important to note up front that assessment is only the beginning of the process.

Phase 2 – Information Gathering: The goal of phase 2 is to develop the interview questions and surveys that will become the backbone of your findings. Actions in phase 2 include gathering documentation, developing interview questions, and developing a test team survey. The result of this phase is that you will be ready to begin the assessment using the documentation, interview questions, and test team survey. The deliverables are the complete development process documentation, the interview questions, and the tester survey.

Examples of the documentation to be collected include: SDLC documentation, engineering requirements documentation, and testing documents (test plan templates and examples, test case templates and examples, status reports, and test summary reports). Interview questions need to cover a wide range of issues, including (but not limited to): the development process, the test process, requirements, change control, automation, tool use, developer unit testing, opinions about the test team from other groups, expectations of the test effort, political problems, communication issues, and more.
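Once the tester survey comes back, the answers have to be collated into something comparable across questions. As a minimal sketch of that collation step (the question names and 1–5 Likert ratings here are hypothetical, not part of any prescribed survey format):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: each tester rates statements on a 1-5 scale.
responses = [
    {"requirements_are_clear": 2, "automation_is_effective": 3, "status_is_visible": 1},
    {"requirements_are_clear": 3, "automation_is_effective": 2, "status_is_visible": 2},
    {"requirements_are_clear": 1, "automation_is_effective": 4, "status_is_visible": 2},
]

def collate(responses):
    """Average each question's score across all testers, lowest (weakest) first."""
    scores = defaultdict(list)
    for response in responses:
        for question, rating in response.items():
            scores[question].append(rating)
    averages = {q: round(mean(ratings), 2) for q, ratings in scores.items()}
    return sorted(averages.items(), key=lambda item: item[1])

for question, average in collate(responses):
    print(f"{question}: {average}")
```

Sorting the averages lowest-first puts the weakest areas at the top, which is where the phase 4 findings discussion naturally starts.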

Phase 3 – Assessment: The goal of phase 3 is to conduct the interviews and develop preliminary findings. Actions include gathering and reviewing documentation, conducting interviews, and sending out and collecting the surveys. As a result of this phase there will be a significant amount of material and information to review.

Phase 4 – Post-Assessment: The goal of phase 4 is to synthesize all of the information into a list of findings. Actions include reviewing, collating, thinking, forming opinions, and forming hypotheses. The result of this phase is a list of findings drawn from all of the gathered information: the reviewed documentation, the interviews, and the survey. The phase 4 deliverables are the list of findings, collated survey answers, collated interview responses, a staff assessment, and a test group maturity ranking.

The findings can be categorized into:

  • People
    • Technical skills
    • Interpersonal skills
  • Process
    • Documentation
    • Test process
    • SDLC
  • Practice
    • Strategy
    • Automation
    • Environment
    • Tools

More subcategories may also be developed to suit your needs.
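The category/subcategory structure above maps naturally onto a small grouping step when assembling the phase 4 report. A sketch of that grouping, with hypothetical findings invented purely for illustration:

```python
# Hypothetical findings, each tagged with an SP3 category and subcategory.
findings = [
    ("People", "Technical skills", "Two testers have no automation experience"),
    ("Practice", "Automation", "Regression suite has not run since the last release"),
    ("Process", "Documentation", "Test plans exist only for the flagship product"),
    ("People", "Interpersonal skills", "Status meetings routinely skip the test lead"),
]

def group_findings(findings):
    """Nest findings under category -> subcategory for the findings report."""
    report = {}
    for category, subcategory, text in findings:
        report.setdefault(category, {}).setdefault(subcategory, []).append(text)
    return report

report = group_findings(findings)
for category, subcategories in report.items():
    print(category)
    for subcategory, items in subcategories.items():
        print(f"  {subcategory}: {len(items)} finding(s)")
```

Because the nesting mirrors the People/Process/Practice breakdown directly, adding a new subcategory later requires no structural change, just a new tag on the finding.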

Phase 5 – Presentation of Findings: The goal of phase 5 is to present the preliminary findings to the executives, the project sponsor, and the team, and to obtain agreement on the highest-priority improvement areas. It is important in this phase to be prepared for interpretations of the findings that are very different from your own. The deliverable for phase 5 is an improvement roadmap.

Phase 6 – Implementation of the Roadmap: The goal of phase 6 is to establish goals, timelines, milestones, and subtasks for the improvements agreed upon. The action of phase 6 is to develop a schedule for implementing the improvement plan. It is helpful at this point to get some aspect of the project implemented immediately so people can see tangible results right away, even if they are the smallest or easiest improvement tasks. The deliverable for phase 6 is implementation of the roadmap items according to the developed schedule.
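One way to make "quick wins first" concrete when building the phase 6 schedule is to rank roadmap items by impact per unit of effort. A sketch under assumed inputs (the tasks, effort estimates, and impact scores here are hypothetical examples, not recommendations):

```python
# Hypothetical roadmap items: (task, estimated effort in days, impact score 1-5).
roadmap = [
    ("Adopt a shared bug-report template", 2, 3),
    ("Introduce a nightly smoke-test run", 5, 4),
    ("Roll out a test management tool", 30, 5),
    ("Publish a weekly test status dashboard", 3, 4),
]

def schedule(roadmap):
    """Order tasks by impact per day of effort, so quick wins land first."""
    return sorted(roadmap, key=lambda task: task[2] / task[1], reverse=True)

for task, effort, impact in schedule(roadmap):
    print(f"{task} (effort {effort}d, impact {impact})")
```

With these numbers the two-day template change comes first and the thirty-day tool rollout last, which matches the advice above: visible results early, the big-ticket items once momentum exists.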

Conclusion

A test strategy is a holistic plan that starts with a clear understanding of the core objective of testing, from which we derive a structure for testing by selecting from the many testing styles and approaches available. Performing an assessment provides that clear understanding of the core objective of testing. Implementing the resulting improvement roadmap can substantially improve the performance of your software testing organization and help solidify your test strategy.

LogiGear Software Test & Performance Conference 2006 Presentations

Presentations to be delivered by LogiGear at the Software Test & Performance Conference 2006 include:

  • Wednesday, Nov. 8, 8:30 am to 10:00 am – “Effectively Training Your Offshore Test Team” by Michael Hackett
  • Wednesday, Nov. 8, 1:15 pm to 2:30 pm – “How to Optimize Your Web Testing Strategy” by Hung Q. Nguyen
  • Wednesday, Nov. 8, 3:00 pm to 4:15 pm – “Agile Test Development” by Hans Buwalda
  • Thursday, Nov. 9, 8:30 am to 10:00 am – “Strategies and Tactics for Global Test Automation, Part 1” by Hung Q. Nguyen
  • Thursday, Nov. 9, 10:30 am to 12:00 pm – “Strategies and Tactics for Global Test Automation, Part 2” by Hung Q. Nguyen
  • Thursday, Nov. 9, 2:00 pm to 3:15 pm – “The 5% Challenges of Test Automation” by Hans Buwalda

To register or for more information on STP CON, see: http://www.stpcon.com/

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed., 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

