2010 – 2011 LogiGear Global Testing Survey Results – Overview

I am Senior Vice President at LogiGear. My main work is consulting, training, and organizational optimization. I’ve always been interested in collecting data on software testing – it keeps us rooted in reality rather than some wonkish fantasy about someone’s purported best practice! Just as importantly, many software development teams can easily become myopic, seeing what they do as normal or “what everyone does.” I also wanted to do a survey to keep my view of testing practices current. I am always looking to incorporate the new ideas and methods I glean from data into the training and consulting programs I deliver for clients at LogiGear.

In 2009 and 2010, I conducted a large survey called “What is the current state-of-the-practice of testing?” I opened the survey in the first part of 2009 and collected data for an entire year. Invitations were sent to testers around the world – software testing is a global practice, with experts and engineers holding all sorts of ideas on how to do certain methods and practices differently – and I wanted to capture and understand that diverse cross-section of ideas.

Some of the data was pretty much what you’d expect, but for some sections – especially those on outsourcing, offshoring, automation, and Agile, to name a few – the answers were quite surprising.

This first article is designed to give you an introduction to my approach and some preliminary findings. Over the coming months I hope to share more data and my interpretations of those results.

The Goals

My goal in doing the survey was to move away from guesses about what is happening and what is common, and toward using actual data to provide better solutions to a wider variety of testing situations. First, we need to better understand the wide diversity of common testing practices already in use and how others are applying these processes and techniques, successfully or not. To make positive changes and provide useful problem-solving methods in software development, and specifically in testing, we need to know what is actually happening at the ground level, not what a CTO might think is happening or wants to happen!
Also, when I write a white paper or article I want it to reference and contrast real-world testing and software development. I hope this will help many teams check their practice against broader test/dev situations, as well as give realistic ideas for improvement based on what other teams and companies are really doing!

The Questions

I wrote the survey on a wide variety of current topics, from testing on Agile projects to opinions of offshore teams to metrics. The survey also featured question sets on the size and nature of teams, training and skills, the understanding of quality, test artifacts, and the politics of testing.

The Sample Set

This was a very large survey, with over 100 multiple-choice questions combined with several fill-in, essay-type responses. The survey was meant to be cafeteria style; that is, testers could choose the sections that applied to their work or area of expertise and skip those that did not apply to them, professionally or by interest. For example, there were sections for “teams that automate,” “teams that do not automate,” teams that self-describe as Agile, offshore teams, onshore teams, etc. No one was expected to complete the entire survey.

Some Sample Responses

Here are some preliminary findings from my survey. Analyzing the entire survey will take more time, but I wanted to put out a selection of findings to give you an idea of the type of information I will be sending out. I picked responses that were interesting because they confirmed ideas, or surprising because they reveal rarely discussed issues, poor planning, or old ideas. I’ve broken them down into four sections: “answers that I expected,” “conventional wisdom that seems validated,” “answers that did not appear uniform,” and some “surprising data” that was in some cases unexpected.

We received responses from 14 countries!

So here we go:

Answers that were along the lines I expected:

Question: Test cases are based primarily on
A – Requirements Documents: 62%
B – Subject Matter Expertise: 12%

The overwhelming majority of teams still begin their work by referencing requirements documents. However good or bad, complete or vague, most people start here. I did think the number of teams starting their test cases from workflows and user scenarios – using their subject matter expertise – would be higher. How a user completes a transaction or task is, I guess, still secondary to the requirement.

Conventional Wisdom that was validated:

Question: What is the name of your team/group?
A – QA: 48.8%
B – Testing: 20.5%

This is conventional wisdom, but it surprised me. There is a definite trend – at least in Silicon Valley – to move teams away from the outdated term “QA,” since the people who test rarely, if ever, really do quality assurance. If you are a tester and you think you do QA, please return to 1985. It is interesting, though, that the number calling themselves QA has dropped below 50% – as time goes on I expect this number will continue to drop.

60% of all respondents write test plans for each project

Here is some more conventional wisdom – this can be a great point of interest when you are debating whether to write a test plan for each project.

Far from Uniform Answers:

Question: Educational Level (selected responses)
A – High School: 3.0%
B – Bachelor of Arts/Sciences: 40.0%
C – Some Graduate Work: 19.0%
D – Master’s Degree: 24.6%
E – PhD: 3.0%

Conventional wisdom holds that the vast majority of people who test have university degrees, but I am surprised at how many have done postgraduate work, hold a master’s degree, or have a PhD. It runs against the conventional wisdom that testers are the least trained members of the development team – perhaps they are the most educated!

Surprising Data:

34% of all respondents indicated that their regression testing was entirely manual

A very big surprise to me: the lack of automated regression! Wow. That is one of the biggest and most surprising results of the entire survey! Why do one-third of teams still do all of their regression testing manually? Bad idea, bad business objective.

52% do not test their application/system for memory leaks

The number of teams not doing some variety of memory, stress, DR (disaster recovery), buffer overflow (where applicable), load, scalability, etc. testing was another big surprise. We need to look further into this. Is it bad planning? Lack of tools, skill, or knowledge? Keeping your fingers crossed? In many cases, I bet it is bad business planning.

87% of respondents rank offshoring or outsourcing as “successful”

Such a high number of people responding that offshoring and outsourcing were successful goes against the conventional wisdom that it’s the managers who like outsourcing/offshoring while production staff (the people who actually do the work) are not happy with it!

37% of teams say they do not currently automate tests, with 10% indicating they’ve never tried to automate

That over one-third of respondents currently do not automate tests is in line with what I see in my work at many companies, but it is contrary to popular belief and any sort of best practice. What I see out in the business world is that teams that automate think everyone automates, and that they automate enough; teams that do not automate see automation as uncommon, too difficult, or not something testers do. This number is way, way too high. Any team not automating has to look seriously at the service it is providing its organization, as well as the management support it is receiving from that organization!

Agile Series Survey Results From “The State of the Practice Survey”

As part of my ongoing series on Agile for testers – see this month’s article on People and Practices – I wanted to include the data I collected on Agile development and testing and give you a chance to view it.

Question 1

Have you been trained in Agile Development?
Yes: 47.8%
No: 52.2%

The fact that more than half of the respondents answered “no” here is troubling in many ways; let’s just stick to the practices issue. It is clear some of these organizations are calling themselves “Agile” with no reality attached. Whether you want to call them “ScrumButts” or refer to them as Lincoln’s five-legged dog, calling yourself “Agile” without implementing the practices and training on what it is all about is just not Agile! Attempting to be Agile without training the whole team in the why and how of these practices will fail.

Question 2

Since your move to Agile Development, is your team doing:
More Unit Testing: 50%
Less Unit Testing: 6%
The Same Amount of Unit Testing: 28%
I Have No Idea: 16%

There are many ideas to take from this. That more “unit” testing is happening in 50% of the responding organizations is a good thing! That more “unit” testing is happening at only 50% of the organizations is a problem. More troubling to me is that 16% have no idea! This is un-Agile on so many levels: a lack of communication, no transparency, misguided test efforts, a lack of information on test strategy, test effort, and test results, and a lack of teamwork!

Question 3

Does your team have an enforced definition of done that supports an adequate test effort?
Yes: 69.6%
No: 30.4%

This is encouraging. Hopefully the 30% without a good Done definition are not “ScrumButts” and will be implementing a useful definition of done very soon!

Question 4

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?
100%: 13.6%
80%: 27.3%
50%: 31.5%
20%: 9.1%
0%: 4.5%
No Idea: 13.6%

I won’t comment again on the “No Idea” answer, as that was covered above, but it is important to know that most Agile purists recommend 100% unit testing for good reason. If there are problems with releases, integration, missed bugs, or scheduling, look first to increasing the percentage of code that is unit tested!

The Results

The overriding result is that current testing practice is quite diverse! There is no single test practice, no one way to test, and no single preferred developer-to-tester ratio. Everyone’s situation was different, and even teams in similar situations had very different views on their product quality, work success, and job satisfaction!

My Future Plans

I plan to continue commissioning surveys as a regular way to take the pulse of what is really happening in the software development world with regard to testing, rather than relying on postulations from self-described experts. As noted above, because this was a very large survey, I will be publishing sections over the next few months. I look forward to bringing you the exciting, as well as troubling, trends I’ve drawn from the data I’ve collected.


Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

