
2010 – 2011 LogiGear Global Testing Survey Results – Overview

I am Senior Vice President at LogiGear. My main work is consulting, training, and organizational optimization. I’ve always been interested in collecting data on software testing – it keeps us rooted in reality and not some wonkish fantasy about someone’s purported best practice! Just as importantly, many software development teams can easily become myopic, assuming that what they do is normal or “what everyone does.” I also wanted to do a survey to keep my view on testing practices current. I am always looking to incorporate the data I glean about new ideas and methods into my training and consulting programs when I work with clients at LogiGear.

In 2009 and 2010, I conducted a large survey called “What is the current state-of-the-practice of testing?” I opened the survey in the first part of 2009 and collected data for an entire year. Invitations were sent out to testers from around the world – since software testing is a global practice, with experts and engineers holding all sorts of ideas on how to do certain methods and practices differently, I wanted to capture and understand that diverse cross-section of ideas.

Some of the data was pretty much what you’d expect, but for some of the sections – especially around outsourcing, offshoring, automation, and Agile, to name a few – the answers were quite surprising.

This first article is designed to give you an introduction to my approach and some preliminary findings. Over the coming months, I hope to share more data and my interpretations of those results.

The Goals

My goal in doing the survey was to move away from guesses about what is happening and what is common, and move toward using actual data to provide better solutions to a wider variety of testing situations. First, we need to better understand the wide diversity of common testing practices already in use, and how others are using these processes and techniques – for success or failure. In order to make positive changes and provide useful problem-solving methods in software development, and specifically testing, we need to know what is actually happening at the ground level, not what a CTO might think is happening or wants to happen!
Also, when I write a white paper or article I want it to reference and contrast real-world testing and software development. I hope this will help many teams in checking their practice against some broader test/dev situations as well as give realistic ideas for improvement based on what other teams and companies may really be doing!

The Questions

I wrote the survey on a wide variety of current topics, from testing on agile projects to opinions of offshore teams to metrics. The survey also featured question sets on the size and nature of teams, training and skills, the understanding of quality, test artifacts, and the politics of testing.

The Sample Set

This was a very large survey, with over 100 multiple choice questions combined with several fill-in, essay-type responses. The survey was meant to be cafeteria style – that is, testers could choose the sections that applied to their work or area of expertise and skip those that did not apply to them, professionally or by interest. For example, there were sections for “teams that automate,” “teams that do not automate,” teams that “self-describe as agile,” offshore teams, onshore teams, etc. So no one was expected to complete the entire survey.

Some Sample Responses

Here are some preliminary findings from my survey. Analyzing the entire survey will take more time – but I did want to put out a selection of findings to give you an idea of what type of information I will be sending out. I picked some responses that were interesting because they confirmed ideas, and others that were surprising because they point to rarely discussed issues, poor planning, or old ideas. I’ve broken them down into four sections: “answers that I expected,” “conventional wisdom that seems validated,” “answers that did not appear uniform,” and some “surprising data” that was in some cases unexpected.

We received responses from 14 countries!

So here we go:

Answers that were along the lines I expected:

Question: Test cases are based primarily on
A – Requirements Documents 62%
B – Subject Matter Expertise 12%

The overwhelming majority of teams still begin their work referencing requirements documents. However good, however bad, complete or too vague – most people start here. I did think the number of teams starting their test cases from workflows and user scenarios – using their subject matter expertise – would be higher. How a user completes some transaction or task is, I guess, still secondary to the requirement.

Conventional Wisdom that was validated:

Question: What is the name of your team/group?
A – QA 48.8%
B – Testing 20.5%

This is conventional wisdom, but it surprised me. It is definitely a trend – at least in Silicon Valley – to move teams away from the outdated term “QA,” since the people who test rarely, if ever, really do QA. If you are a tester and you think you do QA, please return to 1985. It is interesting, though, that the number calling themselves QA has dropped below 50% — as time goes on, this number will continue to drop.

60% of all respondents write test plans for each project

Here is some more conventional wisdom – this can be a great point of interest when you are debating: should I/we write a test plan for each project?

Far from Uniform Answers:

Question: Educational Level (selected responses)
A – High School 3.0%
B – Bachelors of Arts/Sciences 40.0%
C – Some Graduate Work 19.0%
D – Masters Degree 24.6%
E – PhD. 3.0%

It seems conventional wisdom that the vast majority of people who test have university degrees, but I am surprised at how many have done postgraduate work, have a master’s degree, or have a PhD. It runs against the conventional wisdom that people who test are the least trained on the development team; perhaps they are the most educated!

Surprising Data:

34% of all respondents indicated that their regression testing was entirely manual

A very big surprise to me! The lack of automated regression testing! Wow. That is one of the biggest and most surprising results of the entire survey! Why do one-third of teams still do entirely manual regression? Bad idea, bad business objective.

52% do not test their application/system for memory leaks

The number of teams not doing some variety of memory, stress, DR (disaster recovery), buffer overflow (where applicable), load, scalability, etc. testing was another big surprise. We need to look further into this. Is it bad planning? A lack of tools, skills, or knowledge? Keeping your fingers crossed? In many cases, I bet this is bad business planning.

87% of respondents rank offshoring or outsourcing as “successful”

Such a high number of people responding that offshoring and outsourcing were successful goes against the conventional wisdom that it’s the managers who like outsourcing/offshoring, while production staff (the people who actually do the work) are not happy with it!

37% of teams say they do not currently automate tests, with 10% indicating they’ve never tried to automate

That over one-third of respondents currently do not automate tests is in line with what I see in my work at many companies, but is contrary to popular belief and any sort of best practice. What I see out in the business world is that teams that automate think everyone automates, and that they automate enough. Teams that do not automate see automation as not common, too difficult, or not something testers do. This number is way, way too high. Any team not automating has to seriously look at the service they are providing their organization, as well as the management support they are receiving from that organization!

Agile Series Survey Results From “The State of the Practice Survey”

As part of my ongoing series on Agile for Testers – see this month’s article on People and Practices – I wanted to include the data I collected on Agile development and testing and give you a chance to view it.

Question 1

Have you been trained in Agile Development?
Yes 47.8%
No 52.2%

The fact that more than half of the respondents answered “no” here is troubling in many ways; let’s just stick to the practices issue. It is clear some of these organizations are calling themselves “agile” with no reality attached. Whether you want to call them “ScrumButts” or refer to them as Lincoln’s five-legged dog, calling yourself “agile” without implementing the practices and training on what this is all about is just not agile! Attempting to be agile without training the whole team in the why and how of these practices will fail.

Question 2

Since your move to Agile Development, is your team doing:
More Unit Testing? 50%
Less Unit Testing? 6%
The Same Amount of Unit Testing? 28%
I have no idea? 16%

There are many ideas to take from this: That more unit testing is happening in 50% of the responding organizations is a good thing! That more unit testing is happening at only 50% of the organizations is a problem. More troubling to me is that 16% have no idea! This is un-agile on so many levels — a lack of communication, no transparency, misguided test efforts — a lack of information on test strategy, test effort, and test results — and a lack of teamwork!

Question 3

Does your team have an enforced definition of done that supports an adequate test effort?
Yes 69.6%
No 30.4%

This is encouraging. Hopefully the 30% without a good definition of done are not “ScrumButts” and will be implementing a useful one very soon!

Question 4

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?
100% 13.6%
80% 27.3%
50% 31.5%
20% 9.1%
0% 4.5%
No Idea 13.6%

I won’t comment again on the “No Idea” answer, as that was covered above, but it’s important to know that most agile purists recommend 100% unit testing for good reason. If there are problems with releases, integration, missed bugs, or scheduling, look first to increasing the percentage of code that is unit tested!

The Results

The overriding result is that current testing practice is quite diverse! There is no single test practice, no one way to test, and no single preferred developer/tester ratio. Everyone’s situation was different, and even teams in similar situations had very different ideas about their product quality, work success, and job satisfaction!

My Future Plans

I plan to continue to commission surveys as a regular part of my effort to take the pulse of what is really happening in the software development world with regard to testing — rather than relying on postulations from self-described experts. As noted above, because this was a very large survey, I will be publishing sections over the next few months. I look forward to bringing you the exciting as well as troubling trends that I’ve drawn from the data I’ve collected.

 

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing. Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
