The ownership of quality has evolved: don’t get left behind
Welcome to our new feature in LogiGear Magazine! We will be doing a column in each issue on current topics and how to manage, deal with, and support your team through them.
This first installment of Leader’s Pulse is about making the move to DevOps. This is a large topic that will be covered over a few magazine issues. In this article I would like to cover two topics: a high-level mindset topic, the growing and evolving ownership of quality, and a low-level details topic, DevOps and its impact on test environments and data.
First, who owns quality? Of course, it’s a trick question: everyone owns quality; there is no single owner. And certainly the Test team does not own quality alone. Testers may be running more tests, and even sets of tests they do not own, such as performance tests. But the speed of DevOps, with its reliance on fast, consistent reporting and the rapid creation of environments, makes the broad understanding that everyone, at every single step, owns the quality of the delivered product an understanding worth repeating.
But if someone felt like arguing that there is a single team that owns quality, I would say it is whoever manages the product owners in your organization. The usual suspects include, but are not limited to, the Director of Development, the VP of Engineering, and the Product Manager. Once a product has a product roadmap and the team is sized, the Dev and Test teams have their first set of constraints on quality.
But that is not the subject for today.
Today’s topic of discussion is the growing and evolving ownership role in quality. DevOps pushes more people into the ownership-of-quality discussion. The move to Agile in the early 2000s showed unmistakably that Developers have a clear and big impact on quality through code reviews, pair programming, and unit testing, among their many other practices. Test teams have their practices and roles too: requirements and user story analysis, exploratory testing, design collaboration, test automation, and bug finding and reporting, among them.
The key to being Agile today is that most companies have implemented various principles of Lean. Lean Software Development (LSD) outlines seven principles; one is Quality at Every Step. This is sometimes referred to as “build quality in” or “build integrity in”. It means exactly what it sounds like: you have to build quality in at every step. This means quality user stories, quality code, quality unit tests, quality test cases, quality bugs, quality automated scripts, quality performance tests, quality environments, and quality data, among many other deliverables, at every step.
One of the things DevOps does is put a spotlight on automating every one of the reproducible quality practices, e.g., re-running unit tests and re-running the test team’s many automated suites. It also means things that were traditionally done at the end of the delivery process, e.g., performance and security tests, now have to happen much earlier. The decision to move those tests will have an impact and, often, a cost. That decision is a quality assurance decision. The benefit of moving security and performance testing earlier, into the Continuous Integration process, is that performance and security bugs can be found and fixed earlier, when they are cheaper to fix. And if Ops is in charge of environments, cloud infrastructure, containers, or whatever virtualized services you are using for environments and data, then obviously Ops owns pieces of delivering a quality product.
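To make the shift-left idea concrete, here is a minimal sketch, not from the article, of a staged pipeline runner: fast checks first, with formerly end-of-cycle checks such as security and performance moved into CI so failures surface while they are still cheap. All stage names and check functions are hypothetical stand-ins.

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run stages in order; stop at the first failure (fail fast)."""
    results = []
    for name, check in stages:
        ok = check()
        results.append(f"{name}: {'pass' if ok else 'FAIL'}")
        if not ok:
            break  # cheap, early feedback instead of a late surprise
    return results

# Hypothetical checks; a real pipeline would shell out to actual test runners.
stages = [
    ("unit tests", lambda: True),
    ("static security scan", lambda: True),     # moved earlier than release time
    ("performance smoke test", lambda: False),  # fails early, while cheap to fix
    ("full regression suite", lambda: True),    # never reached on failure
]

for line in run_pipeline(stages):
    print(line)
```

The ordering itself encodes the quality assurance decision discussed above: which checks run early, and which failures stop the line.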
Now let’s talk about great test environments and great test data.
I’ll start with a story of a client that LogiGear has been helping to push its development practices into the new millennium.
The team supported a complex system, with both legacy and new products running on different environments and completely integrated with data. The two most important ingredients in testing, the environments and the data, were a mess.
The builds were “pulled” rather than automated. The environments were managed by the IT team; the data was old, rarely scrubbed, and seldom mirrored production data.
Because the data had so little integrity, the number of “bugs” the Test team found inspired no confidence. There were also issues the team did not uncover because of the state of the test environments. As a result, Dev went on wild goose chases only to hit dead ends and throw the “bug” back to the Test team, essentially undermining the team’s credibility. Clearly there were many problems to fix.
The first problem was that the manager of the team was fighting for every ounce of help. He was the only person on the team who really understood how productive, responsive, and useful Test teams could be; most of the management team had been in place far too long, had no idea of the impact test teams can have on the bottom line, and consequently did not want to spend money on them. Instead, his time was mainly spent educating management (read: hitting his head against the wall), protecting his team, and making incremental change; only then could he move on to focusing on day-to-day tasks.
Ultimately, the environments-and-data mess was caused by finger-pointing between Dev and IT/Ops, made worse by management’s unwillingness to care about the problem or dedicate funds to fix it. Briefly, we fixed this problem by auditing and measuring how many “bugs” and time-wasting problems for Developers and testers were caused by testing on bad environments with bad data. We presented these findings to management, and we did not let anyone point fingers or assign blame. We explained that the fix lay in making sure the Test team had a dedicated IT person, that the hardware needed to be brought into this century, and that the build process needed to be automated, along with a more detailed set of fixes for the data. With my guidance, a decade-long nagging problem was completely fixed in under one month.
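The audit described above can be as simple as tallying closed bug reports by root cause and showing management how much of the "bug" volume was really environment or data noise. The sketch below assumes a tracker export with a root-cause field; the field names, cause labels, and sample records are all hypothetical, not from the client’s actual tracker.

```python
from collections import Counter

# Hypothetical export of closed bug reports tagged with a root cause.
bug_reports = [
    {"id": 101, "root_cause": "stale test data"},
    {"id": 102, "root_cause": "product defect"},
    {"id": 103, "root_cause": "environment misconfiguration"},
    {"id": 104, "root_cause": "stale test data"},
    {"id": 105, "root_cause": "environment misconfiguration"},
]

# Causes that waste Dev/Test time without indicating a real product defect.
NON_PRODUCT = {"stale test data", "environment misconfiguration"}

counts = Counter(b["root_cause"] for b in bug_reports)
wasted = sum(n for cause, n in counts.items() if cause in NON_PRODUCT)
print(f"{wasted} of {len(bug_reports)} bugs were environment/data noise")
```

Numbers like these, rather than finger-pointing, are what made the case to management in the story above.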
This was not 20 years ago. Even more alarming—it was a fairly recent client. I am happy to say this team is now in much, much better shape. Everyone is happier—Dev, testers, and management. I hope you do not have these problems.
DevOps, or even Agile for that matter, will not tolerate this. DevOps shines a bright light on environments and data. If your team has environment or data problems, fix them now. We have known about these issues for a long time, and we hope they are gone from most organizations, but today the need to fix them is more pressing and, luckily, the fix is easier.
The reality of DevOps is a journey toward a set of ideals: greater collaboration, immediate feedback, greater productivity, and automating everything possible. It means giving your team the tools and resources to have easy, fast, production-like environments at all times, and great data to test against: mirrored, current, live, whatever you need. The data, like the environment, should be high quality, reliable, predictable, current, and as close to production data as is effective. Collaborate with the IT/Ops teams to solve these problems. Virtualized environments today are as common as automated builds: VMs, cloud, Platform as a Service. Fix these problems; there should be no excuses, especially when so many tools are available to help you.
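One common way to get data that is "as close to production as is effective" is to copy production records and scrub the personally identifiable fields while preserving shape and referential integrity. The following is a minimal sketch of that idea, assuming hypothetical field names; real scrubbing pipelines handle far more cases (formats, regulations, cross-table keys).

```python
import hashlib

# Hypothetical set of fields considered personally identifiable.
PII_FIELDS = {"name", "email"}

def scrub(record: dict) -> dict:
    """Replace PII values with stable pseudonyms so joins still line up."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # Deterministic digest: the same input always maps to the same
            # pseudonym, preserving referential integrity across tables.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            out[key] = f"{key}_{digest}"
        else:
            out[key] = value
    return out

prod_row = {"id": 42, "name": "Ada Lovelace", "email": "ada@example.com", "plan": "gold"}
test_row = scrub(prod_row)
print(test_row)
```

Because the pseudonyms are deterministic, the scrubbed data stays internally consistent, which is exactly what makes it useful for testing in place of raw production data.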
Leading the organization and/or test team into the DevOps era is a big task. It starts with a change in culture: make sure everyone is on the same page with Quality Assurance practices throughout the development cycle. Making significant, incremental change is the key; we cannot change the world overnight. Tackling environment and data problems is not easy, but it is a great place to start to make the Test team much more productive and able to trust its results and reporting on a more consistent basis.
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).