
LEADER’S PULSE: THE CHANGING DYNAMIC OF MEASUREMENT AND ITS COMPETING FORCES

By Michael Hackett

I have been leading and managing software projects for a long time, and measuring the product and the process has always caused great consternation. At its worst, we measured everything we could possibly measure. We tracked so many things that, in the end, I always doubted whether any of it led to higher-quality software or process improvement; in actuality, it was there to make managers feel “safer.” Of course, someone always had to verify that bugs got closed and track the priority of deferred bugs, but in most cases the time companies spent tracking and reporting on those bugs cost far more than any benefit we got from them. I remember when Friday afternoon meant compiling all kinds of numbers, measurements, and metrics. All the test case and bug measures. Test cases executed, tests yet to execute. Automation runs. Passes and fails. New bugs opened, bug close rates. Find/fix rates. Metrics reports turned into dashboards distributed to the team and ignored by most. Few metrics on the dashboard were even discussed. Did it really lead to higher quality?

No other team on the product development side got measured or generated metrics on anything at all, but testers got scrutinized.

What did the business want to know? How did they decide when to release? Why wasn’t that the only measure? How did you measure product readiness?

First, let’s distinguish measure from metric. There are many definitions for these. I want to use a description to make our discussion easier: measurements are data. Metrics are derived from data.

In quality and product development we have been aware of many mottos about measurement:

  • If you can’t measure it, don’t do it!
  • What You Measure Improves
  • Management guru Peter Drucker is attributed as saying: “You can’t manage what you can’t measure.”

Why do we measure at all? Just to create busy work? To assess product readiness, or, very differently, team efficiency? To make projects more predictable?

Whatever the reasons for all these measures, they had better not be to compensate for mistrust of the team. If they are, no measurement will cure mistrust. We all have to stop producing useless, unused, non-actionable fluff.

On the other hand, I recently had a client who paid for an automation project that was successful and ran often. But the manager wanted to stop the project because he had no idea what had been done. The test team and outside consultants had built a large, successful test automation program, but the managers had no visibility into it. This was a problem.

I want to re-examine measuring in terms of the complete change in how we develop product and who is responsible for quality.

Developing software has changed- Agile

Then came Agile, which showed how tired development teams were of being measured. One of the Principles behind the Agile Manifesto is: “Working software is the primary measure of progress.” So… stop with all the dashboards.

By the book, Scrum has only one measurement: burndown. Flip it and you get the velocity metric. By the book, Scrum also has no bug tracking system; issues found during the development of a product are either fixed on the spot or become acceptance criteria on the user story. Once that user story is done and released into production, new issues become support tickets or a new user story. So the idea that people would capture and measure bugs and produce reports on them went out the window, which is now the case in many organizations. That said, I still typically find in large organizations a need for dashboards capturing all kinds of data, for better or for worse. The important thing to remember for this conversation is that, by the book, Scrum has only one measurement.
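That one measurement, and the metric flipped from it, can be sketched in a few lines. This is a minimal illustration with made-up sprint numbers, not data from any real project:

```python
# Scrum's one measurement: burndown of remaining story points per sprint
# day. Velocity is the derived metric: completed points per sprint.
# All numbers below are illustrative.

def burndown(total_points, completed_per_day):
    """Remaining story points after each day of the sprint."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

def velocity(points_completed_per_sprint):
    """Average story points completed per sprint (burndown 'flipped')."""
    return sum(points_completed_per_sprint) / len(points_completed_per_sprint)

print(burndown(40, [5, 8, 0, 10, 7]))   # [40, 35, 27, 27, 17, 10]
print(velocity([30, 34, 38]))           # 34.0
```

The point of the sketch is how little there is to it: one running subtraction and one average, versus the pages of Friday-afternoon metrics described above.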

When a team’s velocity went up, it was “Good job, developers!”

The very foundation of development has changed. The most important set of principles in Agile and product development today are the Lean practices. For example: cut waste and empower the team. Cut everything that is not absolutely necessary, and let the team decide what is and is not necessary. If the team decides all those dashboards are a waste of time and hold back productivity, cut them. Discussion over.

Most teams and corporations have moved away from dashboards. Still, the bigger the company, the bigger, less Lean, and often more useless the dashboards are. The best thing to do here is question how they are Lean and why they are used.


Developing software has changed again- Along comes DevOps

While many teams today are still wrestling with rolling out Agile across the organization, DevOps has arrived as an extension of Agile that changes how we get software product out the door.

DevOps demands immediate feedback to Dev teams. More than pure Agile/Lean calls for, but less than old-style dashboards, some type of feedback to the team has to be generated. Whether the conclusion is simply “the build passed” or “the build failed,” some measurements need to be taken to make those assessments. There need to be easy, quick, but meaningful measures of coverage. Part of immediate feedback is: no dashboards. Dashboards are not immediate. But something has to be measured and communicated to the team to show that testing completed and the build can progress to the next environment in the pipeline.

The need for immediate feedback is paramount: immediate feedback on builds and on the health of the system. A few important things to remember: immediate feedback needs to be built around actionable data, and the development and business teams need to understand readiness for production and the status and health of the system. It is not whatever information the test team feels like giving, such as the number of automated tests executed, the percentage of test cases automated, bug find/fix rates, or hours of manual testing. Those kinds of measurements are very old school, and I have not met a development team in a decade that cares about that information. So what kind of immediate feedback does the team need? The answer: the number of blocking issues, and whether the automated suites passed or failed, based on predetermined pass/fail criteria.
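A pipeline gate built on exactly those two inputs might look like the sketch below. The function name, result format, and criteria are hypothetical; the only point is that the entire feedback the team needs reduces to one yes/no decision:

```python
# Hypothetical pipeline gate: may this build progress to the next
# environment? Criteria (assumed): zero blocking issues and every
# automated test in the suite passed.

def gate(suite_results, blocking_issues):
    """Return True if the build may progress in the pipeline."""
    all_passed = all(result == "pass" for result in suite_results.values())
    return blocking_issues == 0 and all_passed

results = {"login_flow": "pass", "checkout_flow": "pass"}
print(gate(results, blocking_issues=0))  # True  -> promote the build
print(gate(results, blocking_issues=2))  # False -> block the pipeline
```

Everything else (counts of tests run, hours spent) is omitted from the gate on purpose: it is not actionable at this point in the pipeline.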

A new aspect of measurements

In the old days, test teams developed numbers to communicate to the team, and project managers would often ask the team for certain measures. In Agile, that stopped: the scrum master calculated burndown, then velocity. Now, in DevOps, there may be some measures Dev wants in order to progress in the pipeline, but more importantly, DevOps is a business-driven practice. The business side will ask for whatever measurements prove out, or make actionable, Continuous Delivery. Taking the measurements and metrics and reporting on them cannot be a time drain, nor can they be redundant. Productivity is most important. Keep it Lean.

It’s also important to look at when information is captured to be reported: pre-production information, which is what we have been focusing on, versus post-production information. In the DevOps world, capturing post-production information is called continuous monitoring.

Post production: Continuous monitoring

It is very common in DevOps to do a small amount of testing in production. Those tests are designed to run solely to assess the operation of the system, and sometimes to find bugs in the functionality of the live system. This is always a small set of tests.

The test team may run an automated suite to monitor certain functionality or workflows on the production system. This ensures that when a failure happens on the production system, the development team knows about it without waiting for lost transactions or for a user to call support.
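A production probe of this kind can be as small as a periodic request against a monitored workflow endpoint. The URL, the health endpoint, and the alerting hook below are all assumptions for illustration; the `fetch` parameter exists only so the probe can be exercised without a live system:

```python
# Sketch of a synthetic production probe: exercise one workflow
# endpoint and report pass/fail so the team hears about an outage
# before a user calls support. URL and endpoint are hypothetical.
import urllib.request

def probe(url, fetch=urllib.request.urlopen, timeout=5):
    """Return True if the monitored endpoint responds with HTTP 200."""
    try:
        with fetch(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:        # connection refused, DNS failure, timeout...
        return False

# Typical use, e.g. from a scheduler every few minutes:
# if not probe("https://example.com/checkout/health"):
#     page_the_on_call_team()   # hypothetical alerting hook
```

Note this is deliberately a pass/fail signal, not a dashboard: the one actionable fact is whether the workflow is up.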

Most continuous monitoring is done using different, non-intrusive tools to capture system-level and business measurements and would not be the responsibility of the test team.

Thinking about Automation has changed

Teams are finally understanding test automation better today. Re-running a suite of automated tests says less about the current quality of the system than it does about consistency. If the automation missed a bug the first time, it will miss that bug consistently every time you run it. Automated test suites, when they pass, tell teams that the tests that ran in the past give the same result now. If there are new bugs, the automated tests will not find them. If there are failing issues, interactions, or integrations not covered in the test suite, the automated suite will not find them. What these tests can show is that the system runs consistently and predictably with respect to the last run. To say that re-running the set of automated tests proves quality is a big and perhaps misleading claim.

Summary

There are competing pressures for measuring testing today. The biggest pressure is Lean. Keep measures to a minimum and don’t let the measuring or reporting impact productivity.  Be Lean and effective.

Changes in software development practices have had a big impact on measuring. Measuring should be mainly about product readiness for progress further in the pipeline. The measures a team uses for this are often driven by what information the business wants- not what test teams feel like giving them.


Michael Hackett

Michael is a co-founder of LogiGear Corporation and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), available in English, Chinese, and Japanese, and Global Software Test Automation (HappyAbout Publishing, 2006).

He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
