This article is part 2 of the 2-part series, Developer Testing? What is Testing and QA’s Place? Part 1 explored modern SDLCs, such as Agile, Scrum, and Lean, as they relate to the dynamic of Developer Testing. It also discussed the traditional role of the Developer, the “testing” responsibilities typically delegated to them, and how the motivation and mindset of Developers and Testers are fundamentally different. What we learned in part 1 is that Developer Testing is a complex and multifaceted issue. It’s not as easy as deciding, “Okay, we’re going to start having our Developers test!”
Part 2 of this series will explore the other side of this Developer Testing situation: The role and responsibilities of the Tester. It will also discuss the implications of changing these responsibilities for Developers, the people now expected to take greater responsibility for testing. In addition, part 2 will touch on the manager’s perspective. This will include things to keep in mind when embarking on a Developer Testing initiative, as well as tips for leaders looking to better equip their cross-functional teams.
Let’s dive into part 2 by discussing how Testers test.
I could write a lot about what testing involves and how Testers test, but I won’t. For the purposes of this article, I will focus on the kinds of tests that Test Teams are famous for: The tests that Developers rely on for robust code (which also happen to be the types of tests Developers usually dislike doing!). This is not simply validating an acceptance criterion in a user story, which today is commonly referred to as checking, as in “check that that thing works.” Instead, Test Teams should focus on doing aggressive testing.
If Test Teams generally take responsibility for testing after the code has been validated through unit tests, the first tests that they focus on are integration tests. Integration Testing at its simplest is one component talking to another, which many Developers already do. Past that simple understanding, it gets complicated and time-consuming: These tests range from simple API calls to complex workflows, transactions, tasks, and scenarios run under many different conditions and personas, with various data and data types, on any number of environments. The two ends of this spectrum are so different that some more structured companies gave them well-defined, separate phases: Development Integration Testing versus System Integration Testing. The number and variety of cases to be developed here will be extensive and complex. Now, you may be asking: Under the new ideas of Developer Testing, who does this type of testing? And, more importantly, who automates it?
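To make “one component talking to another” concrete, here is a minimal sketch of an integration test in Python. The component names (InventoryService, OrderService) are hypothetical stand-ins, not a real API; the point is that the assertion checks a side effect crossing a component boundary, which no unit test of either component in isolation would cover.

```python
# A minimal sketch of Integration Testing at its simplest: one component
# talking to another. InventoryService and OrderService are hypothetical.

class InventoryService:
    """Tracks stock levels by SKU."""
    def __init__(self, stock):
        self._stock = dict(stock)

    def available(self, sku):
        return self._stock.get(sku, 0)

    def reserve(self, sku, qty):
        if self.available(sku) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self._stock[sku] -= qty

class OrderService:
    """Places orders by reserving stock through InventoryService."""
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, sku, qty):
        # The integration point under test: one component calling another.
        self._inventory.reserve(sku, qty)
        return {"sku": sku, "qty": qty, "status": "confirmed"}

def test_order_reserves_inventory():
    inventory = InventoryService({"WIDGET": 5})
    orders = OrderService(inventory)
    order = orders.place_order("WIDGET", 2)
    assert order["status"] == "confirmed"
    # The key assertion crosses the component boundary: the order's side
    # effect must show up in the other component's state.
    assert inventory.available("WIDGET") == 3
```

From here, the same shape scales up to real API calls, databases, and environments, which is exactly where the time and complexity grow.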
There are 2 ideas I want to expand upon and further clarify:
- Integration Tests. When I use the word “workflow,” you may use a multitude of other terms: transaction, path, use case, task, end-to-end, or scenario. They all overlap. In general, what I am talking about here is multiple functions, not individual ones. A user would do function 1, then function 2, then function 3, then go back and edit function 2, then do functions 3, 4, and 5. Whatever you call that kind of real usage, someone has to test it. These tests usually get done in System Testing, which you may call the Beta Phase, User Acceptance Testing, or Pre-Production Testing; however you define that phase and the environments that Testers work on, these tests still need to get done.
- Agile Testing Quadrants. In addition, there is the Agile Testing Quadrants paradigm (Figure 1) created by Brian Marick. Q3 and Q4 contain the tasks traditionally done by Test Teams, and Q2 has some tasks very often done by Testers on Scrum Teams. Historically, I have seen great resistance from Developers when asked to take over these Q3 and Q4 testing tasks. Q1 and Q2 are more about preventing bugs and producing better code; Q3 and Q4 are about finding bugs and making customers happier.
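The multi-function workflow described in the first bullet can be sketched as a single test that walks the sequence, including the backtrack to edit an earlier step. The TaskBoard class here is a hypothetical stand-in for the system under test.

```python
# Workflow-test sketch: do function 1, then 2, then 3, then go back and
# redo function 2. TaskBoard is a hypothetical stand-in for a real system.

class TaskBoard:
    def __init__(self):
        self._tasks = {}
        self._next_id = 1

    def create(self, title):           # "function 1"
        task_id = self._next_id
        self._next_id += 1
        self._tasks[task_id] = {"title": title, "done": False}
        return task_id

    def rename(self, task_id, title):  # "function 2"
        self._tasks[task_id]["title"] = title

    def complete(self, task_id):       # "function 3"
        self._tasks[task_id]["done"] = True

    def get(self, task_id):
        return dict(self._tasks[task_id])

def test_workflow_with_backtracking():
    board = TaskBoard()
    task = board.create("draft report")        # function 1
    board.rename(task, "draft Q3 report")      # function 2
    board.complete(task)                       # function 3
    board.rename(task, "Q3 report (final)")    # back to function 2 afterwards
    # The end state must survive the out-of-order edit.
    assert board.get(task) == {"title": "Q3 report (final)", "done": True}
```

Each unit test of create, rename, or complete could pass while this sequence still breaks; that is why someone has to write and run the workflow version.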
When schedules get compressed and time gets tight, this testing tends to get lost. It’s important that organizations plan ahead to ensure that these important tasks are not dropped in a time crunch or ignored when transitioning to Developer Testing. Remember, it is very common to find bugs in this phase. It’s in bad situations like these that finger-pointing happens over integration issues found too late. Worse, if these tests don’t happen at all, you are assured to miss problems and face costly support issues.
The Value of Tester Testing: How Testers Test
It’s no secret that a majority of bugs are uncovered by the testing done by Testers––and not by validation. For example, Exploratory Testing, which is historically performed by Testers, is where most bugs are found. This is for a multitude of reasons, but some in particular include:
- “We missed that in the user story.”
- “We did not think of that situation.”
- “We only had time for Happy Path validation, not real-world exploring.”
This also happens to be the main area where most Developers need to take a deep breath and get over the hurdle of Exploratory Testing. And, contrary to popular belief, Exploratory Testing is not simply monkeying around! Exploratory Testing applies the Lean just-in-time principle to test design. Sometimes, people think Exploratory Testing has no test design or test engineering because it is done “on the fly.” This could not be further from the truth.
Exploratory Testing is the most sophisticated and creative testing we do. It’s just-in-time test design:
- Make a hypothesis,
- Run a test,
- Observe what happens,
- Then, design another test.
Your test design happens at the time of execution: It’s simultaneous test design and execution. It’s critical to product quality, but it is also the most time-consuming.
This is where Developers have the most hurdles to cross. I know a lot of Developers who are great Exploratory Testers, especially when they combine their white-box, code-level knowledge, their “how the environment and system work” gray-box knowledge, and their “what users do” black-box knowledge. On the other hand, I also know Developers who struggle with exploratory tests. To use an extreme example, say you have a test that uses boundary data in a complex workflow under a stress load, on a corner-case environment, browser, or mobile device. This situation could be 100% real and possible, albeit rare. In my work training teams, I have had more than a few Developers say, “I just don’t think that way. When you say it, it is a good case to test; but I would never have thought of it.” For Test Teams, this is bread and butter. This is a gap that needs bridging if you’re expecting your Developers to take on more of the testing tasks.
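To illustrate the kind of case a Tester reaches for instinctively, here is a boundary-value sketch. The 1-to-100 quantity rule is a hypothetical example; the habit being shown is testing at and just beyond each boundary, not only the happy path in the middle.

```python
# Boundary-value sketch: probe each edge of a rule, plus one step past it.
# The 1..100 quantity rule is a hypothetical example.

MIN_QTY, MAX_QTY = 1, 100

def validate_quantity(qty):
    """Accept quantities from MIN_QTY to MAX_QTY inclusive."""
    return MIN_QTY <= qty <= MAX_QTY

# At each boundary and just beyond it, not only the comfortable middle.
boundary_cases = [
    (MIN_QTY - 1, False),  # just below the lower bound
    (MIN_QTY,     True),   # exactly the lower bound
    (MIN_QTY + 1, True),
    (MAX_QTY - 1, True),
    (MAX_QTY,     True),   # exactly the upper bound
    (MAX_QTY + 1, False),  # just above the upper bound
]

def test_quantity_boundaries():
    for qty, expected in boundary_cases:
        assert validate_quantity(qty) == expected, f"qty={qty}"
```

In an exploratory session, a Tester would feed these same boundary values through the complex workflow, not just the lone validator, which is where the hard-to-imagine bugs live.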
Exploratory Testing can be fun and creative, but it takes a lot of time to do well. Yes, Development Teams can get much better at Exploratory Testing, and learn to “break it” if that is a goal. But it does not happen by itself: It needs budgeted time, some creativity, training, and, more importantly, a culture to support it.
The Value of Automation
If there is one responsibility that Testers have historically had that needs to be redistributed in this new world of “Developer Testing,” it’s Test Automation. Despite Automation being a well-known and well-adopted practice at this point in Software Testing, there are still 2 big, consistent issues with Automation that I see in my work:
- A relentless drive to automate more and more.
- Everyone underestimates the cost of maintaining big Automation suites––maintenance kills!
Automation is needed. Automation is essential. The more Automation you can do, the better. But, make sure it is smarter Automation. Thoughtful Automation. Your Automation programs need to be strategized, designed, and built knowing that maintenance cost, time, and frustration can kill any trust or savings benefit Automation was supposed to add.
Test Automation is software development. It’s not production code, but it is software. All of the code that goes into your Test Automation suite needs design, engineering, creation, management, execution, maintenance, bug fixing, environments, data management, analysis, and reporting, all of which are super time-consuming.
Here are the situations that I see with organizations’ Automation suites:
- Little to no Automation, so the only Test Automation becomes low-level, unit/component-level Automation. This is great, but it should never be the only part of a Test Automation strategy.
- Automation is developed, but it is not well-engineered or valued.
- Developers inherit mediocrely engineered Automation suites that, treated as a part-time investment, lose value and are not trusted.
In order to be successful and effective, Test Automation needs an end-to-end strategy. How much unit testing will you do? How much service-level testing? What about UI Automation? All of these levels depend on your product, users, platform, and other parameters that make your Test Automation strategy unique. Test Automation also needs investment, time, management, and skill to have value. In addition, visibility into the suite’s runs, consistency, changes, and failures will earn the organization’s trust. Like I stressed before, keep in mind who is going to handle these tasks as you embark on your Developer Testing initiative. Like Exploratory Testing, integration testing, and all of the other testing that Testers do, each Automation level is unique and an important aspect of ensuring end-product quality that cannot afford to be missed.
The Concerns of Developer Testing
The idea of Developers testing their own code is great for what it is. It’s an additional quality practice, but it is not a direct swap for QA/Test Team-type testing (for more information on this, watch the on-demand webinar Balancing Developer Testing and Tester Testing in the Modern SDLC). For example, test-driven development (TDD) is joined in eXtreme Programming (XP) by the Pair Programming practice. I am not suggesting that if you do TDD, you have to do Pair Programming, but many people would suggest doing so. Pair Programming is an XP software development technique that pairs 2 programmers together at one workstation. One person, called the driver, writes the code while the other person, called the observer or navigator, reviews the code as it is written. Often, these 2 people will switch roles as the project goes on. The spirit behind Pair Programming is:
- Two heads are better than one.
- Simultaneous code-review.
- Someone else better look at my code before it goes out!
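For readers less familiar with the TDD half of that pairing, the red-green rhythm can be sketched in a few lines. The slugify function below is a hypothetical example invented for illustration, not something from this article.

```python
# TDD sketch: the test is written first and fails (red), then just enough
# code is written to make it pass (green). slugify is a hypothetical
# function used only to show the rhythm.

import re

# Step 1 (red): this test exists before slugify does, so the first run fails.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): the minimal implementation that makes the test pass.
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```

Notice how narrow this is: it drives out a single functional behavior, which is exactly why TDD alone cannot replace integration, workflow, or exploratory testing.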
I often hear concerns (really, complaints) from Developers that TDD and Pair Programming slow them down too much for their perceived value. I disagree, but that is not the concern here. The issue is relying on these Developer quality practices to cover for reduced, late-cycle Tester-type testing. If you cut Tester-type testing efforts, you need to replace them with additional quality practices, and even then you still risk a drop in product quality.
When it comes to Developer Testing, increased low-level Developer Testing is great for achieving shift-left and Quality at Every Step. It can streamline downstream testing, but it will never entirely eliminate it. It’s true that individual functional bugs found through unit tests are a huge cost savings; but integration, workflow, environment, platform, and especially user/customer-type issues and bugs will not be found by these low-level tests. From thorough requirements or user story analysis for TDD, to simply building the right thing, a Tester’s mindset is crucial to any project’s success. Test Teams and the work they do still play an important role in software development, even if the methodology driving your SDLC doesn’t call for a separate QA department or Test organization. It doesn’t matter if you change the tasks’ names to UAT, Q3 & Q4, or Alpha & Beta; whatever you call them, these testing tasks still have to get done. We cannot simply validate happy paths. We need to test, explore, and learn.
How to Succeed with Developer Testing: The Management Part
From a manager’s perspective, the shift-left paradigm supports the modern management flip: the shift of the manager’s role from ruler to supporter and coach of the team. From a more tactical perspective, it focuses on 3 of the 7 Lean Principles: Empower the Team, Amplify Learning, and See the Whole.
Empower the Team. With increased, effective low-level testing done by Developers, the functional validation tasks have also shifted left to Developers. After this, who does integration testing? Who does the workflow testing and exploration? Who automates it? If you are adhering to the “you built it, you own it” idea, the Development Team of a particular feature set owns all the testing tasks through production. If you are using the Agile Testing Quadrants idea of Q3 & Q4 testing, or System Test or UAT Teams, those teams need close collaboration with the Development Team. This way, they can optimize, transfer knowledge, and better engineer Automation, whether it’s outsourced or in-house.
Amplify Learning. Amplify Learning applies to SDLC culture just as much as it applies to practices. Focus on open communication and collaboration. You want to foster learning about quality, but also about testing, test case design, smarter Automation, Exploratory Testing, and Forced Error Testing––all of those aforementioned testing tasks that need to get done. You may also want to build skills in different testing methods. This may allow your Developers to think about their tests in ways that they may not have done before (remember the previous Developer who “didn’t think like that”?).
See the Whole. One of the most important things you can do as a leader is see and communicate the whole to the entire team. Nothing empowers a team more than sharing the vision and knowledge. From customers and their needs, to POs and user stories, to Developer tests and environments, to the entire value chain, a successful product needs Quality at Every Step, with everyone on the team owning a piece of quality assurance. This can’t happen if everyone is not kept “in the know” about the project.
Looking for training opportunities and creative staffing solutions with full collaboration is a tough management task, especially with the dynamic staffing and productivity issues we have today. Planning for and creating a culture of innovation, creativity, exploration, and thinking, as well as knowing and understanding the customer, is essential. As a leader, it starts with you. The days of traditional management, where the leader was the project overlord, may be over, but it is still your job to coach and lead your team towards success.
As a leader, you know that SDLCs are constantly changing. Some people swear by SAFe or Scrum@Scale, others say they have graduated to Lean Software Development (LSD) or Kanban, and still others are embracing DevOps. Practices and tools are changing faster than ever before. With my years in software development, I know it is always culture that matters more than any set of practices. The organization’s and teams’ understanding of Quality at Every Step, shift-left, fail-fast, you build it you own it, or whatever Development Team culture you want, will go further in getting a great product built than adding a tactical practice.
The practices we focused on here concern Developers taking over testing tasks; this is great and useful as long as the many tasks Test Teams perform (other than functional validation) do not get lost. Someone still needs to test integration, workflows, end-to-end transactions, and error cases, among many others. Blocks of time should be set aside for “unexpected error” hunting, cross-platform testing, device testing, and the difficult Automation associated with them: blocks of time that won’t be eliminated by time pressures! These tests have a direct impact on support costs and the value your customers see in your system.
Test Automation needs to be approached as a separate engineering project as opposed to a means to an end. It needs design, execution, analysis, and, most of all, maintenance. This can be––no, this will be––time-consuming and sometimes difficult even in the best circumstances. That is okay. Just make sure you’ve properly planned for it.
Coaching and training are just as essential to success as the most basic collaboration. The effort of redistributing a block of unique skills (i.e., testing skills) to teams who may never have been tasked with complex, user-focused test design and Test Automation maintenance (i.e., Developers) must not be underestimated or under-planned.
LogiGear works with world-class enterprises to integrate scalable Automation, testing best practices, and more. Do you need to transform your SDLC? We’d love to help. We’ve recently created multiple resources on the topic of Developer Testing, including a 2-part webinar series that you can view on-demand!
Developers taking over many Testing responsibilities isn’t black and white: It’s complex, multi-faceted, and requires immense planning to get it done correctly. However, that doesn’t mean that it’s impossible! Start small, and always remember: Communication and collaboration can go a long way.