
Testing a Mission Critical System: The Way We Do it

For mission-critical applications, it’s important to develop, test, and deploy new features frequently while maintaining high quality. To guarantee that quality, you must have the right testing approach, process, and tools in place.

I’m currently working as an offshore consultant to a tier-one retailer in the USA, a tough and demanding client. These factors make the project a mission-critical one. So let’s look at how we test the system and the approaches we take.

The latest trend in QA testing is called “shift-left testing”. Simply put, we move all QA-related activities to the beginning of the sprint. In traditional approaches, most QA activities begin after development work is completed, so they focused on finding defects. But the cost of fixing those defects was high, because they were all identified at the end of the sprint.

Advantages of Shift-Left Testing

Since we are moving ahead with modern concepts and techniques, we now focus more on defect prevention than on defect finding. That means our work starts in the early stages of the sprint, as soon as requirement gathering begins. We review the user stories and screen mock-ups prepared by our business analyst (BA) team, and we report anything that is not aligned with the requirements or that differs from our understanding of them. We brainstorm with both the development team and the BA team until the requirements are finalized. The objective of this exercise is to bring all the teams to the same understanding of the requirements. It also aligns with the goal of DevOps, which is to improve collaboration between business stakeholders and the application development and operations teams.

While the development team starts its design and development work, we start our test scenario design concurrently. We use techniques like mind mapping and Functional Specification Data Mapping (FSDM) to capture the requirements correctly in our test scenarios. Once we complete that, we send the scenarios to the development and BA teams for review. If needed, we hold walk-through sessions with them as well. In the meantime, the QA team starts creating test cases from those scenarios. If there are any alterations or valid feedback from either the development or BA team, we incorporate them into our test cases. Manual test case writing and automation test scripting are performed simultaneously.

Testing Activities Throughout the Cycle

Given the nature of our application, we focus more on API automation, which covers ground more quickly than UI automation. So as soon as we get a working environment with the APIs deployed, we start scripting. Most of the time, this will be a local development environment. Once we receive the API documentation, we can finalize our automation scripts by adding the remaining assertions. Since most of these tasks happen simultaneously, test case creation as well as scripting will be complete by the time development finishes.
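To make this concrete, we often keep the HTTP call separate from the assertion logic, so the assertions can be filled in once the API documentation arrives. A minimal sketch of such a response check (the endpoint shape, field names, and rules here are hypothetical, not taken from our actual system):

```python
def check_product_response(status, body):
    """Validate a response from a hypothetical GET /products/{id} call.

    Returns a list of failure messages; an empty list means the
    response passed all checks added so far. More assertions are
    appended here as the API documentation is finalized.
    """
    failures = []
    if status != 200:
        failures.append(f"expected HTTP 200, got {status}")
    for field in ("id", "name", "price"):  # fields assumed for illustration
        if field not in body:
            failures.append(f"missing field: {field}")
    if "price" in body and body["price"] < 0:
        failures.append("price must be non-negative")
    return failures

# A well-formed response passes; a broken one reports each problem.
print(check_product_response(200, {"id": 42, "name": "mug", "price": 3.5}))  # -> []
print(check_product_response(500, {"id": 42}))
```

Keeping the checks in one function like this means the same script can run against the local development environment first and the QA environment later without changes.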

Another important activity we perform is “peer testing”. We test the application on local development environments while it is still under development. Whatever features the developers have completed, the QA team does high-level testing on them. We focus on application functionality rather than the UI. Of course, if we see an obvious UI issue, we report it, but we pay more attention to functionality. Whatever issues we find at this phase, we report quickly to the development team in a group chat. We also add them to a Google spreadsheet for tracking, so they can be fixed and retested right away rather than waiting a whole release cycle for a post-release fix. Since the release is not an official one, the bugs we find do not go into the official report either. The target is to find and fix bugs in the early stages. This is a very important milestone on the journey towards defect prevention.

After the development team completes development and unit testing, they send an official QA release to the QA team. We use a common release note template for all the applications, which was also a product of the QA team. Once a majority of the API-related functionality has been automated, we run the scripts overnight through our CI environment. The next morning, we start by verifying the automation status report and re-running the failed test cases. If we find any issues, they are tracked in our official defect tracking system. UI testing is focused more on the happy path, since we have covered the negative test cases through API automation. So testers get more time to do exploratory testing.
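One simple way to triage the overnight run is to pull the failed test names out of the JUnit-style XML report that most test runners can emit, and feed just those names back into a re-run. A sketch (the report snippet and test names are illustrative):

```python
import xml.etree.ElementTree as ET

def failed_tests(junit_xml):
    """Return 'classname::name' for every test case that failed
    or errored in a JUnit-style XML report."""
    root = ET.fromstring(junit_xml)
    failed = []
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failed.append(f"{case.get('classname')}::{case.get('name')}")
    return failed

# Illustrative overnight report: one passing case, one failing case.
REPORT = """
<testsuite tests="2">
  <testcase classname="api.orders" name="test_create_order"/>
  <testcase classname="api.orders" name="test_invalid_coupon">
    <failure message="expected 400, got 500"/>
  </testcase>
</testsuite>
"""

print(failed_tests(REPORT))  # -> ['api.orders::test_invalid_coupon']
```

The resulting list can be passed straight to the runner (for example, as a filter expression) so the morning re-run touches only the failures instead of the whole suite.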

Root cause analysis is done after each major release. We decide whether to go for another deployment or move the defects to the backlog. For this decision, we take into account factors like the severity and priority of the defects, the importance of the feature, and how soon the feature will be used in production. We also maintain a root cause analysis report for each major release. Whatever mitigation actions need to be taken are included in the same report, which is kept for future reference.

Once the testing work is complete, we share our test results with the client. These are needed to get managerial approval for the production deployment. The deployment is performed by the cloud ops team, but both the Dev and QA teams also participate in the deployment process.

Once the application is deployed, the QA team performs a high-level verification to make sure all the new features are included and existing functionality isn’t broken. This concludes a successful production deployment.
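That post-deployment check can be scripted as a small smoke test over a list of endpoints. In this sketch the endpoint list is a placeholder and the fetch function is injected, so the same check can run against any environment (or, as below, against a stub for demonstration); in practice the fetcher would perform a real HTTP GET:

```python
def smoke_check(endpoints, fetch):
    """High-level post-deployment verification: hit each endpoint
    and report the ones that do not return HTTP 200.

    `fetch` is a function taking a path and returning a status code;
    injecting it keeps the check environment-agnostic and testable.
    """
    broken = {}
    for path in endpoints:
        status = fetch(path)
        if status != 200:
            broken[path] = status
    return broken

# Placeholder routes -- replace with the application's real endpoints.
ENDPOINTS = ["/health", "/api/v1/products", "/api/v1/orders"]

def fake_fetch(path):
    # Stub standing in for a real HTTP GET; pretend /api/v1/orders broke.
    return 500 if path == "/api/v1/orders" else 200

print(smoke_check(ENDPOINTS, fake_fetch))  # -> {'/api/v1/orders': 500}
```

An empty result means the deployment passed the high-level verification; anything else is raised with the Dev and cloud ops teams before the deployment is declared successful.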

Sankha Jayasooriya
Sankha Jayasooriya is an IT Professional with more than 8 years of experience in the Software Quality Assurance field. He is an ISTQB certified professional specialized in service level testing, automated testing, and manual testing. His areas of domain expertise extend to retail, innovation, banking and finance, enterprise software, robotics, and mobile testing. Sankha is a co-author of the “Multi-Domain Supported and Technology Neutral Performance Testing Process Framework” white paper and is also a regular blogger on Genius Quality—Medium.
