Mobile apps are a necessity for companies of all sizes, and apps are getting more complex all the time. That, along with the dizzying array of devices, requires a well-thought-out mobile testing strategy. And it involves a bit of risk/reward analysis.
Weighing risk against reward will help you create a well-balanced mobile testing strategy. It’s important to consider platform mix, test execution, manual vs. automated testing, device management, and outsourcing as part of your strategy. With the industry growing so fast, testers must thoroughly understand both the application and its customers.
Start thinking about whether these items may fit into your strategy:
- Figuring out your target demographics
- Assessing emulator limitations
- Accounting for multitasking and integrations
- Striving for adequate automation for scaling
- Owning a manageable number of devices
- Outsourcing to reduce inventory demands
This article was originally featured as a Guest View piece on SD Times.
Mobile apps come with inherent risks. For usability, compatibility and responsiveness testing, what might be considered a minor issue on a laptop could be critical on a mobile device. People are generally hurrying, multitasking and have limited time and attention when using mobile devices, so it’s not only bugs that users tolerate poorly: buttons, menus and forms that are easy to use on a desktop can become small and frustrating when resized for mobile. Testing too many devices creates unnecessary expense; testing too few risks lost revenue from app abandonment. However, taking the time to understand the device ecosystem and the customers the application is designed for makes it possible to create a test strategy that balances risk and return.
The diversity in devices, operating systems and screen resolutions makes determining the right mix of devices to test complicated. A little basic data analysis will provide a lot of insight into determining the best device matrix. Three manufacturers account for 80% of devices used in the U.S.: Apple (43.5%), Samsung (28.7%) and LG (8.2%). Using that information and looking at specific target demographics can give a pretty good composite picture of the devices predominantly used by them (which will provide insight into the operating system version), and hence which ones to focus the majority of testing on. Also, the product type (such as business vs. consumer apps or games) will influence the target devices.
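The kind of basic data analysis described above can be sketched in a few lines. This is a minimal illustration, assuming you can export per-device usage shares for your target demographic from analytics or market reports; the device names and share figures below are invented placeholders, not real market data.

```python
# Sketch: derive a test-device matrix from usage-share data.
# Shares below are illustrative placeholders, not real figures;
# in practice they come from your app analytics or market reports.

device_share = {
    "Apple iPhone 14 (iOS 16)": 0.24,
    "Apple iPhone 13 (iOS 16)": 0.18,
    "Samsung Galaxy S23 (Android 13)": 0.15,
    "Samsung Galaxy A54 (Android 13)": 0.11,
    "Google Pixel 7 (Android 13)": 0.07,
    "LG Velvet (Android 12)": 0.05,
}

def device_matrix(shares, coverage_target=0.80):
    """Pick the smallest set of devices (highest share first) whose
    combined share of the target audience meets the coverage target."""
    chosen, covered = [], 0.0
    for device, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        if covered >= coverage_target:
            break
        chosen.append(device)
        covered += share
    return chosen, covered
```

The coverage target is the tuning knob for the risk/reward trade-off: raising it adds lower-share devices to the matrix, with diminishing returns on each addition.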
After identifying the device matrix, there is also the option to use a mix of emulators and real devices. The question of when (and when not) to use emulators vs. real devices has large testing implications, but few would dispute that nothing fully takes the place of testing on an actual device. Every tester would prefer to hold the device itself, and seeing page-load and performance issues on real hardware is the most reliable approach, but no team can physically test every device. Usability testing on emulators and on browsers with device-emulation extensions keeps improving, but it won’t always represent what users will see on the actual device. Emulators can be good for testing new functionality or a new component design, and they have some advantages over actual devices: logging faults and capturing screenshots are much simpler when working from a desktop, and some conditions that are hard to reproduce on real devices, such as low battery power, are easy to simulate.
Emulators also tend to be slower than real devices. The type of app being tested, and whether tests are manual or automated, can further limit what is practical on emulators. Native apps talk directly to the operating system, while Web apps talk to the browser, which in turn talks to the OS; the more layers there are, the slower the response time. With these limitations in mind, selective use of emulators is a way to increase test coverage at minimal cost.
Normally it is not practical or cost-effective to run the full functional test suite on every device. A practical approach is to run the full set of tests on one or two primary devices, then run a smoke test on additional devices to catch any obvious issues. How far to go, however, depends on the nature of the application: if the app is cutting-edge and may stress the device’s capabilities (processing power, memory, GPS, or other device-specific hardware), more extensive testing is in order.
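The tiered approach above amounts to a simple mapping from device to test suite. Here is a minimal sketch of such a plan; the device names and suite contents are hypothetical placeholders you would replace with your own matrix and suites.

```python
# Sketch of a tiered device test plan: full regression on one or two
# primary devices, smoke tests everywhere else. Names are placeholders.

FULL_SUITE = ["smoke", "functional", "usability", "performance"]
SMOKE_SUITE = ["smoke"]

primary_devices = ["iPhone 14", "Galaxy S23"]               # full coverage
secondary_devices = ["Pixel 7", "Galaxy A54", "iPhone SE"]  # smoke only

def plan(primaries, secondaries):
    """Return a mapping of device -> list of test suites to run on it."""
    runs = {d: list(FULL_SUITE) for d in primaries}
    runs.update({d: list(SMOKE_SUITE) for d in secondaries})
    return runs
```

Keeping the plan as data rather than hard-coding it into test jobs makes it easy to promote a device to the primary tier when analytics show its share growing.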
One thing to keep in mind when running basic tests is that most handheld mobile devices give priority to the communication environment. For example, an incoming phone call always receives priority over a running application. This makes it important to test the various events and the OS’ multitasking ability.
A mobile testing strategy is not complete without testing the integration between the application and back-end system. This is especially true when the release cycles of mobile apps and back-end systems are very different, which they often are.
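One lightweight way to guard that integration is a contract check: verify that the fields the mobile app depends on are still present, with the expected types, in the back-end response. The sketch below uses only hypothetical field names for illustration; a real check would be generated from the app’s actual data model.

```python
# Sketch of a contract check between a mobile app and its back-end:
# verify that fields the app depends on are present and correctly
# typed, so independent release cycles don't silently break the
# integration. Field names here are hypothetical.

REQUIRED_FIELDS = {            # field -> expected type
    "user_id": int,
    "display_name": str,
    "session_token": str,
}

def contract_violations(response: dict) -> list:
    """Return a list of missing or mistyped fields in a back-end response."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in response:
            problems.append(f"missing: {field}")
        elif not isinstance(response[field], expected):
            problems.append(f"wrong type: {field}")
    return problems
```

Run against a staging back-end on every mobile release candidate, a check like this catches breaking API changes before users do.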
Manual or automated
A lot of basic compatibility and basic functional testing can be done efficiently with manual testing, but when it comes to testing lots of devices and applications that need to be retested frequently, automation can be an efficient way to scale. The efficiency gain will depend on the experience and skill of the automation team—the standard disclaimer “results may vary” is even more applicable to mobile test automation due to all the variables. Also, various test automation tools will impact your choices of emulators vs. real devices.
A big challenge of mobile testing is sourcing and then managing devices. Creating the initial matrix is just the beginning: each manufacturer commonly introduces three or more new devices a year, and, on average, devices are upgraded every two years. For most companies this makes maintaining a device inventory impractical. The growing number of cloud service providers makes it possible to completely “outsource” device management, and that is often a good way to go. However, relying solely on device rental has its limitations. A middle ground is to own a manageable number of key devices for the majority of testing and use cloud devices for basic compatibility and functional testing. Deciding which devices to own still takes significant knowledge and research.
Fully outsourced option
Completely outsourcing mobile testing is a strategy that works well for a lot of organizations. This eliminates the challenges and headaches of managing and maintaining an inventory of mobile devices. Firms with mobile specialists typically understand the unique device and emulator testing nuances, and likely have mobile automation expertise as well. Better firms, because of their experience, can also help develop the device and testing matrix that will provide the optimum test coverage at the lowest cost.
Mobile is rapidly becoming the primary user interface; for organizations that have embraced the Mobile First movement, it already is. That means mobile testing will only grow in importance. A thoughtful approach and rational analysis will go a long way toward developing a mobile strategy that provides the right level of testing.
Author: Michael Hackett, Senior Vice President
Michael is also a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics.
Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), available in English, Chinese and Japanese, and Global Software Test Automation (HappyAbout Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries.