Testing Netflix on Android

When Netflix decided to enter the Android ecosystem, we faced a daunting set of challenges:

1. We wanted to release rapidly (every 6-8 weeks).

2. There were hundreds of Android devices of different shapes, versions, capacities, and specifications, all of which needed to play back audio and video.

3. We wanted to keep the team small and happy.

Of course, the seasoned tester in you has to admit that these are the sort of problems you like to wake up to every day and solve. Doing it with a group of fellow software engineers who are passionate about quality is what made overcoming those challenges even more fun. 

Release rapidly

You probably guessed that automation had to play a role in this solution. However, automating scenarios on a phone or tablet is complicated when the core functionality of your application is to play back videos natively while its user interface is HTML5, living inside the application’s web view.

Verifying an app that uses an embedded web view as its presentation platform was challenging, in part due to the dearth of available tools. We considered Selenium, Android NativeDriver, and the Android Instrumentation Framework. Unfortunately, we could not use Selenium or NativeDriver, because the bulk of our user interactions occur on the HTML5 front end. As a result, we decided to build a slightly modified solution of our own.

Our modified test framework heavily leverages a piece of our product code which bridges JavaScript and native code through a proxy interface. Though we were able to drive some behavior by sending commands through the bridge, we needed an automation hook in order to report state back to the automation framework. Since the web view does not expose the HTML document’s contents to native code, we decided to use the title element as our hook: we rely on the onReceivedTitle notification to communicate back to our Java code whenever JavaScript is executed in the HTML5 UI. Through this approach, we were able to execute a variety of tasks by injecting JavaScript into the web view, performing the appropriate DOM inspection, and then reporting the result through the title property.
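To make this concrete, here is a minimal sketch of such a title hook, assuming a WebChromeClient-based driver; the class and method names are illustrative rather than actual product code.

import android.webkit.WebChromeClient;
import android.webkit.WebView;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TitleHookDriver extends WebChromeClient {
    private volatile String lastResult;
    private volatile CountDownLatch latch;

    @Override
    public void onReceivedTitle(WebView view, String title) {
        // The injected script writes its result into document.title,
        // which Android reports back through this callback.
        lastResult = title;
        if (latch != null) {
            latch.countDown();
        }
    }

    // Injects JavaScript that evaluates a DOM expression and publishes
    // the result through the title property.
    public String query(final WebView webView, String domExpression, long timeoutMs)
            throws InterruptedException {
        latch = new CountDownLatch(1);
        final String script =
                "javascript:document.title = String(" + domExpression + ");";
        // WebView calls must happen on the UI thread; the test thread
        // waits on the latch until onReceivedTitle fires.
        webView.post(new Runnable() {
            @Override
            public void run() {
                webView.loadUrl(script);
            }
        });
        latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        return lastResult;
    }
}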

With this solution in place, we are able to automate all our key scenarios such as login, browsing the movie catalog, searching, and controlling movie playback.
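For illustration, a smoke test built on that sketch might look like the following; MainActivity, the getWebView() accessor, the app.search() bridge call, and the '.search-result' selector are placeholders, not real identifiers from the app.

import android.test.ActivityInstrumentationTestCase2;
import android.webkit.WebView;

public class SearchSmokeTest extends ActivityInstrumentationTestCase2<MainActivity> {

    public SearchSmokeTest() {
        super(MainActivity.class);
    }

    public void testSearchReturnsResults() throws InterruptedException {
        final WebView webView = getActivity().getWebView(); // assumed accessor
        final TitleHookDriver driver = new TitleHookDriver();
        getInstrumentation().runOnMainSync(new Runnable() {
            @Override
            public void run() {
                webView.setWebChromeClient(driver);
            }
        });

        // Drive the HTML5 UI through the bridge, then inspect the DOM.
        driver.query(webView, "app.search('house of cards')", 10000); // assumed JS API
        String count = driver.query(webView,
                "document.querySelectorAll('.search-result').length", 10000);

        assertTrue("expected at least one search result", Integer.parseInt(count) > 0);
    }
}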

While we automate the testing of playback, the subjective analysis of quality is still left to the tester. Using automation, we can catch buffering and other streaming issues by adding testability to our software, but at the end of the day we need a tester to verify issues such as seamless resolution switching or HD quality, which are hard to check with automation today and cost-prohibitive to attempt.
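As a rough sketch of what such a testability hook could look like (the listener API below is an assumption, not the player’s actual interface), the idea is to count rebuffering events during automated playback and fail the run if they exceed a threshold.

import java.util.concurrent.atomic.AtomicInteger;

public class BufferingWatcher {
    private final AtomicInteger rebufferCount = new AtomicInteger();

    // Called by the player (hypothetically) whenever playback stalls
    // to refill its buffer.
    public void onBufferingStarted() {
        rebufferCount.incrementAndGet();
    }

    // Fails the automated run if playback rebuffered too often.
    public void assertSmoothPlayback(int maxAllowedRebuffers) {
        int count = rebufferCount.get();
        if (count > maxAllowedRebuffers) {
            throw new AssertionError("playback rebuffered " + count + " times");
        }
    }
}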

We have a continuous integration system that runs our automated smoke tests on a bank of devices for each submit. With the framework in place, we can quickly ascertain build stability across the vast array of makes and models in the Android ecosystem. This quick and inexpensive feedback loop keeps the testing overhead of each release low, which enables a very rapid release cycle.
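A simplified sketch of fanning the smoke suite out to every attached device is shown below; the test package and instrumentation runner names are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DeviceBankRunner {
    public static void main(String[] args) throws Exception {
        for (String serial : attachedDevices()) {
            // Run the smoke suite on this device; -w waits for completion.
            new ProcessBuilder("adb", "-s", serial, "shell", "am", "instrument", "-w",
                    "com.example.smoke/android.test.InstrumentationTestRunner")
                    .inheritIO().start().waitFor();
        }
    }

    // Parses `adb devices` output: lines of the form "SERIAL\tdevice".
    private static List<String> attachedDevices() throws Exception {
        Process p = new ProcessBuilder("adb", "devices").start();
        List<String> serials = new ArrayList<String>();
        BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2 && "device".equals(parts[1])) {
                serials.add(parts[0]);
            }
        }
        return serials;
    }
}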

Device diversity

To put device diversity in context, we see around 1,000 different devices streaming Netflix on Android every day. We had to figure out how to categorize these devices into buckets so that we could be reasonably sure a release would work properly across all of them. The devices we choose to participate in our continuous integration system are based on the following criteria:

1. We have at least one device for each playback pipeline architecture we support (the app uses several approaches to video playback on Android, such as hardware decoding, software decoding, OMX-AL, and iOMX).

2. We choose devices with high-end and low-end processors, as well as devices with different memory capacities.

3. We have representatives for each major operating system version by make, in addition to devices running custom ROMs (most notably CM7 and CM9).

4. We choose devices that are most heavily used by Netflix subscribers.

With this information, we took stock of all the devices we have in house and classified them based on their specs, then figured out the optimal combination of devices to give us maximum coverage. We are able to reduce our daily smoke automation pool to around 10 phones and 4 tablets and keep the rest for the longer, release-wide test cycles.
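A toy version of that bucketing might look like the following, where one representative per (pipeline, CPU tier, API level) combination is chosen by popularity; the enums and the selection rule are assumptions rather than the actual criteria weights.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DevicePool {
    enum Pipeline { HARDWARE_DECODER, SOFTWARE_DECODER, OMX_AL, IOMX }
    enum CpuTier { LOW_END, HIGH_END }

    static class Device {
        final String name;
        final Pipeline pipeline;
        final CpuTier cpu;
        final int apiLevel;
        final long dailyStreamers; // popularity among subscribers

        Device(String name, Pipeline pipeline, CpuTier cpu, int apiLevel, long dailyStreamers) {
            this.name = name; this.pipeline = pipeline; this.cpu = cpu;
            this.apiLevel = apiLevel; this.dailyStreamers = dailyStreamers;
        }
    }

    // Picks the most-used device in each bucket as the daily smoke
    // representative; everything else stays in the release-wide pool.
    static Map<String, Device> smokePool(List<Device> lab) {
        Map<String, Device> pool = new LinkedHashMap<String, Device>();
        for (Device d : lab) {
            String bucket = d.pipeline + "/" + d.cpu + "/" + d.apiLevel;
            Device current = pool.get(bucket);
            if (current == null || d.dailyStreamers > current.dailyStreamers) {
                pool.put(bucket, d);
            }
        }
        return pool;
    }
}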

This device pool gets updated periodically to adjust to changing market conditions, and we maintain separate lists for phones and tablets. Several other phones are tested using automation and a smaller set of high-priority tests, while the core pool goes through the comprehensive suite of manual and automated testing.

To put it another way, when it comes to watching Netflix, any device outside that core pool can be mapped to one of the high-priority devices based on its configuration. This in turn helps us quickly identify the class of problems associated with a given device.

Small happy team

We keep our team lean by focusing our full-time employees on building solutions that scale, and automation is a key part of this effort. When we do an international launch, we rely on crowd-sourced testing services like uTest to quickly verify network and latency performance. This gives us real-world assurance that all of our backend systems are working as expected. These approaches give our team time to watch their favorite movies to ensure that we have the best mobile streaming video solution in the industry.

Amol Kher
Amol is the Chief Technical Officer at Wello, a startup that focuses on connecting users with trainers over live video. He is currently developing a mobile app for the product. Previously, he was the engineering manager for the tools and test team within the Netflix mobile team and was responsible for shipping the Netflix mobile app on the iOS, Android, and Apple TV platforms. Prior to Netflix, he was an early engineer on the Chrome for Android team at Google.
