WHO WE ARE
Founded in 1994 by top thought leaders in the software testing industry, LogiGear has completed software testing and development projects for prominent companies across a broad range of industries and technologies.
LogiGear provides leading-edge software testing technologies and expertise, along with software development services that enable our customers to accelerate business growth while having confidence in the software they deliver.
LogiGear is headquartered in the heart of Silicon Valley, with the majority of the software testing and software development staff located in Ho Chi Minh City and Da Nang, Vietnam. We are among the largest employers of software testing and development professionals in Vietnam, and our close partnerships with universities throughout the country allow us to attract and recruit top software engineering talent.
LogiGear continues to grow as companies realize the benefits of outsourcing their software testing and development. We have been listed among the fastest growing privately held companies by Inc. 500|5000 in 2009, 2012, 2013 and 2014.
The senior executive team has co-authored several top-selling books on software testing and test automation, including:
- Testing Computer Software, by Cem Kaner, Jack Falk and Hung Q. Nguyen
- Testing Applications on the Web, by Hung Q. Nguyen, Michael Hackett and Robert Johnston
- Integrated Test Design and Automation, by Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul Watters
- Global Software Test Automation, by Hung Q. Nguyen, Michael Hackett, and Brent K. Whitlock (foreword by Apple Computer co-founder Steve Wozniak)
An insider's guide to the AI and IoT testing process
Testing the internet of things is one thing, but AI takes it to the next level. A LogiGear executive shares what the company learned from its first serious foray into this world.
For software testers, change is inevitable and unlikely to stop anytime soon. And nowhere is that more true than when it comes to AI and the IoT testing process. For an inside look at what this is really going to be like, we asked Phuoc Nguyen, software testing engineer at LogiGear, about the company's recent experience testing an AI/IoT product from gaming company Anki. In the first part of this two-part series, we asked Anki Test Director Jane Fraser how it all worked from her perspective. Nguyen offers an insider's very detailed look at the AI and IoT testing process.
Is this your first serious foray into an AI/IoT testing process? What lessons can you share for other companies struggling to test these cutting-edge technologies?
Phuoc Nguyen: We have completed testing for other clients' embedded systems, but this is our first serious foray into the AI/IoT testing process. The first game we tested was a racing game where the robotic car is built with AI using the client's application on iOS and Android smartphones and tablets.
We learned a lot about AI. When we first started, we wondered how these cars build their intelligence and manage to do things like precisely identify a target in order to defeat an opponent. We learned that, from a player's perspective, defeating the AI was genuinely difficult, especially an AI car set to a high intelligence level: the higher the level, the smarter the car behaves.
At first, we assumed the intelligence was implemented inside the AI car itself. After some time testing, however, we saw that an AI car's intelligence actually comes from how the engineers write the application code. Through technology and algorithms, each car knows where it is on the racetrack, and where the other cars are, via an infrared camera on its underside. After scanning codes printed on the track, the car relays that information back to the smartphone or tablet via Bluetooth. The application uses that information to enhance the AI, allowing it to choose the most suitable weapons to attack opponents based on their positions on the racetrack.
Once we understood these factors, we developed strategies that included physical intervention to influence how the AI acts while testing. For example, when playing against the AI at its highest level, players (especially those with little experience in the game) couldn't win if their cars drove in front of the AI car. So we chose an AI car equipped with a forward-firing weapon, and whenever it fell behind the player car, we picked it up and placed it in front of the player car. That way, we could defeat the AI more easily. This scenario helped us develop a comprehensive test strategy.
In conclusion, an AI's intelligence depends on its human programming, not on a mind of its own. As a result, humans can create testing methods for it based on the rules the programmers made.
You used error guessing in the AI and IoT testing process. Can you expand on what that is, how you used it and how it helped?
Nguyen: Error guessing is a technique based on testers' experience: you guess which areas of the application are likely to be problematic. We usually use it to decide where the team should focus its testing effort, which makes the strategy more effective and avoids wasting time on stable areas. Based on the experience we gained over four years on the project, we understood what the AI did and how the system worked, so we could quickly find the weak points of the application, and of the AI, through informed assumptions and guesses. This saved a lot of time, as we concentrated on the questionable areas.
Stochastic testing is one technique you used. Can you expand on what this is and how you used it, and why is this particularly helpful for an under-14-year-old demographic and as part of the IoT testing process?
Nguyen: Stochastic testing (sometimes called monkey testing or random testing) is a technique in which a tester exercises the application randomly to find problems. Most of the players in our project are children under 14 years old. Thus, we often play the game the way a child would, to see whether the application can handle scenarios that rarely occur with adult users. For example, we test cases where a child could plausibly break the application in common real-world use: tapping two buttons at the same time, tapping one button repeatedly, tapping multiple buttons or links in quick succession, interrupting the game mid-action, or tapping everything on screen to see what it does (children don't usually go through the tutorial). Any of these actions may cause an application to get stuck or crash.
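The child-style random tapping described above can be sketched as a seeded monkey test. The `App` model and button names below are hypothetical, purely for illustration; a real harness would drive the actual UI, but the shape of the loop is the same.

```python
import random

class App:
    """A toy model of the application under test (hypothetical)."""
    def __init__(self):
        self.screen = "menu"
        self.crashed = False

    def tap(self, button):
        # A robust app must tolerate unknown or rapid-fire taps;
        # here, unrecognized buttons are simply ignored.
        if button == "start" and self.screen == "menu":
            self.screen = "game"
        elif button == "back":
            self.screen = "menu"

def monkey_test(seed, steps=1000):
    # Seeding the generator lets us replay a failing session exactly.
    rng = random.Random(seed)
    app = App()
    actions = ["start", "back", "settings", "unknown"]
    for _ in range(steps):
        app.tap(rng.choice(actions))
        assert not app.crashed, f"crash reproducible with seed={seed}"
    return app

if __name__ == "__main__":
    for seed in range(10):  # ten random play sessions
        monkey_test(seed)
```

The key design choice is the recorded seed: random testing is only useful if a failure found at 2 a.m. can be replayed deterministically the next morning.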
We've talked about the IoT testing process, so what about exploratory testing and AI? Everyone is wondering how to get their hands around AI testing. Can you be specific about how you approached this and what you learned from it?
Nguyen: Testing AI is a challenging task, since we didn't have much documentation on how Anki's AI was programmed. We had to discover and explore the product to get familiar with the AI and understand its behavior. While testing, we took notes and recorded the actual sessions, so when bugs occurred, we could review the recording to find the cause of failure. We observed the whole context: the environment, platform, device, the emotion on the robot's face, the robot's battery, the device's battery, and the game being played, and we used the game-testing experience we had gained during the project to narrow the "bug zone."
For example, we were testing a robot that is introduced as an intelligent character with a big mind: he can remember, be curious, explore and get to know people, almost like a human. Thus, we first focused our testing on the robot itself, since we assumed the intelligence lived there. However, after getting familiar with the AI and the application through this kind of exploratory testing, we found the intelligence is actually in the device application. Basically, the robot is a collection of lights, motors, sensors and firmware running on processors. The firmware's duty is to communicate with the application via the robot's Wi-Fi (the robot acts as a Wi-Fi access point) to store data persistently, run the motors and so on. Thus, whenever the firmware changes, we focus our testing on the communication between the app and the robot rather than on the robot's behavior alone.
How long did the IoT testing process take, and do you have an idea of how many tests total were run?
Nguyen: Actually, this is a difficult question to answer. There are no written test cases for the test types above, since those techniques rely on each tester's experience. One certainty, however, is that we do write test cases for functional testing and smoke testing, where we apply test-case design techniques such as equivalence partitioning, boundary analysis, constraint analysis, state transitions and condition combinations. To date, the total for the three games we have tested is around 8,000 test cases. We combined all the test types (exploratory/ad hoc testing, error guessing, stochastic testing, functional testing and smoke testing) during the testing phase to make sure we had maximum test coverage.
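Equivalence partitioning and boundary analysis, two of the design techniques named above, can be shown with a tiny example. The age rule below is hypothetical (chosen to echo the under-14 audience), not an actual Anki requirement.

```python
def can_play_unsupervised(age):
    """Hypothetical rule: players 14 and over may skip parental mode."""
    return age >= 14

# Equivalence partitions: ages 0-13 (restricted) and 14-120 (unrestricted).
# One representative per partition plus the values at each boundary is
# usually enough; testing every age in a partition adds no new information.
boundary_cases = {
    0: False,    # lower edge of the restricted partition
    13: False,   # just below the boundary
    14: True,    # exactly at the boundary
    120: True,   # upper edge of the unrestricted partition
}

def run_boundary_tests():
    return all(can_play_unsupervised(age) == expected
               for age, expected in boundary_cases.items())
```

Off-by-one bugs cluster at exactly these edges (`>=` vs `>`), which is why boundary values earn their own test cases.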
We often hear about automating software testing. This seems counter to that argument because, for so much of this IoT testing process, you needed human testers and lots of them working together. Can you expand on those thoughts?
Nguyen: Test automation means using computer time to execute tests. It has many benefits: automated tests can run unattended (overnight), repeat reliably, execute faster, improve quality and increase test coverage, reduce the cost and time of regression testing, execute tests that can't be done manually, and support performance testing to assess software stability, concurrency, scalability and so on.
We cannot apply test automation to the AI itself, since automation is useful mainly for stable systems with written test cases, whereas AI behavior is complicated and random, which makes AI testing better suited to manual execution. However, elsewhere in our current project for Anki we did apply test automation, and it helped the team free up time.
For example, one week after a release, we clean up crash reports in Jira that no longer occur on the latest release. Crashes are errors generated and sent to the server when a player's game crashes, and they are logged in Jira automatically. We created a bot that queries Jira for crashes that no longer happen on the latest release, adds a comment and closes them. Thousands of bugs are closed that way after each weekly release, which frees up manual testers' time. Another example: we run unit tests whenever we have a new build to check basic functions and make sure the build is testable. This also saves time, since we don't have to wait for manual testing to discover a broken function; that check is already covered by the unit tests. One final example: we run daily regression tests for the website where the client's products are sold. The website is quite stable now, but we still run regression tests every day (overnight) to make sure it keeps working, since developers sometimes make minor changes, and a manual tester cannot execute thousands of test cases every day.
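The core decision the crash-triage bot makes can be sketched as a pure function. The issue-dictionary shape, field names and keys below are assumptions for illustration, not Anki's actual Jira schema; the real bot would fetch issues through Jira's REST search API and post comments and transitions through the same API.

```python
def crashes_to_close(issues, latest_release):
    """Return keys of crash issues not reproduced on the latest release."""
    stale = []
    for issue in issues:
        # A crash that was never seen in the latest release is considered
        # fixed or obsolete; the bot comments on it and closes it.
        if issue["type"] == "crash" and latest_release not in issue["seen_in"]:
            stale.append(issue["key"])
    return stale

# Hypothetical sample data: issue keys and version strings are made up.
issues = [
    {"key": "GAME-101", "type": "crash", "seen_in": ["1.8", "1.9"]},
    {"key": "GAME-102", "type": "crash", "seen_in": ["1.9", "2.0"]},
    {"key": "GAME-103", "type": "bug",   "seen_in": ["1.8"]},
]
print(crashes_to_close(issues, "2.0"))  # → ['GAME-101']
```

Keeping the decision logic separate from the Jira API calls is what makes a bot like this safe to run at scale: the rule that closes thousands of tickets can be unit tested without touching the tracker.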
What Makes Test Automation Successful?
An important but often underestimated part of software development is testing. Testing is, by definition, challenging: if bugs were easy to find, they wouldn't be there in the first place (although it should be noted that early in the SDLC various trivial bugs show up as well, of course). A tester has to think outside the box to find the bugs that others have missed. In many cases, understanding the business domain of an application is as crucial for effective testing as detailed knowledge of the application itself.
In open source projects, quality is typically addressed by contributors and coordinating architects. Tests of units, components, and services are often done effectively and automated well. This allows a project to move forward even when many contributions are made. Comprehensive automated testing with sufficient range and depth helps keep the product stable.
While some open source projects develop from the accumulating contributions of dispersed people, DevOps-oriented projects may follow a Scrum or Kanban approach that includes simultaneous development and release. This process also relies heavily on the comprehensiveness of tests and their seamless automation. Whenever there is a new version (which can be as small as a check-in of a single source file), tests should be able to verify that the system didn't break. At the same time, those tests shouldn't break themselves either, which for UI-based tests is not trivial.
The testing pyramid, proposed by Mike Cohn in his book Succeeding with Agile, positions the UI as the smallest part of testing. Most of the testing should focus on the unit and service or component levels. That makes tests easier to design well, and automation at the unit or component/service level tends to be easier and more stable.
I agree that this is a good strategy. However, from what I've observed on various projects, UI testing remains an important part. In the web world, for example, techniques like Ajax and frameworks like AngularJS allow designers to create interesting and highly interactive user experiences in which many parts of the application come together under test. The ultimate example of a UI-rich web application is the single-page application, where all or much of the application's functionality is presented to users on a single page. The complexity of such a UI can rival that of more traditional client-server applications.
I therefore like to leave some more room at the top of the picture, making it look like this.
Even for UI automation, the technical side can be fairly straightforward. There are simple open source tools like Selenium that can take care of interfacing with the UI, mimicking the user’s behavior toward the application under test. Tests through the UI are often mixed with non-UI operations as well, such as service calls, command line commands, and SQL queries.
The problems with UI tests come with maintenance. A small change in UI design or behavior can knock out large numbers of the automated tests that interact with it. Common causes are interface elements that can no longer be found and unexpected waiting times before the UI responds to operations. UI automation then gets avoided for the wrong reason: the inability to make it work well.
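One common remedy for both failure modes, vanished elements and unpredictable waits, is to keep every selector in a single page-object map and pair lookups with explicit waits, so a UI redesign touches one file instead of every test. The element names and CSS selectors below are hypothetical.

```python
# Logical element names map to (strategy, selector) pairs; individual tests
# never hard-code a selector, so a redesign means editing this one table.
LOCATORS = {
    "login_button": ("css", "button[data-test=login]"),
    "username":     ("css", "input[name=username]"),
}

def locator(name):
    """Look up a logical UI element; fail loudly on unknown names."""
    try:
        return LOCATORS[name]
    except KeyError:
        raise KeyError(f"unknown UI element: {name!r}") from None

# With Selenium, a lookup would pair with an explicit wait rather than a
# fixed sleep, e.g.:
#   WebDriverWait(driver, 10).until(
#       EC.element_to_be_clickable((By.CSS_SELECTOR,
#                                   locator("login_button")[1])))
```

The explicit wait addresses the timing problem (the test waits only as long as the UI actually needs), while the locator table addresses the lost-element problem (one edit repairs every affected test).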