HMI – A Tester’s Unique Entrée Into Testing The Software Car

The growing complexity of the Human-Machine Interface (HMI) in cars offers traditional testers an opportunity to capitalize on their strengths.

(Image: a software tester testing the diagnostics on a car)

The human-machine interface (HMI) is nothing new. Any user interface, including a graphical user interface (GUI), falls under the category of human-machine interface. However, the term HMI is now more commonly used to mean a view into the machine: a user interface to sensors, medical device systems, factory floor automation and processing systems, or oil field processing systems with software and sensors—systems that are not typical of mainstream computers.

When people talk about HMI and cars, they mean the interface that gives the driver a view into the various systems running on the car. Most cars today already have some kind of digital interface between the car machine and the human driver, and these interfaces are getting significantly more complicated.

For test teams tasked with testing as early as possible in the development cycle in particular, various tools, emulators, and simulators are needed to provide an interface to the running system, device, or service—typically not the common user interface delivered to end users. Some of the unique and interesting aspects of the HMI in cars include:

1. Most people don’t think of their dashboards as command centers

2. Drivers are getting more and more information delivered by this interface

3. Some of this information will not be fully understood by users

4. The user should pay less attention to the interface and more attention to the actual driving environment—a tension that can cause problems

5. The input to this interface will not be traditional. Voice, gesture, tap, double tap, and drag may each be used—all of which are complex enough by themselves; add to this the fact that the user will be interacting with the interface while driving a 2-ton metal weapon, and we see the need for streamlining these systems.

This is complicated. We need humans—probably not the designers—testing and using these systems. This article will walk you through what HMI is, examples of HMI in the car, software testing considerations, and software testing solutions for testing the car dashboard, or HMI.

What is HMI?

According to TechTarget, a human-machine interface (HMI) is the user interface that connects an operator to the controller for an industrial system—integrated hardware and software designed to monitor and control the operation of machinery.

Think of HMIs as the command center for the car. Unlike consumer computers, this interface gives you a view into the machine: it lets you see what the machine sees. We are all used to temperature, speed, battery level, or gas and oil level indicators; these views into the car machine, along with the software driving them, are getting increasingly complex, and this is causing problems for drivers. The software car interface is a control center for vehicle communication, navigation, car operation, entertainment, personal safety, telecommunications (voice, text), and more.

The HMI may also be the integration point or view into the integration of various systems. It may translate data from the machine to what information the driver will see. The integration will happen at a lower level, but the communication to the driver, and perhaps driver action before use of another system, will happen through the HMI—all while limiting or filtering what the driver sees to minimize distraction.

The need for testing the integration of systems, the graphical representation, and the various driver input methods—as well as the complexity and sequence of inputs and responses from the user—will be vast.

Integration Example: Airbags and HMI

If, for example, the airbags deployed in the car, this could set off an integration with a number of other systems. If the airbags deploy:

1. 911 (U.S. emergency services) could be alerted

2. The cameras in the car could turn on and record, to capture information and prevent anyone from leaving the scene without at least being photographed.

3. It could lock or unlock the car doors for personal security.

4. It could integrate with the voice system to ask if you would like emergency services called (an ambulance or the police).

Although some of this would be automatic, some would be managed through the human-machine interface—to minimize the effects of an accidental airbag deployment, for example, or to cancel an emergency services call after a minor fender bender. The testing of each of these systems would occur at the lowest level first, but the integration, exercised through the user interface, serves a different purpose with different testing goals.
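An integration test for a workflow like this might look something like the following minimal sketch. All names here (`CarSystems`, `deploy_airbags`, the event string) are hypothetical stand-ins for illustration, not a real automotive API:

```python
# A minimal sketch (hypothetical names) of an integration test for the
# airbag-deployment workflow described above.
from dataclasses import dataclass, field

@dataclass
class CarSystems:
    """Hypothetical stand-in for the integrated sub-systems behind the HMI."""
    emergency_alerted: bool = False
    camera_recording: bool = False
    doors_locked: bool = True          # assume doors lock while driving
    events: list = field(default_factory=list)

    def deploy_airbags(self):
        # Integration happens below the HMI: airbag deployment fans out
        # to the other sub-systems.
        self.emergency_alerted = True
        self.camera_recording = True
        self.doors_locked = False      # unlock for personal security
        # The HMI prompts rather than escalating automatically.
        self.events.append("hmi_prompt:call_emergency_services?")

def test_airbag_deployment_workflow():
    car = CarSystems()
    car.deploy_airbags()
    assert car.emergency_alerted
    assert car.camera_recording
    assert not car.doors_locked
    # The driver should be asked before emergency services are called.
    assert "hmi_prompt:call_emergency_services?" in car.events

test_airbag_deployment_workflow()
```

The point of a test at this level is the hand-off between sub-systems and the prompt surfaced to the driver, not the internals of any one component—those are validated lower down.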

Haptics Need a Unique Test Strategy

When mobile devices arrived on the scene, there was a new field for testing: haptics. How would a mobile user interact with the device? With one hand? With two? With fingers—by touch, by gesture, with a tap, a slide, a double tap? A wheel picker?

We can think of the notion of haptics as the input method. This conversation has been taken to an entirely new level with the automotive human-machine interface. If there are more and more systems feeding information to the driver—who needs to pay attention to the road, to pedestrians, to their passengers and their safety—how much interaction is required for the command center?

How will the driver interact? Will we all be saying, “Hey Siri,” to automatically park our cars? The haptic discussion around mobile devices did not need to revolve around distraction, attention, and safety as much as it does now with the software car. The variety of inputs, as well as the combinations of concurrent inputs, will make testing complicated and automation important—but automation, too, will be complicated. Test strategy will need to be examined for various components and situations to better conduct manual or automated testing.
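To see why combinations of concurrent inputs matter for test design, consider a quick enumeration. The input list below is taken from the article; adding even one more method grows the pair count noticeably:

```python
# Sketch: enumerating concurrent-input pairs as a starting point for
# test design. The input methods are those named earlier in the article.
from itertools import combinations

INPUT_METHODS = ["voice", "gesture", "tap", "double_tap", "drag"]

# Every unordered pair of inputs the HMI might receive concurrently:
concurrent_pairs = list(combinations(INPUT_METHODS, 2))
print(len(concurrent_pairs))  # C(5, 2) = 10 pairs to consider
```

Ten pairs is manageable by hand; cross them with driver states and driving contexts and the case count quickly argues for automation.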

Training Need for HMI?

As autonomous cars become more popular, we’re facing one large question: How much training will people need to operate a driving assistant car or a “driverless” car?

Do most drivers understand how “manual” braking systems work today? No. Do they have to? It seems not. But in learning to drive, people learn enough about braking to be safe. With autonomous braking systems, what do drivers have to learn? These automated braking systems are omnipresent. What braking situations do drivers still have to manage? When will automatic braking kick in beyond your control? What is assisted-driver braking, and what is autonomous-car braking? This is an example of an area where training and understanding of a now-automated system have gaps. I wonder how much the training systems, videos, and documentation have been tested alongside the system itself.

More important to us testers: How will these be tested? How do you test training, or the lack thereof? Documentation? Real-time decision making? Distracted or slow decision making? It seems like a manual test design puzzle with many permutations. The problems around testing the integrated systems can lead to an explosion of test situations and cases.
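The explosion is easy to demonstrate. Using the braking discussion above, here is a sketch that crosses just three small factors (the factor values are illustrative assumptions, not an exhaustive model):

```python
# Sketch: how a few small factors multiply into many test situations.
# The factor values are illustrative assumptions.
from itertools import product

driver_states   = ["attentive", "distracted", "slow_to_react"]
braking_modes   = ["manual", "assisted", "autonomous"]
road_conditions = ["dry", "wet", "icy", "gravel"]

scenarios = list(product(driver_states, braking_modes, road_conditions))
print(len(scenarios))  # 3 * 3 * 4 = 36 scenarios already
```

Each additional factor (speed band, traffic density, weather, time of day) multiplies the total, which is why techniques such as pairwise selection and risk-based prioritization become part of the strategy.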

Testing Considerations

The issues testers will face are normal testing issues: getting familiar enough with the system’s capabilities for effective test case design, finding bugs, exploring the system, developing error scenarios, and designing test cases for validation, errors, complexity, concurrency, and race conditions. This is all standard testing work. So are modeling the system, mixing various input methods, and developing oracles for expected behavior or outcomes—and, hopefully, the product design and engineering teams will welcome comments on ease of use, or usability.
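An oracle for an HMI check can be very simple. The sketch below is a hypothetical example of a display oracle for a speed readout; the tolerance value is an assumption for illustration, not a real requirement:

```python
# Sketch of a simple display oracle: the speed the HMI shows should
# track the sensor reading within a display tolerance. The 1.0 km/h
# tolerance is an assumed value for illustration.

def hmi_speed_oracle(sensor_kph: float, displayed_kph: float,
                     tolerance_kph: float = 1.0) -> bool:
    """Return True when the displayed speed is an acceptable
    rendering of the sensor reading."""
    return abs(sensor_kph - displayed_kph) <= tolerance_kph

print(hmi_speed_oracle(88.4, 88.0))   # True: within tolerance
print(hmi_speed_oracle(88.4, 92.0))   # False: display has drifted
```

Real oracles for an integrated HMI will be harder—covering sequencing, filtering, and what the driver should *not* be shown—but they reduce to the same pattern: a checkable statement of expected behavior.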

Testing Skills

Modeling the right use cases. Driving is social. How do you test that? While autonomously parking, you see a person walking a dog. Your camera may not see them, or they may be on the sidewalk—in the line of the camera but far from the curb. How do you test that situation? Most drivers will look at the person walking the dog, and once eye contact is made (you are both aware of each other), most drivers would proceed.

For hardware companies (automotive and device companies), testing unpredictable users and various input methods may be a difficult transition. Most software testers are used to random or unpredictable user input, but companies that make firmware, platform OSs, device drivers, etc. usually tightly scope what users are able to do. With automotive HMIs, complex input on the open road or in dynamic urban settings will be a challenge to test. Also important (and potentially new) for hardware companies are user-experience-focused tests.

So how are testers uniquely qualified to do this variety of testing? Test engineers have a very long history of acting like a user and simulating the user’s behavior. With issues of attention, multiple input methods, and distraction, a human tester is invaluable when exploring the human-machine interface. Some of the testing through the HMI may be the first testing not done through emulators or mocked-up dummy interfaces.

There are two obvious places where testers are uniquely and historically qualified for this testing. First is testing the user interface—the actual interface, where the user and use cases (“I need directions”) are the focus, rather than API testing, unit testing, or lower-level component testing, which are more commonly done by developers. Second, also common for testers, is testing the integrated system: instead of individual components, the focus is on workflows, paths, or scenarios where information is handed off from one system or sub-system to another. Each sub-system has hopefully been validated at the component level, but the whole still needs system-level integration testing and validation.

The testing here is not only usability testing: it is integration testing, full-system testing, and user acceptance testing. It is the level of testing that is not component validation but rather the creative, complex scenarios and paths a normal user or driver would go through in everyday operation. It is also creative, unexpected error testing and complex, difficult race-condition testing—by a slow decision maker or a distracted driver—styles of testing that test teams specialize in.

HMI Tools

The tooling and interfaces may be more varied than most testers expect, and they will evolve over the project. For anyone who has tested software on dynamic, developing hardware, the tools that provide the view into the machine change over the course of the project: from simple “home-grown” tools interfacing with one component; to simulators and emulators that simulate or emulate the device, the input, or “real-world” driving conditions; to more complicated, integrated HMIs; to—in this case—the fully functioning, complex system HMI. Various tools will be used at various stages of integration and development, and each will present situations for testers to design test cases, strategize manual or automated execution, and predict expected results. There are interfaces made by specific car manufacturers, and there are also many companies trying to become the standard. Please refer to this link showing some well-reviewed HMI systems. There are also infotainment tools running through the HMI, such as Apple CarPlay and Google Android Auto.

Recommendations & Summary

The software car’s human-machine interfaces are getting more and more complex. With drivers using varied input methods and reacting—sometimes quickly, sometimes slowly—while distracted, the varieties of testing needed here are vast. But they are also the specialties of traditional testers, who are experienced with fickle, variously skilled users doing things both right and wrong.

This level of testing is different from what many hardware companies are used to. The HMI may also be the first or only place where sub-systems integrate with other sub-systems to form bigger “workflows” for users, and those workflows need their own style of test design. These issues come together to make HMI testing a perfect entrée for testers in automotive system development.

Michael Hackett
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
