Testing Smoke Detectors

People rely on software more every year, so it is critical to test it. But one thing that gets overlooked (and should be tested regularly) is smoke detectors.

As the relatively young field of software quality engineering matures with all its emerging trends and terminology, software engineers often overlook that the software they test has parallels to something they should test regularly at home: their smoke detectors.

A silent smoke detector gives occupants peace of mind; no news is good news. But smoke detectors need to be tested periodically to ensure they are still alive and capable of saving lives. Software needs the same vigilance: bugs can result in a broad spectrum of consequences, from a wrong typeface to a catastrophic loss of life.

A smoke detector has essentially three components: a power supply, a smoke sensor, and an alarm unit. Each component is tested in a different manner, both individually and in combination. Similarly, modern software is divided into individual modules that are written by different developers, are constantly changed and replaced, and may not be compatible with each other.

The power supply, or battery, has a built-in unit test: the LED that indicates the battery has adequate voltage. The user’s role is to habitually verify that the LED is on. Modern smoke detectors can also throw the equivalent of an exception, a chirping noise or recorded voice message, when the battery is weak. Still, the LED and the low-power warnings test only the power supply.

The alarm unit is the main component exercised by the user’s manual test. An alarm unit emits an audible alarm when it receives an input current.

This is a fairly standard test case. The input conditions are that a smoke detector is installed, ready, and equipped with a battery. The input “data” is manual pressure on the test button. The expected output “data” is an audible alarm. The expected output conditions are that the device can be silenced manually and reset, also known as teardown tasks.
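
Expressed as code, this test case maps neatly onto the setup/input/assert/teardown structure of a unit testing framework. The sketch below is illustrative only: the SmokeDetector class is a toy model invented for this example, not real device firmware.

```python
import unittest

class SmokeDetector:
    """A toy model of the device, just enough to run the test."""
    def __init__(self):
        self.has_battery = False
        self.sounding = False
    def install_battery(self):
        self.has_battery = True
    def press_test_button(self):
        # The button feeds current straight to the alarm unit.
        self.sounding = self.has_battery
    def alarm_is_sounding(self):
        return self.sounding
    def silence(self):
        self.sounding = False

class ManualAlarmTest(unittest.TestCase):
    def setUp(self):
        # Input conditions: detector installed and equipped with a battery.
        self.detector = SmokeDetector()
        self.detector.install_battery()
    def test_button_press_sounds_alarm(self):
        # Input "data": manual pressure on the test button.
        self.detector.press_test_button()
        # Expected output "data": an audible alarm.
        self.assertTrue(self.detector.alarm_is_sounding())
    def tearDown(self):
        # Teardown tasks: silence and reset the device.
        self.detector.silence()

if __name__ == "__main__":
    unittest.main()
```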

The alarm unit is a black box under test. We are not concerned with how the alarm turns currents into sound, just that it sounds when triggered.

The alarm unit has presumably been unit tested at the factory. When we do a manual test of the smoke detector, we are doing an integration test of the partial system, verifying that two previously tested components, the battery and alarm unit, will function together. The alarm sounding after the test button is depressed verifies that the power supply, the alarm unit, and the connecting wires (the interface between the two units) all function properly.
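
In software terms, this layering might look like the following sketch: unit tests for each component in isolation, then an integration test verifying the interface between them. The Battery and AlarmUnit classes and their voltage thresholds are invented for illustration.

```python
import unittest

class Battery:
    def voltage(self):
        return 9.0  # a healthy battery (illustrative value)

class AlarmUnit:
    def sound(self, input_voltage):
        # The alarm sounds on any adequate input current.
        return input_voltage >= 7.0

class ComponentUnitTests(unittest.TestCase):
    # Each part tested in isolation, as at the factory.
    def test_battery_voltage_is_adequate(self):
        self.assertGreaterEqual(Battery().voltage(), 7.0)

    def test_alarm_sounds_on_adequate_current(self):
        self.assertTrue(AlarmUnit().sound(9.0))

class PartialSystemIntegrationTest(unittest.TestCase):
    # Two previously tested components, verified together:
    # the battery's output becomes the alarm unit's input.
    def test_battery_drives_alarm(self):
        battery, alarm = Battery(), AlarmUnit()
        self.assertTrue(alarm.sound(battery.voltage()))

if __name__ == "__main__":
    unittest.main()
```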

The test button is not an integral part of the system under test. It is a test harness that aids in testing. It contributes nothing to the intended purpose of alerting inhabitants of a fire. A smoke detector would function the same without a test button or an LED; we just could not test it.

The aforementioned manual black box integration test still misses one key system component: the smoke sensor. When the test button is pressed, it feeds current directly to the alarm unit, bypassing the smoke sensor. Hearing the alarm after pressing the button does not prove that the smoke detector will react to actual smoke. A test harness feeds artificial input data, rather than output data from upstream components, to the component under test in order to observe its output. The component undergoes bottom-up testing.
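
A minimal sketch of such a harness, with invented names: the driver feeds synthetic currents straight to the alarm unit, with no sensor anywhere upstream.

```python
class AlarmUnit:
    def sound(self, input_current):
        return input_current > 0.0  # sounds on any positive current

def drive(component, artificial_inputs):
    """The harness: feed artificial input data directly to the
    component under test and collect each observed output."""
    return [component.sound(current) for current in artificial_inputs]

# Bottom-up testing: the harness plays the role of everything
# upstream, so no smoke sensor is involved at all.
results = drive(AlarmUnit(), [0.0, 0.5, 1.0])
assert results == [False, True, True]
```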

The smoke sensor is essentially a glorified switch that allows current to pass when smoke is present and blocks the flow otherwise. We trust the manufacturer to test the smoke sensor for a lifetime of service. A research lab presumably has some sort of “smoke room”, which simulates the structure and air flow of the rooms where end users will place their smoke detectors. Researchers can place multiple smoke sensors around the room and remotely introduce smoke of different types and concentration levels.

It is not necessary here to know how a smoke sensor actually senses smoke; the test is to verify that a smoke sensor will emit an output current when surrounded by smoke. Also, instead of alarm units, the smoke sensors under test are connected to stubs or recorders, and undergo top-down testing. Using stubs has many advantages over using real output components.

With stubs, many different smoke sensors can be under test at once. Each stub records if and when the sensor under test emits an output current, and directly populates a database for analysis. Also, a human does not have to enter the smoke-filled room while the test is underway. Furthermore, the same smoke sensors may be connected to different output devices: alarm units, voice speakers, fire sprinklers, or a direct fire department connection. A stub can be a substitute for any kind of output device. Similarly, a software module under test may be designed to call other modules which are not under test or are yet to be written; these called modules are replaced with stubs, which may be merely a single line of code to print “module xyz is called and run here”.
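
In Python, such a stub can indeed be just a few lines. The sketch below uses invented names; the stub stands in for any downstream output device and merely records if and when it was triggered.

```python
from datetime import datetime, timezone

class RecordingStub:
    """Substitute for any output device: alarm unit, voice
    speaker, sprinkler, or fire-department dialer."""
    def __init__(self):
        self.calls = []
    def trigger(self, current):
        # Record if and when an output current arrived.
        self.calls.append((datetime.now(timezone.utc), current))
        print("output module is called and run here")

class SmokeSensor:
    """The real component under top-down test."""
    def __init__(self, output_device):
        self.output = output_device
    def on_smoke_detected(self):
        # A glorified switch: smoke present, current flows downstream.
        self.output.trigger(current=1.0)

# Top-down testing: the sensor is real, everything below it is stubbed.
stub = RecordingStub()
SmokeSensor(stub).on_smoke_detected()
assert len(stub.calls) == 1
```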

Of course, this testing with smoke is not to be confused with a smoke test, in which a new component is connected and launched, just to assert it will power on without “making smoke”, to determine if further testing can or should start.
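
In code, a smoke test can be exactly that shallow. A minimal sketch, again with a toy class invented for illustration, asserts only that the assembled system powers on at all:

```python
class SmokeDetector:
    def power_on(self):
        # In a real system this might construct the application,
        # open a connection, or load a config; here it just succeeds.
        return True

def test_smoke():
    # If this fails, no deeper test run is worth starting.
    assert SmokeDetector().power_on()

test_smoke()
```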

A system test will verify that a complete smoke detector emits an alarm of a certain decibel level when surrounded by smoke containing a certain carbon monoxide concentration.

An acceptance test, following a system test, will validate that the smoke detector is suitable to protect a particular household from fire: that the alarm can wake the residents through closed doors and, ideally, will not report false positives when triggered by smoke from cooking, ashtrays, incense, and the like, given the layout of the house and the residents’ lifestyle.

Because it is cumbersome and dangerous to test a smoke detector with real smoke, few homeowners do this regularly. They do periodic integration tests instead, and rely on the manufacturer’s unit tests. Besides, many home smoke detectors get unintended system tests, when smoke from regular kitchen cooking triggers the alarm.

Fred Murphy
Fred Murphy grew up in Menlo Park, where he started programming on a TRS-80. He received his degree in Computer Science from Loyola University in Maryland. Fred has done software quality assurance contracting around Silicon Valley at companies such as Apple, Intel, Adobe, KLA Tencor, and LogiGear, among many others. His hobbies include bicycling, classical keyboarding, and coding in C and Python. Fred currently lives in Mountain View.
Fred Murphy on LinkedIn
