When we added this topic to the editorial calendar, I had the notion that we might illustrate some large or complex systems and explore the test and quality challenges they present. We might have an article on building and testing the software for a rocket to Mars, discussing the complex infrastructure behind it. (This, by the way, has been done many times, primarily to highlight the large-scale system failures and huge sums of money wasted when massive projects are shortchanged on adequate planning, communication, and testing.)
We could do the same for the air traffic control system's infrastructure, and for a dozen other big software development projects that instantly come to mind. But "big" and "complex" mean different things to different people.
Big, complex software systems are woven into our daily lives, and in many cases those lives literally depend on them. It certainly, and justifiably, might give us pause to consider that it's people like you and me who test these systems. Medical devices, online banking, missile guidance systems, prescription drug systems: it is a big list, and you and I hope they are tested well!
When I visit companies for consulting or training, I very often hear: "We have a really complex system! It's too difficult to diagram or describe." After one minute of hearing their explanation, I understand it to be a database with a web front end. Simple enough. But after five minutes, it's an inventory control system with tax and shipping integration and three varieties of credit card processing, all tied into reporting and accounting systems, supporting three languages, and all of it must work on five browsers and a variety of mobile devices. Indeed, what seemed simple enough at first mushroomed in complexity very quickly. How do they test it? It has too many moving parts belonging to too many different groups, each with its own schedule, headaches, problems, and third-party software integrations.
I worked at Palm Computing during its early days. Palm was a pioneer in the handheld devices, smartphones, and mobile computing systems that we all take for granted today. We thought we were complicated (and at the time, we certainly were): changing hardware, a changing OS, changing apps, HotSyncing (synchronizing) to a wide variety of PCs, all in eight languages. Very ambitious indeed, and very complex testing. And yet, in retrospect, the complexity of what we were dealing with then at Palm pales in comparison to so many of the systems I see today.
How is big or complex testing different from testing other-sized products? Maybe not so different after all: good test design, for example, is important no matter what size system you test! In this issue, we look at big testing from many perspectives, to examine both the differences and the fundamental constants of testing. LogiGear CTO Hans Buwalda provides a "big picture" look at complex systems; we see examples of complex system failures; Marcin Zręda reviews Project Management of Complex and Embedded Systems; Ginny Redish describes how thinking outside the box can lead to better testing; John Brøndum argues that the science of testing complex systems is constantly changing; I interview some Salesforce.com quality engineering directors about their approach to testing complex systems, and I examine the professional characteristics of our global survey respondents. Finally, Robert Japenga shows us how to write a great software test plan.
When I find myself in a distant country, late at night, hard up for cash, confronted with the ATM of a bank I've never heard of, my plastic lifeline somewhere deep inside its bowels while it awaits my bank's confirmation that I have funds available, my only thought is, "This had better work!" Who tests that, and how well do they do it? It may not always be possible to know who, but if you test big, complex banking or financial transaction systems, I hope we will give you some insight into how!
Senior Vice President
Editor in Chief