BY JAMES SIVAK
This article maps out the security testing journey. It presents concepts fundamental to security testing web applications and suggests where and how to begin. It is not intended to be all-encompassing, but it does provide waypoints to guide you along the way.
Tell a test team that they now have to do security testing and you will receive a plethora of reactions, ranging from groans at the amount of work involved and blank stares to eager anticipation of "hacking" and, of course, the proverbial "deer in the headlights" look. There is a great amount of information available on the internet, but deciding where to begin can be daunting. What does it mean to actually do security testing?
A formal definition of security testing is: testing performed to assess a software product's ability to assure the six basic concepts of security: confidentiality, authentication, availability, authorization, integrity, and non-repudiation. In addition, there may be industry regulations, such as financial (PCI compliance) or health-related (FDA) requirements, that must be validated.
A starting point must define the requirements and goals of this unique security testing challenge in relatively specific terms. Thus, a comprehensive and realistic plan is step number one.
Unlike other aspects of software such as usability and supportability, security must be designed into the product; one cannot test security in after the fact and hope to be successful. In fact, secure development is a requirement for a product that can be considered secure. Testing, however, is required in order to assess the security aspects of a software product and ensure it meets the goals of the project.
There are two critical concepts that are the keystones for understanding security.
The first is to understand the difference between a “bug” and a “flaw”.
Most non-security testing is done to find bugs: unexpected or incorrect behaviors of an application caused by errors in the design or code. Bugs are relatively easy to discover because acceptance criteria and functional requirements detail how the software is supposed to behave.
In security testing, the goal is to find flaws: design or code vulnerabilities that undermine one or more of the security concepts. In many cases the functionality exhibits no failures, yet the application still fails to assure those concepts.
It has been shown that roughly 50% of security problems are design flaws. For example, consider a login screen where the user types in a name and a password. If the password is wrong and an error message states that the password is incorrect, the test could be considered successful because the user is informed of the error. However, this is a flaw: the message tells the person entering the data that the user name exists, even though the password is wrong, and that knowledge can be used to "hack" into the system. Authentication can thus potentially be breached. (And yes, someone could have written a requirement that the user be shown a generic message such as "invalid credentials"; in that case, the specific message would be considered a bug.)
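The login example can be sketched in a few lines of code. This is a minimal illustration, not a real authentication system; the user store and function names are invented, and real systems would hash passwords rather than compare them in plain text.

```python
# Toy in-memory user store, invented for illustration only.
USERS = {"alice": "s3cret"}

def login_leaky(username: str, password: str) -> str:
    """Flawed: the error message reveals whether the user name exists."""
    if username not in USERS:
        return "Unknown user name"      # leaks that the account does not exist
    if USERS[username] != password:
        return "Incorrect password"     # confirms the account DOES exist
    return "Welcome"

def login_safe(username: str, password: str) -> str:
    """Better: one generic message for every failure mode."""
    if USERS.get(username) != password:
        return "Invalid credentials"    # attacker learns nothing about the account
    return "Welcome"
```

Functionally, both versions reject bad credentials, which is why conventional testing would pass them both; only the security mindset flags the first as a flaw.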
“Thinking like a hacker” is the second major concept.
Putting aside the motivations of hackers, in general their main goal is obtaining information. This information may be used to obtain more information, eventually finding something of commercial value. Information gives hackers the means to gain access to systems they should not be allowed to enter. Instead of asking how an application behaves, a hacker asks what can be done to make it misbehave. Misuse analysis must be cross-functional and requires a mindset opposite from the norm.
Three Phases of the Security Testing Journey
Taking a measured approach to security testing, starting from zero and ending with a high degree of understanding and skill, the journey can be divided into three phases.
The first phase requires no specialized training.
The second phase builds skills and a basic understanding.
The third phase involves a security specialist, highly trained in penetration testing and threat modeling.
Given this background, you can begin by adding a new focus to the current testing that is being done. Some simple steps include:
How is information given to a user actually used? Can an error message give someone insight into third-party tools (and their versions)?
Are passwords kept in plain text anywhere, especially in the logging when debug is turned on?
Are there assumptions about the environment that could be flawed?
How are sessions defined? How are cookies stored?
How can you make the application do what it is not supposed to do?
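Some of the checks above can be automated with very little code. The sketch below scans debug log output for plain-text credentials; the patterns and sample log lines are invented for illustration, and a real check would be tuned to your application's actual log format.

```python
import re

# Field names that suggest a credential was written to a log in plain text.
# These patterns are illustrative assumptions, not an exhaustive list.
SUSPECT = re.compile(r"(password|passwd|pwd|secret|token)\s*[=:]\s*\S+", re.IGNORECASE)

def find_credential_leaks(log_lines):
    """Return the log lines that appear to contain plain-text credentials."""
    return [line for line in log_lines if SUSPECT.search(line)]

# Invented sample of debug-level log output.
sample_log = [
    "DEBUG 2016-03-01 login attempt user=alice",
    "DEBUG 2016-03-01 auth payload password=hunter2",
    "INFO  2016-03-01 session established id=ab12",
]
```

Run against `sample_log`, only the line containing `password=hunter2` is flagged. A check like this can be folded into existing automated tests so the question "are passwords kept in plain text in the logs?" is asked on every build.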
In order to specifically test for vulnerabilities and discover how well the software assures the six concepts of secure software, skills and techniques must be learned. The best resource available is the Open Web Application Security Project (OWASP). For testers, consult its Testing Guide, a wealth of examples on how to test for specific types of vulnerabilities in applications. Using this guide, formulating specific test cases becomes easy. In addition, OWASP provides tools, such as ZAP, that can be downloaded and used. Other open-source tools that should be mastered include Fiddler and Firebug.
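As one illustration of turning a guide entry into a test case, the OWASP Testing Guide covers fingerprinting the web server through response headers. The sketch below checks captured response headers for version-revealing values; the header samples are invented, and the list of risky headers is an assumption you would adapt to your stack.

```python
import re

# Matches version strings such as "2.4.7" inside a header value.
VERSION = re.compile(r"\d+\.\d+")

def leaky_headers(headers):
    """Return the names of headers whose values disclose a software version."""
    risky = ("Server", "X-Powered-By", "X-AspNet-Version")
    return [name for name in risky
            if name in headers and VERSION.search(headers[name])]

# Invented headers captured from a hypothetical test environment.
captured = {"Server": "Apache/2.4.7 (Ubuntu)", "X-Powered-By": "PHP/5.5.9"}
```

Here both headers would be flagged, because each advertises an exact product version that a hacker can map to known vulnerabilities. In practice a proxy such as ZAP or Fiddler captures the headers for you; the assertion logic stays the same.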
Want to up your game from a security testing skills perspective? A good place to start is by “hacking” known vulnerable applications. Both WebGoat (from OWASP) and Gruyere (from Google) are vulnerable applications that can be downloaded. They both include training guidelines to facilitate learning.
Finally, for specialized training, there are classes in ethical (white hat) hacking that have certifications associated with them. Advanced classes are also available to provide the technical background to enable one to become a security specialist.
Security testing requires a different mindset and approach. Although it can be formidable, an approach that includes both strategic and tactical goals affords a path to success. Keeping in mind that testing alone will not assure that a product is secure, and that security begins with the right design and coding techniques, will help maintain the correct perspective. The challenge in security testing is well stated in this often-used quote: "The good guys need to be right all of the time. The bad guys just need to be right once."
Currently a Director of QA with Unidesk, Jim has spent more than forty years in the technology arena. Starting with testing components of the Space Shuttle, he has worked in diverse industries developing and testing a wide spectrum of software, including operating systems and applications. Jim's passion lies not only in testing but in instilling quality concepts into all phases of software development.