Category Archives: Blogger of the Month

3 Ways to Get in Shape for Continuous Testing

Making the leap to CT is easier than you think. Follow this guide to transform your testing process

By Alex Martins, CA Technologies

No pain, no gain! Achieving Continuous Testing shouldn’t take a “Hans and Franz” attitude. It should be painless, more like a natural progression from implementing certain practices over time.




By Evgeni Kostadinov

To start with, we need a test schedule, which is created while developing the test plan. In this schedule, we estimate the time required to test the entire Data Warehouse system. There are different methodologies for creating a test schedule, and none of them is perfect, because the data warehouse ecosystem is large, complex, and constantly evolving. The most important takeaway from this article is that DW testing is data-centric, while software testing is code-centric. The connections between the DW components are groups of transformations that take place over data. These transformation processes should be tested as well, to ensure data quality is preserved. The DW testing and validation techniques I introduce here are broken into four well-defined processes, namely:
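To make the data-centric point concrete, one common shape for a transformation test is to reconcile the target table against what the transformation should have produced from the source, comparing row counts and an order-independent checksum. A minimal sketch; the `transform` rule and the sample rows are illustrative assumptions, not from the article:

```python
import hashlib

def checksum(rows):
    """Order-independent checksum over a collection of row tuples."""
    digest = 0
    for row in rows:
        digest ^= int(hashlib.sha256(repr(row).encode()).hexdigest(), 16)
    return digest

def transform(source_rows):
    # Hypothetical ETL rule: uppercase names, drop inactive records.
    return [(rid, name.upper()) for rid, name, active in source_rows if active]

source = [(1, "alice", True), (2, "bob", True), (3, "carol", False)]
target = transform(source)

# Reconciliation: re-derive the expected target from the source and
# compare both cardinality and content checksum.
expected = [(rid, name.upper()) for rid, name, active in source if active]
assert len(target) == len(expected)
assert checksum(target) == checksum(expected)
print("transformation reconciliation passed")
```

The same count-and-checksum pattern scales to real tables by pushing the aggregation into SQL on both sides of the transformation.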


Test in Production?

This post is part of the Pride & Paradev series

By Alister Scott


With continuous deployment, it is common to release new software into production multiple times a day. A regression test suite, no matter how well designed, may still take over 10 minutes to run, which can lead to bottlenecks in releasing changes to production.

So, do you even need to test before going live? Why not just test changes in production?
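One way teams lean in this direction is to move part of the safety net into the pipeline itself: a small, fast smoke check that runs against the live service immediately after each release. A minimal sketch in Python; the health-endpoint URL, the injected `fetch` client, and the latency budget are illustrative assumptions, not from the post:

```python
import time

def smoke_check(fetch, url="https://example.com/health", max_latency=0.5):
    """Post-deployment smoke check: the health endpoint must answer
    200 within a latency budget. The HTTP client is injected so the
    check can be exercised without a live deployment."""
    start = time.monotonic()
    status, body = fetch(url)
    elapsed = time.monotonic() - start
    return status == 200 and elapsed <= max_latency and b"ok" in body

# Stub standing in for a real HTTP client (e.g. urllib or requests).
def fake_fetch(url):
    return 200, b"ok"

print(smoke_check(fake_fetch))  # prints True
```

Because it takes seconds rather than the ten-plus minutes of a full regression suite, a check like this can gate every production deploy without becoming the bottleneck.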


Pushing the Boundaries of Test Automation: An Overview of How to Automate the UX with Heuristics

By Julian Harty

One of my current responsibilities is to find ways to automate, as much as practical, the ‘testing’ of the user experience (UX) for complex web-based applications. In my view, full test automation of UX is impractical and probably unwise; however, we can use automation to find potential UX problems, or undesirable effects, even in rich, complex applications. Others and I are working to find ways to use automation to discover these various types of potential problems. Here’s an overview of some of the points I have made. I intend to extend and expand on my work in future posts.

In my experience, heuristic techniques are useful in helping identify potential issues. Various people have managed to create test automation that essentially automates different types of heuristics.
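As an illustration of what automating a heuristic can look like, here is a minimal sketch (my own, not from the article) that scans HTML for two simple UX heuristics using only the Python standard library: images missing alt text, and links with vague anchor text:

```python
from html.parser import HTMLParser

class HeuristicScanner(HTMLParser):
    """Flags two simple UX heuristics: images without alt text and
    links whose visible text is uninformative."""
    VAGUE = {"click here", "here", "read more"}

    def __init__(self):
        super().__init__()
        self.findings = []
        self._in_link = False
        self._link_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.findings.append("img missing alt text")
        if tag == "a":
            self._in_link = True
            self._link_text = ""

    def handle_data(self, data):
        if self._in_link:
            self._link_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False
            if self._link_text.strip().lower() in self.VAGUE:
                self.findings.append("vague link text: " + self._link_text.strip())

scanner = HeuristicScanner()
scanner.feed('<img src="hero.png"><a href="/doc">click here</a>')
print(scanner.findings)
```

Findings like these are potential problems for a human to review, not pass/fail verdicts, which matches the spirit of heuristic automation.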


Cruise Control: Automation in Performance Testing

When it comes to performance testing, be smart about what and how you automate

By Tim Hinds

Listen closely to the background hum of any agile shop, and you’ll likely hear this ongoing chant: Automate! Automate! Automate! While automation can be incredibly valuable to the agile process, there are some key things to keep in mind when it comes to automated performance testing.
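To make the idea concrete, one common shape for an automated performance check is a micro-benchmark that fails the build when a latency percentile exceeds a budget. A minimal sketch; the operation under test and the 10 ms budget are hypothetical placeholders:

```python
import time

def measure(fn, iterations=200):
    """Collect per-call latencies (in seconds) for a callable."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples

def p95(samples):
    """95th-percentile latency by nearest-rank on the sorted samples."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Stand-in for the operation whose performance we want to guard.
def operation():
    sum(range(1000))

samples = measure(operation)
budget = 0.01  # hypothetical 10 ms p95 budget
assert p95(samples) < budget, "performance regression: p95 over budget"
print("p95 = %.3f ms (budget %.0f ms)" % (p95(samples) * 1000, budget * 1000))
```

Guarding a percentile rather than the mean is deliberate: averages hide the slow outliers that users actually notice.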


Why Onshore Vs. Offshore Isn’t the Right Question

Well-publicized offshore outsourcing challenges, a narrowing labor-cost gap, and political considerations have some rethinking how to approach outsourcing.

By Andy Sealock

Enterprises elect to bring offshore or outsourced operations in-house for a number of reasons. While performance certainly can play a role, the motivation also includes strategic business reasons and a belief that the enterprise can perform the function better and more cost-effectively than the service provider. Or maybe it’s convinced that the value generated by increased service quality (shorter cycle times, reduced error rates, better customer satisfaction), which in theory comes from performing functions in-house, will more than offset the increase in costs.


When will software testing be truly mobile?

Will testers be among the first IT professionals to shift their toolset and workflows from desktops and laptops to tablets and smartphones?

By Ole Lensmar

As I’m sure you already know, a monumental shift from desktop to mobile is upon us. Not only have consumer applications started leaving the desktop behind, but B2B applications are also starting their migration – like a flock of elderly pelicans, they spread their wings to follow the younger seafowl. And although it still might be hard to envision a tablet version of your favorite word processor or spreadsheet, rest assured that someone will spearhead that shift, using a mobile-inspired touch-driven UI with all the bells and whistles the mobile experience makes possible, to rescue word processing or spread-sheeting from the grey and aging cobwebs spreading over your desktop.


Why You Need a Software-Specific Test Plan

Experience-based recommendations to test the brains that drive the devices

By Philip Koopman

In essentially every embedded system there is some sort of product testing. Typically there is a list of product-level requirements (what the product does), and a set of tests designed to make sure the product works correctly. For many products there is also a set of tests dealing with fault conditions (e.g., making sure that an overloaded power supply will correctly shed load). Many companies think this is enough, but I’ve found that such tests often fall short.

The problem is that there are features built into the software that are difficult or near-impossible to test in traditional product-level testing. Take the watchdog timer, for example. I have heard of more than one case where a product (or at least one version of it) shipped with the watchdog timer accidentally turned off. In case you’re not familiar with the term, a watchdog timer is an electronic timer that is used to detect and recover from computer malfunctions. During normal operation, the computer regularly restarts the watchdog timer to prevent it from elapsing, or “timing out”. (Wikipedia)
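The watchdog case can be made concrete with a small software simulation. Real watchdogs are hardware counters, so this Python model is only a sketch of the test idea: deliberately stall the “kick” and confirm the watchdog actually fires, which is exactly the check that catches the shipped-with-watchdog-disabled bug described above:

```python
import time

class Watchdog:
    """Software stand-in for a hardware watchdog timer, used only to
    illustrate why a software-specific test is needed."""
    def __init__(self, timeout, enabled=True):
        self.timeout = timeout
        self.enabled = enabled
        self.last_kick = time.monotonic()

    def kick(self):
        """Restart the timer, as the main loop does in normal operation."""
        self.last_kick = time.monotonic()

    def expired(self):
        if not self.enabled:
            return False  # a disabled watchdog can never fire
        return time.monotonic() - self.last_kick > self.timeout

# Software-specific test: stop kicking and confirm the watchdog fires.
# A product-level test exercising normal operation would never reveal
# that `enabled` had accidentally been left False.
dog = Watchdog(timeout=0.05)
dog.kick()
assert not dog.expired()
time.sleep(0.1)          # simulate a hung main loop
assert dog.expired(), "watchdog failed to fire -- is it enabled?"
print("watchdog fired as expected")
```

On real hardware the equivalent test injects a deliberate stall (or an infinite loop) and verifies that the watchdog resets the system.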


Why Exploratory Testing Should be Part of the Test Plan

A test plan should always include exploratory tests as part of the approach. Without them, the number of defects that find their way into production will always be higher.

By Brian Heys

Exploratory testing is a key testing technique that is often left out of formal test plan phases such as system testing, system integration, and regression. Instead, these phases favor planned, scripted tests that are easily repeatable and measurable.

While sometimes labelled ‘ad hoc’ and frowned upon in some circles because of its more unstructured nature, the truth is that exploratory testing can be extremely fruitful in finding elusive bugs that might not otherwise be discovered until user acceptance testing (UAT) or beyond.


Beta Testing Mobile Apps, How to Get it Right

Steps that will enable you to identify the weaknesses of your new app, its vulnerabilities and strengths.

By Virtual City

So you’ve just finished developing a nifty, customisable app that can help farmers track their produce from source to market via their mobile phones. You’re elated and want to start marketing it right away. Not to burst your bubble, but are you 100% sure that the app actually works across all mobile platforms and scales affordably?

As a developer, the foremost thought lingering at the back of your mind should not be whether the app is cool. Yes, it solves a real need for farmers, especially those in rural areas who have no idea whether they are getting real value from their produce sales. But a question you should have an answer to is: will the app be well received by the public, the consumers of the application (in your case, the agribusiness stakeholders, for example)? Whether it will fly off the app store shelves or stare back at its creator with a one-star rating is something you ought to find out before the launch phase.