Knowledge Library

Agile

At LogiGear, we pride ourselves not only on providing world-class service to our clients, but on contributing to the development of the software testing industry as a whole.

Over the years, we've generated and collected a great deal of valuable information on Agile. The Agile and Testing Resource Center has been created to provide you with the information that can help you in your understanding and application of Agile. Below you will find links to articles, book reviews, interviews and videos from some of the industry's foremost thought-leaders and expert testers such as Scott Ambler, Michael Hackett, Jonathan Rasmusson and Guido Schoonheim.

We're actively seeking additional resources to add, so if you've written about Agile and think your piece would be valuable for your peers, please feel free to submit an article by emailing logigearmagazine@logigear.com.


Action Based Testing

The Action Based Testing™ method represents the continued evolution of the keyword-based testing approach and is the foundation of LogiGear's test automation toolset, TestArchitect™, which uses keywords to create and automate the majority of tests without scripting of any kind.

Action-Based Testing (ABT) provides a powerful framework for organizing test design, automation and execution around keywords. In ABT, keywords are called actions — to make the concept absolutely clear. Actions are the tasks that are executed during a test. Rather than automating an entire test as one long script, tests are assembled using individual actions. Non-technical test engineers and business analysts can then define their tests as a series of these automated actions.

Unlike traditional test design, which begins with a written narrative that must be interpreted by each tester or automation engineer, ABT test design takes place in a spreadsheet format called a test module. Actions, test data and any necessary GUI interface information are stored separately and referenced by the main test module.
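
As a rough illustration of the idea (plain Python, not TestArchitect's implementation; the action names and the fake system under test are made up), action lines are simply dispatched to reusable action implementations:

  # Minimal, self-contained sketch of the action concept: every line of a test
  # module is an action name plus arguments, dispatched to a registered
  # implementation. Names and the fake "system under test" are illustrative only.

  fields = {}    # stand-in for the system under test
  ACTIONS = {}   # registry of action implementations

  def action(name):
      def register(func):
          ACTIONS[name] = func
          return func
      return register

  @action("enter")
  def enter(field, value):
      fields[field] = value            # a real action would drive the UI here

  @action("check")
  def check(field, expected):
      actual = fields.get(field)
      assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

  def run_test_module(lines):
      """Execute action lines of the form [action word, argument, argument, ...]."""
      for action_word, *args in lines:
          ACTIONS[action_word](*args)

  # A tester assembles the test from actions, without any scripting:
  run_test_module([
      ["enter", "first name", "John"],
      ["enter", "last name", "Doe"],
      ["check", "first name", "John"],
  ])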


Mobile Testing

One of the hottest trends in the software testing world is mobile. As mobile devices come to resemble computers more and more, the complexity of mobile applications has followed suit. Numerous operating systems and device sizes make testing these applications increasingly difficult.

We’ve published a few magazines on mobile testing and thought it would be helpful to put everything in one place for easy reference. Here, you can find articles, book reviews, videos and interviews from some of the industry’s thought-leaders such as Julian Harty, Edward Hill, Gal Tunik and Robert V. Binder. We hope that this resource center will help you save time and money in your mobile testing efforts!



Books

Let's Talk About Agile Test Automation

by Michael Hackett and Hans Buwalda

What is Agile Testing Automation? How can automated functional testing fit into agile projects? These are questions we encounter from customers all the time. Agile methods are relatively new to the software world, and hold great promise and many early success stories. With this in mind we've created this eBook for you, "Agile Test Automation".

AVAILABLE ON LOGIGEAR >>

Testing Computer Software, 2nd Edition

by Cem Kaner, Jack Falk and Hung Q. Nguyen

Software testing is a race against time: testers must make sure that highly complicated programs will work reliably for the consumer, often with insufficient resources and on unrealistic schedules. Testers will appreciate the author's advice on effective bug analyzing, reporting, and tracking; black box testing; printer compatibility tests; and software product liability.

AVAILABLE ON AMAZON >>

Testing Applications on the Web, 2nd Edition

by Hung Q. Nguyen, Bob Johnson, Michael Hackett and Robert Johnson

With Internet applications spreading like wildfire, the field of software testing is increasingly challenged by the brave new networked world of e-business. Software engineers have developed sophisticated test methodologies over the years, but they just don't do the job for web-based software. Distributed applications have different performance goals from those of desktop applications, and require networking know-how on the part of the tester.

AVAILABLE ON AMAZON >>

Integrated Test Design and Automation

by Hans Buwalda, Dennis Janssen, Iris Pinkster and Paul Watters

Zero-defect software is the holy grail of all development projects, and sophisticated techniques have emerged for automating the testing process so that high-quality software can be delivered on time and on budget. This practical guide enables readers to understand and apply the TestFrame method - an open method developed by the authors and their colleagues that is rapidly becoming a standard in the testing industry.

AVAILABLE ON AMAZON >>

Global Software Test Automation:
A Discussion of Software Testing for Executives

by Hung Q. Nguyen, Michael Hackett and Brent K. Whitlock

This is the first book to offer software testing strategies and tactics for executives. Written by executives and endorsed by executives, it is also the first to offer a practical business case for effective test automation, as part of the innovative new approach to software testing. Global test automation, as demonstrated here, is a proven solution, backed by case studies that leverage both test automation and offshoring to meet organizations' quality goals.

AVAILABLE ON AMAZON >>

Articles

Action Based Testing, by Hans Buwalda in Better Software Magazine, March/April 2011

To address the challenges and fears of implementing automation in agile projects, LogiGear CTO Hans Buwalda presents Action Based Testing as the answer.

Hans Buwalda, CTO, LogiGear

How can automated functional testing fit into agile projects? That is a question we encounter a lot nowadays. Agile has become more common, but functional testing often remains a manual process because during agile iterations/sprints, there is simply not enough time to automate it. This is unlike unit testing, which is routinely automated effectively. The short answer is:

  1. A well-planned and well-organized test design and automation architecture
  2. Organization of test design and automation into separate life cycles

In this article I will show how the Action Based Testing method can help you do just that. Let me first introduce Action Based Testing, and then discuss how it can make both test design and test automation fit the demands of agile projects.

Action Based Testing

There are various sources where you can read more about Action Based Testing. Let me summarize the key principles here that are at the core of the method:

1. Not one, but three life cycles

It is common to have testing and automation activities positioned as part of a system development life cycle, regardless of whether that is a waterfall or an agile approach. ABT, however, distinguishes three life cycles. Even though they have dependencies on each other, in an ABT project they are planned and managed as separate entities:

  1. System Development: follows any SDLC, traditional or agile model
  2. Test Development: includes test design, test execution, test result follow up, and test maintenance
  3. Automation: focuses solely on the action keywords, interpreting actions, matching user or non-user interfaces, researching technology challenges, etc

2. Test Design

The most important principle is the position of test design. It is seen as the single most enabling factor for automation success, much more so than the actual automation technology. In ABT, it is considered crucial to have a good "high level test design" in which so-called "test modules" are defined. Each test module should have a clear scope that is different from the others, and each is developed as a separate "mini project".

A test module consists of test objectives and action lines. The test objectives break the scope of the test module down into individual verbal statements defining what needs to be tested in the module.

The tests in the test module (which looks like a spreadsheet) are defined by a series of "action lines," often further organized in one or more test cases. Every action line defines an "action" and consists of an "action word" defining the action, and arguments defining the data for the action, including input values and expected results.

Note that in ABT the test case does not figure as centrally as in some other methods. We feel the test case is too small and too isolated a unit to give good direction to test development. Rather than having a predefined list of test cases to be developed, we like to make a list of test modules, and let the test cases in them be the result of test design, not the input to it.

A consequence is that the test cases can vary, and their number can grow significantly during the creative design process. Also, each test case can leave behind the preconditions for the next one, resulting in a good flow of the test execution.

3. Automation

In ABT the automation activity is separated from the test development. Test design and automation require very different skill sets and interests. There might be people who are interested in doing both, which is fine, but in my experience that is not very common. Separating the two also assigns clear ownership for "getting the tests to work".

In ABT the automation engineers will concentrate on automation of actions and making "interface definitions" to manage the interaction with the interfaces (user or non-user) of the system under test. This type of automation activity requires advanced skills and experience.

Agile Test Development

In using ABT with its separate life cycles for test development and test automation, there are in fact two topics addressing how to fit automated testing in agile projects:

  1. Test design and development
  2. Automation


Having said that, and assuming a Scrum project with sprints, testing activities in an agile project fall into three timelines:

  1. Testing in regular system development sprints
  2. Test development prior to development sprints
  3. Testing after development has finished

1. Testing in regular sprints

The most common practice is, and will remain, to develop and execute tests as part of sprints. In a sprint, functionality is progressively understood from user stories and conversations until it becomes clear enough for testers to test it. This can be done with developed tests such as ABT test modules, as well as with exploratory and interactive testing. It can also be good practice to capture at least some of the "interesting" interactive tests in test modules for future use.

Unit tests are an invaluable asset, but in the ABT approach it is worth considering options to re-use them and to extend their reach beyond single functions.

By defining test modules for unit tests and assigning them to actions, they can be strung together more easily to test with a wider variety of values and include other parts of the system under test, either during a sprint or later on.

2. Test development prior to development sprints

In the ABT method the use of actions, in particular high-level business actions, allows for the development of tests with a non-technical focus on business functionality, often simply called "high level tests." Such tests stay away from details in the UI and focus on business transactions, like requesting a home loan or renting a car.

Higher level tests can be developed early in a project. These tests should not wait for a system development sprint, since within a sprint there is limited time to carefully understand business functionalities and create appropriate tests for them.

Whether business level tests can be made, and how many, depends on the individual situation. In general, I would recommend the following:

  • Have as many business level tests as possible, as they add great value to overall depth and quality, as well as being resilient against system changes that do not pertain to them.
  • Use the high level test design step in ABT (where the test modules are identified) to determine what can be done early on in business level tests, and what needs to be completed in detail tests as part of development sprints.

3. Testing after sprints

Once sprints for individual system parts have finished and these parts come together, normally more testing will be needed to ensure quality and compliance of the entire system. Also, tests may be needed to retest parts of systems that were not touched by system changes and confirm the new system parts integrate well with the old ones. This could for example happen in regression or "hardening" sprints.

In my view, this "after-testing" is a key area where it pays off most to have well-developed test modules and fully automated actions ready in advance, resulting in valuable time savings, particularly when a release date is getting close. The test development and automation planning should address this use in final testing as a main objective, and identify and plan test module development accordingly.

Agile Test Automation

The term often used for test automation in agile projects, and that best describes what is needed, is "just in time automation." When ABT is applied, the term changes to "just in time test development." Independent of that, a high level of automation can make an invaluable contribution to the productivity and speed of sprints.

To get the automation in place quickly and on time, a number of rules should be applied:

  • Build the base early
  • Make automation resilient
  • Address testability of the system under test
  • Test the automation

1. Build the base early

A successful automation architecture should start with creating a solid base on which further actions can be developed. This includes items like the ability to perform all necessary operations on all UI control classes, access to APIs, the ability to query databases, compiling and parsing messages in a message protocol, etc.

Although much technical functionality is available in LogiGear’s TestArchitect tool, most of our projects will start with R&D efforts to address customer specific technical challenges, e.g. emulating devices in a point of sale system, working with moving 3D graphics for oil exploration, testing mobile devices, accessing embedded software in diagnostic equipment, etc.

This technical base is something to address as early and as comprehensively as possible. Identify all technical challenges and resolve them. This typically results in implementations of low-level actions, which in turn can be used for higher-level actions, for example in development sprints. Addressing the technical base early also limits risks.

2. Make automation resilient

The essence of agile projects is that many details of the system under test only become clear when they are being implemented, as part of iterations like the sprints in Scrum. This holds in particular for areas that automation tends to rely heavily on, like the UI. Those details can change quite easily as long as the creative process moves along. The automation should in such cases not be the bottleneck. Flexibility is essential.

The action model by nature can give such flexibility as it allows details to be hidden in individual actions, which can then be quickly adjusted if necessary. However, there are some additional items to take care of as well. The most common in our projects has turned out to be "timing." Often automation has to wait for a system under test to respond to an operation and get ready for the next one.

What we found is that the automation engineer should make sure to use "active timing" as much as possible. In active timing you try to find a criterion in the system under test to wait for, and wait for that up to a preset, generous, maximum. If the criterion is met, the automation should move on without further delay. Paying attention to these and similar measures will make the automation solid and flexible.
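
As a minimal sketch of active timing (plain Python, not TestArchitect code; the readiness criterion is a hypothetical callable), the idea is to poll a criterion up to a generous maximum and continue as soon as it is met:

  import time

  def wait_until(condition, timeout=30.0, poll_interval=0.5):
      """Active timing: poll a readiness criterion up to a generous maximum,
      but continue immediately once the criterion is met (no fixed sleeps)."""
      deadline = time.monotonic() + timeout
      while time.monotonic() < deadline:
          if condition():                  # e.g. "the results window is visible"
              return
          time.sleep(poll_interval)
      raise TimeoutError(f"condition not met within {timeout} seconds")

  # usage (window_is_visible is a hypothetical query of the system under test):
  # wait_until(lambda: window_is_visible("results"), timeout=60)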

3. Address testability of the system under test

When preparing automation, identify the items that the system under test should provide to facilitate easy access by the automation. When these items are identified early on and formulated as requirements, the development teams can easily incorporate them in the sprints.

A good example is the provision of values for certain identifying properties that are available in various platforms for screen controls or HTML elements, properties that are not visible to a user, but can be seen by automation tools. Providing such values will allow automation to address the controls or elements easily, and in a way that is usually not sensitive to changes in the design.

In fact if such values are defined early on in a project, a tool like TestArchitect allows for the creation of "interface definitions" to take advantage of them before the system under test is even built.

Examples of such useful properties are the "id" attribute in HTML elements, the "name" in Java/Swing, and the "accessibility name" in .Net and WPF. None of these influence the user experience, but all can be seen by the tools. Using them also solves issues of localization: an OK button can be found even if its caption is in another language.
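
To illustrate (using Selenium WebDriver rather than TestArchitect; the locator values are hypothetical), such stable invisible properties can be collected in a small "interface definition" that maps logical control names to locators, which can even be written before the screens exist:

  # Illustration with Selenium WebDriver (not TestArchitect itself). Locating a
  # control by a stable, invisible property such as the HTML "id" attribute is
  # robust against redesign and localization; locating by visible caption is not.
  from selenium.webdriver.common.by import By

  # A simple "interface definition": logical names mapped to stable locators.
  # The id values are hypothetical.
  LOGIN_PAGE = {
      "user name": (By.ID, "username"),
      "password":  (By.ID, "password"),
      "ok button": (By.ID, "btn_ok"),   # found even if the caption is translated
  }

  def control(driver, logical_name, interface=LOGIN_PAGE):
      by, value = interface[logical_name]
      return driver.find_element(by, value)

  # Fragile alternative, sensitive to wording and localization:
  #   driver.find_element(By.XPATH, "//button[text()='OK']")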

4. Test the automation

Automation should be tested. In ABT this means that actions and interface definitions must be tested. They are like a product that automation engineers provide to testers, a product for which high quality is required. In each testing project we require at least one folder (in the TestArchitect test tree) with test modules that test the actions and interface definitions, rather than the system under test.
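
As a small sketch of what such a test of the automation could look like (plain Python with hypothetical names, not a TestArchitect test module), one simple check is that every control the actions refer to actually exists in the interface definitions:

  # Sketch only (hypothetical names): a test aimed at the automation itself,
  # catching broken or missing interface definitions before the real tests run.

  LOGIN_PAGE = {
      "user name": ("id", "username"),
      "password":  ("id", "password"),
      "ok button": ("id", "btn_ok"),
  }

  CONTROLS_USED_BY_ACTIONS = ["user name", "password", "ok button"]

  def test_interface_definition_is_complete():
      for name in CONTROLS_USED_BY_ACTIONS:
          assert name in LOGIN_PAGE, f"missing interface definition for {name!r}"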

Just like the test development, the automation activities must be well planned and organized, and a number of experienced people need to be involved. If that is the case, the combination of careful test development planning and automation planning should be able to meet the demands of agile projects quite easily.

READ MORE >>

The First Holy Grail of Test Design by Hans Buwalda

This article focuses on the first principle, the effective break down of the tests. I also like to refer to it as the "high level test design". In this step you divide the tests that have to be created into manageable sets like chapters in a book, which I call "test modules".

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

In my previous article "Key Principles of Test Design" I discussed a vision for test design, built around three key principles (which I call the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

This article focuses on the first principle, the effective break down of the tests. I also like to refer to it as the "high level test design". In this step you divide the tests that have to be created into manageable sets like chapters in a book, which I call "test modules". Each test module should typically contain from a few to a few dozen test cases. The next steps in test development deal with designing the individual test modules ("holy grails" 2 and 3) and with effective automation.

Effective Break Down of the Tests

Although making a good high level test design is as much art as it is science, there are some guiding criteria for it that I like to use. They are organized as "primary" and "additional" criteria. The primary criteria are the more obvious ones that should be applied first. The additional criteria can help to further refine the line-up of test modules.

Primary Criteria

  • Functionality and other requirements. The basis for an IT system is the required functionality, usually organized into groups and/or categories. Tests can be organized along similar lines.
  • Architecture of the system under test. Just about every IT system is built up in layers, modules, protocols, databases, etc. All of these pieces have to be tested individually and in combinations. The line-up of test modules should reflect that.
  • Kind of test. Many kinds of tests, such as functionality, UI, performance, screen layout, security, and more, can be done on even one small part of a system under test. Generally each test module should not do more than one kind of test.
  • Ambition level. I tend to categorize tests in levels of ambition. A low level is a smoke test, just to see if a system can start and do basic functions. The most common tests are of medium ambition level, testing individual functions without combinations. High ambition level tests are "aggressive" tests that are designed to "break" a system under test. Organizing tests of different ambition levels in different modules will make it easier to develop the tests and, most of all, easier to run them (for example, run the smoke tests first; if successful, run the functional tests; last come the aggressive tests).

Additional Criteria

  • Stakeholders. These are departments or individuals with a particular interest in some of the tests. One good line-up of tests is along the lines of stakeholders, so that each test module has only one stakeholder to be involved (for input and/or assessment).
  • Complexity of the test. Put particularly complicated tests in separate test modules, so that the other tests can run unaffected.
  • Technical aspects of execution. Some tests might need a complex environment or specific hardware to run, while others can run more easily. Make sure the module line-up reflects this.
  • Planning and control. Overall project planning and progress can impact whether or not enough information is available to develop certain test cases. Keeping such test cases separate from ones that can be developed earlier in the life cycle allows a smoother progression of test development.
  • Risks involved. A risk analysis can provide great input for test design. When there are high risk areas in a system under test it can make sense to devote specific test modules to them. A good example is a premium calculation in an insurance system. Any bug in a core function like that is not acceptable, so it is worthwhile to plan for a test module for each single aspect of such a calculation.

The way to apply these criteria is to start with the straightforward ones first, one at a time, then review the results using all of the criteria, including the additional ones. Repeat this process a couple of times, preferably with a number of knowledgeable people involved. If you want to use outside consultants, this step is a good candidate; it does not take much time, which also helps keep consulting costs down.

When the modules are identified, they can be the basis for a Test Delivery Plan in which the modules selected to be developed are listed with tentative dates for the delivery of their first version (for example, to a stakeholder who will review them).

Here are some examples of what typically can go into separate modules:

  • UI oriented tests, like "does a function key work" or "does listbox xyz contain the right values"
  • Do the individual functions (like transactions in a financial system) work
  • Tests of alternate paths in use cases, like does the system roll back after canceling a transaction
  • Higher level business level end-to-end tests, like: create a new customer, let him do a couple of transactions and see if his end balance is correct
  • Odd tests that are more difficult to execute, for example because they need multiple workstations (e.g., a test that exceeds a limit to see whether a supervisor on another workstation has to approve it)
  • Tests that address qualities of a system other than functionality, like a load/performance test
  • Tests that involve non-UI actions, like testing individual methods of classes used in the system under test, or messages in a TCP/IP or SS7 protocol
  • Tests with different "ambition levels", like:
    • A simple low ambition smoke test to see if a new build of the system under test works well before running any other modules
    • An aggressive test, designed to break a system under test, typically to be executed after other modules were successful already

Conclusion

However you do it, try to end up with a list of test modules that are well-differentiated from each other and each have a single well-defined scope. The scope is the anchor point for the successive development of tests within the test modules.

READ MORE >>

The Second Holy Grail of Test Design by Hans Buwalda

This article discusses the "second Holy Grail", namely finding the right approach per test module. This step focuses on developing the individual modules. When a good job is done on the module breakdown, each test module should now have a clear scope.

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

In the article "Key Principles of Test Design" I presented three key principles (the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

This article discusses the "second Holy Grail", namely finding the right approach per test module. In the text of the first Holy Grail article ("The First Holy Grail of Test Design") we saw that a first important step is the breakdown of tests into test modules, a step that can make or break your test design (and subsequent test automation).

Right Approach per Test Module

The next step or "second grail" is developing the individual modules. When a good job is done on the module breakdown, each test module should now have a clear scope. This can then lead to two sets of items for the test modules:

  1. Test requirements
  2. Test cases, related to the test requirements

The test requirements are a set of statements describing as comprehensively as possible what should be tested. The best way I have found to write and read them is to think of the words "test if" in front of them. Examples:

  • Coming directly from a system requirement: (test if) "the password must be a minimum of 6 characters"
  • More aimed at the test, only indirectly coming from system requirements: (test if) "a transfer can be made from Mexican to Chinese currencies"

Making test requirements is part "science" and part "art". It is the "analytical phase" of test development, in which you should actually analyze and understand system requirements and not just copy and paste them. The test requirements should show that analysis. We have a more extensive guideline for test requirements, but here are some things to look for:

  • Make cause and effect clear, and mention cause first ("clicking 'Submit' empties all fields")
  • Make condition and effect clear, and mention condition first ("if all fields are populated, ok is enabled")
  • Split complex sentences into small statements
    • It is ok to combine two or more functionalities if this is not adding to complexity (like "ok becomes enabled if both first name and last name are specified")
  • Keep test requirements short. Leave out as many words as you can without losing the essential meaning

After the "analytical" phase of devising test requirements, the next step is the "design" phase of creating the actual test cases. Once the test cases are developed they can be related to the test requirements. Sometimes this a is one-to-one, but in the majority of cases the relation will be many-to-many: one test requirement might be tested in more than one test case, and one test case can verify multiple test requirements.

As in the earlier phases of test development (test module break down and test requirements), the creation of test cases should show added value from the tester. We train both our on-shore and off-shore testers to "use their head before using their hands", meaning think about the test cases while they are developing them. Try to make them smart and aggressive:

  • To get maximum effect from a limited set of test cases
  • To make them aggressive in finding system faults

There are a substantial number of testing techniques available, many of which have been published over the years in books like Testing Computer Software (Cem Kaner, Jack Falk, and Hung Nguyen). The value of these techniques depends on the situation, in our terminology: the scope of your test module. Please make a good study of them, and keep using your own intelligence and creativity. Test development should most of all be an intelligent and creative activity (you have to find issues that the developers, who are also intelligent, overlooked), not just a mechanical one.

From my own experience I have come up with a test design technique that is specifically meant to steer away from too much mechanical testing. I have called it "Soap Opera Testing", since I used the popular format of television "soap operas" as an inspiration. This technique can come in handy if: (1) the business processes in the system under test are complex, and (2) end-users are involved, or can be involved if needed. The idea is to write test cases as if they were an "episode" in a "series", as a way to make them creative and aggressive. For more information please see my article "Soap Opera Testing" which was published in Better Software magazine in February 2004 and is also available on the LogiGear web site in the downloads section.

Conclusion

Regardless of a specific technique, I feel that a combination of "analytical" test requirements that focus on completeness and "creative" test cases that focus on aggressiveness can lead to an optimal result:

  • Completeness in testing functionalities and combinations of functionalities
  • Aggressiveness in finding hard to find bugs
  • Lean design that leads to efficient and maintainable automation

For the automation the use of appropriate "actions" is significant too. This is a topic for the next article on the "third grail" of test design.

Most of all make sure that the scope of the test module is clear and that all test requirements and test cases adhere to the scope. Avoid "sneaky checks", like testing the caption of an OK button in a test module that focuses on a business aspect like an insurance policy premium calculation. Such checks should really go into another test module.

READ MORE >>

The Third Holy Grail of Test Design by Hans Buwalda

This is the last in a series of articles that outline how to do effective and efficient test design. This last crucial step is to write down the test cases as clearly and efficiently as possible.

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

This is the last in a series of four articles that started with "Key Principles of Test Design". In these four articles I present what I view to be three key principles to make test design successful (the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

If you followed the instructions of the previous articles you should now have a list of well-defined and differentiated "test modules". For each test module you should have "test requirements", and you should know what the test cases are going to be. Now a last crucial step is to write down the test cases as clearly and efficiently as possible.

Writing Test Cases at the Right Level of Abstraction

The challenge at this point, the "Third Holy Grail of Test Design", is to write the test cases at the right level of abstraction:

  • Detailed enough to clearly show the intention and logic of the test case: what is the input, what is verified, etc
  • At the same time hiding as many details as possible that are not relevant for the test

This principle is most clearly visible when you use Action Based Testing™ (ABT) or a similar keyword-driven approach. In ABT the tests are written as a sequence of actions with arguments. The actions are the basis of the automation. This allows you to "hide" those steps that are not significant for a test in the implementation of the action.

However, even for manual tests it can make sense to "hide" detailed steps that are not relevant, especially when such details are repeated many times. A common example is logging into the system. Let us say that the manual instruction is:

Enter a user name in the field "User Name", and a password in the field "Password". Then click on the button called "Login".

It is not uncommon to find an instruction like this repeated many times in a set of manual test instructions. Some disadvantages are:

  • Instructions are repeated over and over again, which can be a lot of work.
  • The test cases are hard to read; because of the needless detail it is difficult to see the forest for the trees.
  • If there are changes in the logon screen of the system under test all the test cases have to be updated (or become outdated).
  • In this example the values that actually are interesting are the user name and password. However, they are not specified, only mentioned implicitly. This means that during test execution the tester has to come up with the values over and over again.
In an action-based format, the same instruction becomes a single action line:

          user   password
  logon   hans   logigear

The values are now explicitly specified, while the actual steps needed to log on are not visible. They are "hidden" in the interpretation of the action "logon" (a small sketch of such an action implementation follows the list below). Technically this is a simple step, similar to defining subroutines in a programming language. The important point though is the test design objective of:

  • Showing those details that are relevant for a test, like input values
  • Hiding anything else as much as possible
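
As a sketch (plain Python with stand-in helper functions, not TestArchitect code), the "logon" action hides the three UI steps in one place, so the test lines only show the values that matter:

  # Sketch only: enter_text and click_button stand in for real UI automation calls.
  def enter_text(field, value):
      print(f"enter '{value}' into field '{field}'")

  def click_button(caption):
      print(f"click button '{caption}'")

  def logon(user, password):
      """Implementation of the 'logon' action word. The UI steps live here once;
      if the logon screen changes, only this function needs to be updated."""
      enter_text("User Name", user)
      enter_text("Password", password)
      click_button("Login")

  logon("hans", "logigear")   # corresponds to the 'logon' action line above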

As another example consider these lines. They click a node in a tree, and check whether an item called "parabola" appears in a list:

                           window   tree       tree item path
  click tree item          main     pictures   My Projects/Main/Picture 1

                           window
  wait for window          main

                           window   list               item
  check list item exists   main     picture elements   parabola

These fragments are rich in details, explicitly telling us which item in a tree to click, to wait for a window to respond, and to check for an element in a list. Whether or not this is appropriate depends on the scope of the test:

  • Was the goal to verify the workings of the "pictures" tree and the "picture elements" list? In that case it is good to show the details of this interaction.
  • If the goal was just to see if "parabola" appears as an element in the "picture 1" picture then the details should be avoided.

In the project where this fragment comes from, the goal was just to verify the contents of "picture 1", and therefore the fragment was too detailed. This was more so because the similar fragments (with other values) appeared in many dozens of places throughout the test set. In such a case it is much better to write something like this:

                          project   picture     element
  check picture element   Main      Picture 1   parabola

With this notation the purpose of the check is clearer, the number of lines is reduced, and there will be less maintenance when the system under test undergoes changes.

Another category where it is easy to save on details is action arguments. In many cases arguments like a "zip code" or "phone number" are not relevant for a test. They are just there to complete underlying dialogs. If that is the case, leave the arguments out and make sure the action implementations use suitable default values for them.
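
A quick sketch of that idea (plain Python, hypothetical action and defaults): the implementation supplies sensible default values, so a test line only mentions an argument when it is the point of the test:

  def create_customer(name, zip_code="94066", phone="555-0100"):
      """Hypothetical action: zip code and phone number are usually irrelevant
      to the test, so the implementation fills them in with defaults."""
      return {"name": name, "zip": zip_code, "phone": phone}

  create_customer("John Doe")                    # defaults complete the dialog
  create_customer("Jane Doe", zip_code="00000")  # specified only when it matters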

Conclusion

Exactly what to show and to hide is not always an easy decision, which is why I have named this the third "holy grail". A crucial note here is that the decision is a test design decision, not an engineering one! The purpose of hiding or showing details is not necessarily to make the test as short as possible, like you would when writing a program with subroutines. What you need to do most of all is write clear test cases that:

  • Show explicitly what is relevant for the test allowing the reader to understand the test based solely on the test lines, without having to look into the details of an action
  • Hide those steps and arguments that are not relevant in the scope of the test, to avoid unneeded maintenance, and to make the test easier to read

These types of test design decisions need to come from the testers, not from the automation engineers. Automation engineers can, however, play a useful role in pointing out to testers what is possible, but ultimately the test design decisions belong to the testers.

READ MORE >>

Key Success Factors for Keyword Driven Testing by Hans Buwalda

Keyword driven testing is a software testing technique that separates much of the programming work of test automation from the actual test design. This allows tests to be developed earlier and makes the tests easier to maintain. Some key concepts in keyword driven testing include:

  • Keywords, which are typically base level and describe generalized UI operations such as "click", "enter", "select"
  • Business templates which are typically high level such as "login", "enter transaction"
  • Action Words, or "Actions" for short, which can be both base level and high level, and in their most general form allow earlier defined keywords to be used to define higher level action words

Keyword driven testing is a very powerful tool helping organizations to do more automated testing earlier in the testing process and making it easier to maintain tests over time. As with any complex undertaking, there are "success factors" that can determine whether or not a testing effort will be successful. This paper will outline key success factors for keyword driven testing including base requirements, the vision for automation, success factors for automation, and how to measure success.

Base Requirements


There are numerous requirements that I consider to be "base requirements" for success with keyword driven testing. These include:

  • Test development and automation must be fully separated - It is very important to separate test development from test automation. The two disciplines require very different skills. Fundamentally, testers are not and should not be programmers. Testers must be adept at defining test cases independent of the underlying technology to implement them. Individuals who are skilled technically, the "automation people" (automation engineers), will implement the action words and then test them.
  • Test cases must have a clear and differentiated scope - It is important that test cases have a clearly differentiated scope and that they not deviate from that scope.
  • The tests must be written at the right level of abstraction - Tests must be written at the right level of abstraction such as the higher business level, lower user interface level, or both. It is also important that test tools provide this level of flexibility.

Vision for Automation


It is also important to have a clear vision for automation. Such a "vision" should include things such as:

  • Having a good methodology - It is important to have a good integrated methodology for testing and automation that places testers in the driver's seat. It is also important to employ the best technology that supports the methodology, maximizes flexibility, minimizes technical efforts, and maximizes maintainability.
  • Have the right tool - Any tool that is employed should be specifically designed for keyword based testing. It should be flexible enough to allow for the right mix of high and low level testing. It should allow the testers to quickly build keyword tests, without difficulty. It should also not be overly complicated for automation engineers to use when implementing the automation.
  • Succeed in the three "success factors for automation" - There are three critical success factors for automation that the vision should account for. They are:
    • Test design
    • Automation solution
    • Organization

Success Factors for Automation


Test Design

Test design is more important than the automation technology. Design is the most underestimated part of testing. It is my belief that test design, not automation or a tool, is the single most important factor for automation success. To understand more about test design, see the test design articles earlier in this series.

Comprehensive Automation Architecture

An automation architecture should emphasize methodology over technology, as well as manageability and maintainability. The methodology should control and drive the technology, so that the technology supports the methodology rather than dictating it, and manageability and maintainability are kept in focus.

Organization and management

Organization and management are also very important. Success is highly dependent on how well you organize the process including:

  • Management of the test process
  • Management of the tests
  • Efficient and effective involvement of stakeholders, users, auditors

A plan of approach should be written for test development and automation. In it should be items such as:

  • Scope, assumptions, risks
  • Methods, best practices, tools, technologies, architecture
  • Stakeholders, including roles and processes for input and approvals, and more

The "right" team must also be assembled. This team should include:

  • Test management, responsible for managing the test process.
  • Test development, responsible for the production of tests. Test development should include test leads, test developers, end users, subject matter experts, and business analysts.
  • Automation engineering, responsible for creating the automation scheme for automatic execution. Members of this team include a lead engineer as well as one or more automation support engineers.
  • Support functions, providing methods, techniques, know-how, training, tools, and environments.

For the team there should be a clear division of tasks and responsibilities as well as well defined processes for decision making and communication.

Some Tips to Get Stable Automation

  • Make the system under test automation-friendly. While developers are not always motivated to do that, it pays off. In particular, ask development to add specific property values to the GUI controls for automated identification, like "accessible name" in .Net and Java, or "id" in web controls
  • Pay attention to timing matters. In particular use "active timing", based on the system under test, not fixed amounts of "sleep".
  • Test your automation. Develop a separate test set to verify that the actions work. Make separate people responsible for the automation.
  • Use automation to identify differences between versions of the system under test

How to Measure Success


With any major undertaking, it is important to define and measure "success". There are two important areas of measurement for success - progress and quality.

Progress

You should measure test development against the test development plan. If goals are not reached, act quickly to find the problems. Is the subject matter clear? Are stake holders providing enough input? Is it clear what to test (overall, per module)? Is the team right (enough, right skill set mix)?

You should measure automation and look at things such as implemented keywords (actions) and interface definitions (defined interface dialogs, pages, etc).

You should measure test execution, looking at things such as how many modules have been executed and how many executed correctly (without errors).

Quality

Some of the key quality metrics include:

  • Coverage of system and requirements
  • Assessments by peers, test leads, and by stake holders (recommended)
  • Effectiveness
    • Are you finding bugs?
    • Are you missing bugs?
    • Can you find known bugs (or seeded bugs)?
    • After the system is released, what bugs still come up? You should consider calculating the "Defect Detection Percentage" (Dorothy Graham, Mark Fewster); a sketch of the calculation follows this list
  • Mine your bug base for additional insights
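
For reference, the Defect Detection Percentage as defined by Graham and Fewster is the share of all known defects that testing found before release. A quick sketch of the calculation (the example numbers are made up):

  def defect_detection_percentage(found_by_testing, found_after_release):
      """DDP = defects found by testing / all known defects, as a percentage.
      Defects found after release are the ones testing missed."""
      total = found_by_testing + found_after_release
      return 100.0 * found_by_testing / total

  # Example: 90 bugs found during testing, 10 reported after release -> 90.0
  print(defect_detection_percentage(90, 10))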


Conclusion


It is important to understand that keywords are not magic, but they can serve you well. What is more important is to take the effort seriously and "do it right". Doing it right means that test design is essential, both global test design and the design of individual test cases. Automation should be done but it should not dominate the process. Automation should flow from the overall strategy, methodology, and architecture. It is also very important to pay attention to organization - the process, team, and project environment.

Following the success factors outlined in this paper can lead to a successful implementation of keyword driven testing.


READ MORE >>

Key Principles of Test Design by Hans Buwalda

Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean." The tests should be of a manageable size, and at the same time complete and aggressive enough to find bugs before a system or system update is released.

Test design is also a major factor for success in test automation. This is not that intuitive. Like many others, I initially also thought that successful automation is an issue of good programming or even "buying the right tool". That test design turns out to be a main driver for automation success is something that I had to learn over the years, often the hard way.

What I have found is that there are three main goals that need to be achieved in test design. I like to characterize them as the "Three Holy Grails of Test Design", a metaphor based on the stories of King Arthur and the Round Table. Each of the three goals is hard to reach, just like it was hard for the knights of King Arthur to find the Holy Grail. This article will introduce the three "grails" to look for in test design. In subsequent articles in this article series I go into more detail about each of the goals.

The terminology in this article and the three follow up articles is based on Action Based Testing (ABT), LogiGear's method for testing and test automation. You can read more about the ABT methodology on the LogiGear web site. In ABT test cases are organized into spreadsheets which are called "test modules". Within the test modules the tests are described as sequences of "test lines", each starting in the A column with an "action", while the other columns contain arguments. The automation in ABT does not focus on automating test cases, but on automating individual actions, which can be re-used as often as necessary.

The Three Goals for Test Design


The three most important goals for test design are:

  1. Effective breakdown of the tests

    The first step is to break down the tests into manageable pieces, which in ABT we call "test modules". At this point in the process we are not yet describing test cases; we simply identify the "chapters" into which test cases will fall. A breakdown is good if each of the resulting test modules has a clearly defined and well-focused scope, which is differentiated from the other modules. The scope of a test module subsequently determines what its test cases should look like.

  2. Right approach per test module

    Once the break down is done each individual test module becomes a mini-project. Based on the scope of a test module we need to determine what approach to take to develop the test module. By approach I mean the choice of testing techniques used to build the test cases (like boundary analysis, decision tables, etc), and who should get involved to create and/or assess the tests. For example, a test module aimed at testing the premium calculation of insurance policies might need the involvement of an actuarial department.

  3. Right level of test specification

    This third goal is where you can win or lose most of the maintainability of automated tests. When creating a test case try to specify those, and only those, high-level details that are relevant for the test. For example, from the end-user perspective "login" or "change customer phone number" is one action; it is not necessary to specify any low-level details such as clicks and inputs. These low-level details should be "hidden" at this time in separate, reusable automation functions common to all tests. This makes a test more concise and readable, but most of all it helps maintain the test since low-level details left out will not have to be changed one-by-one in every single test if the underlying system undergoes changes. The low-level details can then be re-specified (or have their automation revised) only once and reused many times in all tests.
    In ABT this third principle is visible in the "level" of the actions to be used in a test module. For example, in an insurance company database, we would write tests using only "high-level" actions like "create policy" and "check premium", while in a test of a dialog you could use a "low level" action like "click" to see if you can click the OK button.

Conclusion


Regardless of the method you choose, simply spending some time thinking about good test design before writing the first test case will have a very high payback down the line, both in the quality and the efficiency of the tests.

READ MORE >>

Capitalizing Testware as an Asset by Hans Buwalda

Companies generally consider the software they own, whether it is created in-house or acquired, as an asset (something that could appear on the balance sheet). The production of software impacts the profit and loss accounts for the year it is produced: The resources used to produce the software result in costs; methods, tools or practices that reduce those costs are considered profitable.

Software testing is generally regarded as an activity, not a product: the test team tests the products of the development team. In that sense testing is seen in terms of costs and savings: The activity costs money; finding bugs early saves money. Test automation can reduce the cost of the testing itself.

Managing the testing effort in financial terms of profit and loss (costs and savings) is a good thing, particularly if it leads managers to make conscious decisions about the amount of testing that should be performed: More testing costs more, and less testing increases risks, which are potential (often much higher) costs down the line.

Very few companies think of software tests as products, or in financial terms, company assets. Test teams are not seen as "producing" anything. This is unfortunate, since it underestimates, particularly in financial terms, the value of good "testware".

The underlying reasons for not treating testware as a long-term asset are hardly surprising:

  • In manual testing, the bulk of the hours are spent executing tests against the system, even if test cases are documented in a test plan.
  • In most test automation projects, the test scripts are not well architected and too sensitive to system changes.

If an organization begins to consider its tests as assets, then it can significantly enhance the way that it approaches testing. Consider the following:

  • Test cases for your application have a definite value, and just like any other capital asset, can depreciate over time as the underlying application changes.
  • Well-written test cases, along with thoroughly documented requirements and specifications, are one of the few ways to consolidate the 'intellectual capital' of your team members. With today's global teams, and the increasing challenge of retaining engineers, especially overseas, being able to retain knowledge as people come and go is critical to the success of your testing (and the entire product development) effort.
  • Well-automated tests can be re-used over and over again, thus forming assets which produce profits for the company.

So how can you apply this idea at your company?

Creating automated tests is the best way I've found to maximize the output of your investment in software testing. Not only does test automation reduce your costs (a positive impact to your P&L), but well-designed test automation is also a valuable asset (a positive impact on the balance sheet of the company) that can be used across many different versions of your product, even as you switch between platforms!

  • As much as possible, define your tests at the 'business process' level, leaving out unneeded details of the application under test, like its UI build-up or screen flow. Business processes change less frequently than the systems that are supporting them, so your test will require less maintenance (i.e. depreciate less quickly.)
  • The tests should be executable either automatically or manually, so that they still provide value even when the system has changed and some updates to the automation are required. Keyword-driven testing is a great example of how tests can be defined in a format that can be executed either way.
  • Remember that test automation tools are not silver bullets. To maximize the output of your investment in test automation, you must combine good methodology and technology. A poorly planned test automation effort can quickly become a burden on your organization that provides little value.

READ MORE >>

Bonus Bugs by Hans Buwalda

Hans Buwalda discusses “bonus bugs,” bugs caused by fixes or code changes and how to avoid them from the point of view of the developer, tester and manager.

Bonus bugs are the major rationale for regression testing in general and test automation in particular, since test automation is the best way to quickly retest an entire application after each round of code changes.

Since this is probably a 'bonus' you want to avoid, how do we prevent the bonus bugs from occurring, and how do we detect them when they have been introduced? I will give some notes here from the perspective of the developer, the tester and the manager respectively.

Let's first talk about the developer. A developer can do quite a lot to reduce the chances of bonus bugs. Today's systems are becoming more and more complex, and this complexity only increases over time as changes to the system are made. Any change can easily trigger a problem somewhere else, thus producing a bonus bug.

There is a lot written about commenting and documenting code, which I will not go into here, but whatever standard you adhere to (or are told to adhere to), make sure that somebody can easily "inherit" your code. It should take minimal energy for somebody to "decipher" and maintain the code you have written. Code should be written in small blocks, each of which starts with a meaningful comment. For example, if there is something that you want the next person to know about the code (e.g. some technical pitfall that you had to work around), state it explicitly in the code comments.

Another good policy is to have code changes reviewed and approved by either a peer programmer, or even better by a supervising "architect" who understands how the system is built up and what the consequences of system changes could be.

From the point of view of the tester, there are two main items to worry about: test design and level of automation.

Test design is one of the most underestimated topics in IT. Most tests that I encounter in companies and other organizations are "lame"; they simply follow the system requirements one by one and don't even attempt to combine several different parts of the system functionalities with each other in creative ways that could reveal unexpected problems, like bonus bugs. Even though requirement-based tests are useful, they have a low "ambition level", and it can pay off to allocate time and resources to make more aggressive tests.

A high level of test automation will greatly enhance your capability to catch the bonus bugs before they reach the release. To get to such a high level, simply buying a test tool will not be enough. A well thought-out method of test automation, such as keyword-driven testing, is essential, combined with training and coaching by experienced test automation experts.

Finally, a few words from the perspective of the manager. Here the recommendation is in fact quite simple: determine what bonus bugs can cost, and what it is worth to prevent them. This is a business estimate and decision: having bonus bugs can cost money; efforts to prevent them cost money too. Effects of bonus bugs (or any other kind of bugs) can typically be loss of time before or after system release, and/or decreased appreciation of you and your company by end users. Preventing bonus bugs takes extra time and money to follow policies and procedures for development and testing, which can include reviews of code and setting up a high level of test automation.

By understanding how and why bonus bugs get introduced into applications, we can both prevent them from being introduced and find them when they are. This takes a combined effort from developers, testers, and managers, and it is an important step in ensuring that your end product satisfies your customers and other stakeholders.

READ MORE >>

Business Test Policies by Hans Buwalda

In a previous newsletter I discussed Test Governance, the topic of organizing and managing testing activities in an organization. In this article, I want to discuss something called "business test policies." These are statements that serve as the basis for Test Governance and describe how testing is positioned in the overall company strategy, environment, and culture.

Business test policies give the corporate perspective on testing (and test automation), using explicit policy statements. An example of a business test policy is "Performance testing is a responsibility of the system development groups", or "tests and their automation are regarded as company assets and need to be managed accordingly".

Not many companies or institutions will have developed such policies, but it makes sense to spend some time on them, since software quality is critical to the business and testing is hard to organize and manage. Considering that testing makes up about 30% of costs in a typical IT organization, it deserves the attention.

Let me give you the bad news first: to be effective, business test policies need to come from upper level management. Directions on testing, like how much it may cost, are business decisions: too much testing means too much expenditure, while too little testing introduces risks that threaten the company's revenue. This means test managers need to engage in deep discussions with upper level management about the testing objectives, which can be intimidating. Due to their experience with these sorts of discussions, external consultants can be quite helpful in these situations.

The good news: developing good policies shouldn't take too long. Most companies I have been in already have some sense of the position of testing. For example, in a recent discussion with a major technology company about a consumer product, they told me the product must not contain any bugs that are visible to the user; another company dealing with specialized geological data was generally tolerant of bugs in dialogs and controls, as long as the underlying data was flawless.

Policy statements need to be meaningful, not just "lip service". A statement along the lines of "testing is good, bugs are bad" is not enough. The best way to think about it is in terms of money: every statement has cost consequences, so is it important enough to justify the cost?

Thinking about testing in business terms is challenging. Testers should not try to, or be expected to, set their own goals "in a bubble" (i.e., "we should test because it is good to test"). Unfortunately, this is the case more often than not, and it leads to a lack of commitment from the rest of the organization to the testing effort. Testing costs time and money, so there should be a business reason, coming from a business manager, to test. A business manager is responsible for costs and profits: he or she is accountable for money spent on testing, but also responsible if the company loses money because a system was released without enough testing. For a tester or test manager, life is much more comfortable if there is a clear assignment from the business, i.e., what to test and what the budget is.

This leads to the hardest part of getting business test policies in place: if you as a test manager want to establish them effectively, it is best to think in business terms and address business issues, without even considering testing at first. I call this a "U-turn": you step out of your testing world, engage in a business discussion, and then translate the business considerations back into testing and test automation considerations.

Here are some of the concerns that an organization can formulate business test policies for:

  • What is the significance of testing and how much can be spent on it?
  • How does testing connect to critical success factors of the company?
  • Do we have problems, and if yes, what are their causes?
  • When should testing be done in the system development life cycle?
  • Who should be involved in testing (test development, assessment, reporting) and who is responsible for testing?
  • What testing expertise is needed, and how will it be provided?
  • Is testing centralized or decentralized in the organization?
  • Are there methods and tools that should be used?
  • What degree of test automation should be used?
  • What (if any) degree of outsourcing of (1) testing, (2) development of tests and/or (3) automation should be used?

Most importantly, keep any exercise in defining business test policies practical. That way, the policies can contribute to an effective and efficient test process.

READ MORE >>

Test Governance

Hans Buwalda, LogiGear, 12/29/2005

Software testing is commonly perceived as a chore: products made by other developers have to be verified. Chores are something you don’t want to spend too much attention and money on.

With our Action Based Testing method we have shifted the focus from “testing” to “test development” (with automated execution). This works because creating tests becomes a more systematic activity that is easier to plan and control, resulting in tangible and valuable products.

Another shift I would like you to think about is one of focus: instead of regarding software testing as a derivative of software development, give it a separate, central focus and manage it as a key asset of the company. To summarize this thought, I use the term “Test Governance”.

There is a good case to be made to do so:

  • Testing is a large part of the effort in IT, typically about 30%
  • Good testing is very hard to do. It needs skilled staff and there is a lot to learn
  • Testing is often on the critical path of system development and maintenance
  • Test automation is a potential solution, but is itself very hard to do successfully
  • Testing needs to be organized well, including who is responsible for which tests and how to report results

When discussing Test Governance, a word of warning is in order. It is important to be practical about testing. Thinking about Test Governance should not lead to introducing all kinds of bureaucracy that nobody cares about. Be careful with impractical standards and heavy life cycles that miss their purpose.

In my view three elements should be part of Test Governance:

  • What are the “business test policies” around testing
  • How should testing be organized in projects
  • How should testing be organized across projects

Business test policies are statements that describe, in broad terms, an organization's point of view on testing. I will discuss these in a future newsletter article. For now, it is sufficient to know that they should describe the importance of testing to the organization's business and how testing should be organized.

Testing activities are usually part of system development projects. Sometimes they are also organized as separate projects, usually to introduce test automation. Most books and articles on testing are about the activities within projects, and rightly so. Consider creating a standard plan of approach for testing projects that deals with questions like responsibilities, communication structures, resources and skills, and, of course, planning, budget, and risks.

However, projects tend to have a very strong “solution focus”: what do we need to achieve, and how do we get there within the given budget and timeline? Projects are not a good environment in which to learn and improve best practices. Therefore, I recommend considering additional structures that have an “improvement focus”. This could be something traditional, like a central test support department, or a lighter-weight solution, like one or more coordinating committees with members from various departments.

In addition to formalized structures, consider “soft” ones, where staff members meet and share know-how and experience. For example, one could introduce “Special Interest Groups” (SIGs) that hold regular informal meetings, typically in the off-hours. Members of a SIG share a common interest, for example “test design”, “test automation”, or “test management”, and an evening is typically structured around a presentation and discussion. SIGs can also run sites on the intranet. All of these activities provide an inexpensive, lightweight way to improve competence, and they also help people “find” each other for advice and discussion of project matters.

READ MORE >>

Presentations

Agile Support On-Demand - A Cloud-Like Approach to Testing Services

A team might be done with the work items in a sprint, but it's often the case that the development and automation of functional tests aren't finished. To get automated testing “done” in agile sprints, handing the excess test development and automation workload over to an on-demand service group is an efficient, viable option.

This webinar outlines how you can implement a process to relieve teams and keep automated testing in sync with development by employing “Outsourcing 2.0”.

VIEW THE PRESENTATION >>

Automating Testing in Real-Time Environments

Agile application delivery requires test automation, as well as ready application environments and infrastructure to efficiently execute large-scale testing. Learn how using on-demand virtual environments enables you to rapidly scale testing and remove the constraints that commonly hold back testing cycles, resulting in both faster testing and increased test coverage.

According to Forrester Research, nearly 50% of Agile teams can't automate more than 29% of their tests. Furthermore, recent voke research indicates that 63% of organizations experience development delays and 68% of organizations experience QA delays due to waiting for an environment.

Presenters from LogiGear and Skytap outline how organizations can automate more tests, shorten market release cycles, and lower the cost of development per release by combining test automation with on-demand production environments.

VIEW THE PRESENTATION >>

Scalability of Tests - A Matrix

LogiGear Chief Technology Officer Hans Buwalda authored this article in TechWell, discussing the scalability of unit, functional, and exploratory tests. Since many automation tools focus on functional testing, Hans proposes options to make this type of testing easier to manage.

VIEW THE PRESENTATION >>

Test Automation: Garbage-in = Garbage-out

In this video Hans Buwalda outlines how to design and organize tests for efficient automation, and how the leading test methods, Action Based Testing (ABT) and behavior-driven development (BDD), enable good test design.

VIEW THE PRESENTATION >>

Successful Testing by Design - Hans Buwalda

In this webcast Hans Buwalda examines the importance of test design for maintainable automation and how Action Based Testing (ABT) facilitates successful test design.

VIEW THE PRESENTATION >>

Automate Testing within the Same Sprint

Automating tests in the same development sprint can be a game changer. This webcast outlines how it can be done by following the same processes the TestArchitect software development team uses.

VIEW THE PRESENTATION >>

Halliburton's Last Mile to Continuous Integration

Cheronda Bright of Halliburton shares how she leveraged LogiGear's expertise to integrate TFS, MTM and TestArchitect to allow testing to keep up with rapid development cycles.

VIEW THE PRESENTATION >>

Automated Testing with Keywords

Like Agile, there can be a lot of variation in how keyword testing is applied. In this webcast, Hans Buwalda, the pioneer of the keyword method, presents how to make automation with keywords effective.

VIEW THE PRESENTATION >>

Michael Hackett discusses how to avoid technical debt

Cut a little testing here and a little there and before you know it, you have a big pile of technical debt. In this webcast, Michael Hackett offers some tips on how to avoid a nightmare testing situation.

VIEW THE PRESENTATION >>


Videos

How to get Automated Testing “Done”

Hans discusses how to apply better test design to drive better automation, a number of technical strategies, what developers and product owners can do to help, and how to handle the testing and automation work that is left over after a sprint has finished.

WATCH THIS VIDEO >>

Application Performance Across The Software Development Lifecycle

Two trends are driving major changes in the way performance testing is done in software development teams today: an increase in the pace of development and the requirement for multi-screen applications. With these trends in mind, teams can no longer push off performance testing tasks until the last minute before a release.

WATCH THIS VIDEO >>

Paul Holland on Rapid Software Testing, Part 1

In this video interview, testing consultant Paul Holland discusses rapid software testing with Hung Nguyen.

WATCH THIS VIDEO >>

Overview of TestArchitect for Visual Studio

A discussion of the module-based approach to coded UI test automation and the TestArchitect for Visual Studio automation tool.

WATCH THIS VIDEO >>

Harry Robinson Talks Training

At VISTACON 2011, Harry sat down with LogiGear Sr. VP Michael Hackett to discuss various training methodologies.

WATCH THIS VIDEO >>

Michael Hackett: Agile Automation

Agile Automation

Michael Hackett, Senior Vice President, LogiGear Corporation

WATCH THIS VIDEO >>

Views from Around the World

Michael Hackett, LogiGear Senior VP, asks conference participants, "What is the most important issue to resolve in global software engineering?"

WATCH THIS VIDEO >>

VISTACON 2010 Keynote - The Future of Testing by BJ Rollison

THE FUTURE OF TESTING

BJ Rollison - Test Architect at Microsoft VISTACON 2010 - Keynote

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 1/4

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "A Primer - New Roles for Traditional Testers in Agile"

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 1/4 (cont.)

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "A Primer - New Roles for Traditional Testers in Agile"

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 2/4

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 2/4 (cont.)

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 3/4

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"

WATCH THIS VIDEO >>

New Roles for Traditional Testers in Agile – Part 4/4

MICHAEL HACKETT - Certified ScrumMaster

Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"

WATCH THIS VIDEO >>