TestArchitect Corner: Capture Screens of Application Under Test during Automation Execution

Trying to understand why failures, errors, or warnings occur in your automated tests can be quite frustrating. TestArchitect relieves this pain.

Debugging blindly can be tedious work—especially when your test tool does most of its work through the user interface (UI). Moreover, bugs can sometimes be hard to replicate when single-stepping through a test procedure.

Suppose you executed a long automated test that contains a good deal of interaction with the user interface of the Application Under Test (AUT): mouse clicks, keyboard input, menu item selections, and so on. When viewing the generated test results, it may be difficult to understand why some failures, errors, or warnings occurred. It would be easier to identify the issues if the test results were accompanied by snapshots of the AUT’s display just before, during, and after each interaction between the test and the AUT’s UI.

To address this problem, TestArchitect can automatically take snapshots of the AUT’s display at critical points during test execution. Because you can observe the display state of the AUT at each stage of the test, you gain a better grasp of where and how a test or application is going wrong. Users can tell TestArchitect to capture screenshots during test automation with each UI-interactive action. These screenshots help you visualize what took place so you can more easily debug any problems that have occurred.
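
To illustrate the general idea only (this is not TestArchitect’s own mechanism or API), here is a minimal sketch using Selenium and Python that captures the AUT’s display immediately before and after each UI-interactive action; the URL, locator, and file names are placeholders:

```python
import time
from pathlib import Path

from selenium import webdriver

SHOT_DIR = Path("screenshots")
SHOT_DIR.mkdir(exist_ok=True)

def with_screenshots(driver, action_name, action):
    """Save a screenshot of the AUT just before and just after a UI-interactive action."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    driver.save_screenshot(str(SHOT_DIR / f"{stamp}_{action_name}_before.png"))
    result = action()  # the actual click, keystroke, menu selection, etc.
    driver.save_screenshot(str(SHOT_DIR / f"{stamp}_{action_name}_after.png"))
    return result

# Placeholder usage: any Selenium-driven interaction can be wrapped this way.
driver = webdriver.Chrome()
driver.get("https://example.com")
with_screenshots(driver, "click_link",
                 lambda: driver.find_element("link text", "More information...").click())
driver.quit()
```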

The number of screenshots retained by TestArchitect is determined by user settings in the Screenshot recording panel of the Execute Test dialog box just prior to the test run.

Users can specify the events (Passed, Failed, or Warning/Error) for which associated screenshots are to be retained. They can also specify the number of preceding screenshot sets to be retained for each qualified event; a single screenshot set consists of all the screenshots captured during a single UI-interactive action. The image below indicates that three screenshot sets are to be retained and logged for each Failed and Warning/Error event of the test: the screenshot set of the associated failed/warning/error action and the screenshot sets of the two UI-interactive actions preceding it. Note that if the Keep field is left blank, screenshot sets for all preceding UI-interactive actions are retained.
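
As a hedged illustration of this retention rule (not TestArchitect’s internal implementation), the following Python sketch keeps a rolling window of the most recent screenshot sets and logs the whole window only when a qualified event occurs; all names and file paths are hypothetical:

```python
from collections import deque

KEEP = 3  # as if 3 were entered in the Keep field: the failing set plus the two preceding sets

recent_sets = deque(maxlen=KEEP)  # each entry is one screenshot set (one UI-interactive action)
retained_for_log = []

def record_action(screenshot_set, outcome):
    """Buffer this action's screenshot set; flush the window when a qualified event occurs."""
    recent_sets.append(screenshot_set)
    if outcome in ("failed", "warning", "error"):
        retained_for_log.extend(path for s in recent_sets for path in s)

record_action(["login_before.png", "login_after.png"], "passed")
record_action(["search_before.png", "search_after.png"], "passed")
record_action(["submit_before.png", "submit_after.png"], "failed")
print(retained_for_log)  # the failed action's set plus the two sets captured before it
```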

Screenshots captured during testing are displayed in the Result Details and Failure/Error Summary tabs of local test results.

Once users click on a captured screenshot thumbnail in the Result Details tab, the screenshot viewer appears.

The screenshot viewer incorporates a number of functions (below).

  1. Fit the screenshot to the Image Viewer panel (full screen)
  2. Go to the previous recorded UI-interactive action
  3. Go to the next recorded UI-interactive action
  4. Click on the action name to launch TestArchitect Client, which displays a detailed description of the UI-interactive built-in action
  5. Click on the action line number text to launch TestArchitect Client, which displays the corresponding line in its execution context

TestArchitect doesn’t only snap pictures. When screenshot recording is enabled, it can also record video of the entire automation process, storing it at the end of the test run as an .mp4 file on the user’s machine.
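
As a loose illustration of how such a recording could be assembled (this is not how TestArchitect itself produces its video), the sketch below stitches a folder of captured frames into an .mp4 using OpenCV; the folder path, frame rate, and codec are assumptions, and a non-empty screenshots folder is assumed:

```python
import glob

import cv2  # OpenCV, assumed available (pip install opencv-python)

frames = sorted(glob.glob("screenshots/*.png"))  # frames captured during the run
first = cv2.imread(frames[0])
height, width = first.shape[:2]

writer = cv2.VideoWriter("test_run.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),  # MPEG-4 codec
                         2,                                 # playback at 2 frames per second
                         (width, height))
for path in frames:
    frame = cv2.imread(path)
    writer.write(cv2.resize(frame, (width, height)))  # normalize frame size across captures
writer.release()
```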

This screenshot recording feature is supported not only for tests executed on desktop computers, but also on Android and iOS mobile devices. To learn more about this feature, visit testarchitect.com and download TestArchitect for free. See how beneficial this time-saving feature can be when you’re testing.

Van Pham
Van Pham has more than 10 years of experience in software automation testing on various platforms, as well as in Customer/Product Support. A key member of the organization, Van mentors, manages, and motivates LogiGear’s Support teams to provide an exceptional customer support experience. Van has a B.S. in Software Engineering from National University and an M.S. in Engineering Management.
