Errors and the Results File

You can expand the text of an error message or have Silk Test Classic find the error messages for you. To navigate from a test description in a results file to the actual test in the test plan, click the test description and select Results > Goto Source.

Navigating to errors in the script

There are several ways to move from the results file to the actual error in the script:

  • Double-click in the margin next to an error line to go to the script file that contains the 4Test statement that failed.
  • Click an error message and select Results > Goto Source.
  • Click an error message and press Enter.

What the box icon means

Some expanded error messages are preceded by a box icon and three asterisks.

If the error message relates to an application’s behavior, as in Verify selected text failed, Silk Test Classic opens the Difference Viewer. The Difference Viewer compares actual and expected values for a given test case.
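For example, a failed 4Test Verify call is the kind of behavior-related error that opens the Difference Viewer. The following is a minimal sketch; the window identifiers TextEditor and TextField are hypothetical declarations, while Verify and GetMultiText are standard 4Test:

```
testcase VerifySelectedText ()
    // Hypothetical expected values for this sketch
    LIST OF STRING lsExpected = {"first line", "second line"}
    // If the actual text differs from lsExpected, the test case fails,
    // the results file logs the expected and actual values, and the
    // Difference Viewer displays them side by side
    Verify (TextEditor.TextField.GetMultiText (), lsExpected, "selected text")
```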

Application appearance errors

When you click a box icon followed by a bitmap-related error message, the bitmap tool starts, reads in the baseline and result bitmaps, and opens a Differences window and Zoom window.

Bitmap Tool

In the Bitmap Tool:

  • The baseline bitmap is the expected bitmap, which serves as the baseline for comparison.
  • The results bitmap is the actual bitmap that is captured during the test run.
  • The Differences window shows the differences between the baseline and results bitmaps.

The Bitmap Tool supports several comparison commands, which let you closely inspect the differences between the baseline and results bitmaps.

Finding application logic errors

To evaluate application logic errors, use the Difference Viewer, which you open by clicking the box icon that precedes an error message relating to the application's behavior.

The Difference Viewer

Clicking the box icon opens the Difference Viewer’s double-pane display-only window. It lists every expected (baseline) value in the left pane and the corresponding actual value in the right pane.

Every occurrence where the expected and actual values differ is highlighted. On color monitors, differences are marked with red, blue, or green lines, which denote deleted, changed, and added items, respectively.

When you have more than one screen of values or are using a black-and-white monitor, use Results > Next Result Difference to find the next difference. Use Update Expected Values, described next, to resolve the differences.

Updating expected values

Upon inspecting the Difference Viewer or an error message in a results file, you might notice that the expected values are not correct. For example, when the caption of a dialog changes and you forget to update a script that verifies that caption, errors are logged when you run the test case. To have your test case run cleanly the next time, you can modify the expected values with the Update Expected Value command.

Note: The Update Expected Value command updates data within a test case, not data passed in from the test plan.
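For example, suppose a script verifies a dialog caption that has since changed in the application. In the following sketch, the window identifier FindDialog is a hypothetical declaration, while Verify, Invoke, GetCaption, and Close are standard 4Test:

```
testcase VerifyFindCaption ()
    // FindDialog is a hypothetical window declaration for this sketch
    FindDialog.Invoke ()
    // If the application's caption changed, for example, from "Find" to
    // "Find and Replace", this Verify fails and logs the mismatch; you can
    // then resolve it with the Update Expected Value command
    Verify (FindDialog.GetCaption (), "Find", "dialog caption")
    FindDialog.Close ()
```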

Debugging tools

You might need to use the debugger to explore and fix errors in your script. In the debugger, you can use the special commands available on the Breakpoint, Debug, and View menus.

Marking failed test cases

When a test plan results file shows test case failures, you might choose to fix and then rerun them one at a time. You might also choose to rerun the failed test cases at a slower pace, without debugging them, simply to watch their execution more carefully.

To identify the failed test cases, make the results file active and select Results > Mark Failures in Plan. All failed test cases are marked, and the test plan file becomes the active file.