Test Suite

Learn how to test and validate changes made to your processes.

Overview

The Test Suite empowers you to test and validate any changes made to your processes. After modifying a process, run the Test Suite to compare the new results against expected outcomes. This makes it easy to assess the impact of your changes and ensure everything still works as intended.

How It Works

Test changes to your processes in three steps:

1. Create Test Cases

Create test cases from successful process runs. These runs serve as baselines that define the expected behavior and output when a process operates as intended.

2. Run Tests

Run tests to compare your current process against your test cases. The Test Suite analyzes output to identify changes between your baseline and current version.

3. Review Changes

Review the test results to check for any differences between the expected and actual outcomes.

Getting Started

Get started by setting up your tests. To do this, you'll need to add test cases and identify output.

1. Add Test Cases

A test case is a process run that serves as the baseline for how a process should perform. Test cases establish the standard against which future runs are compared. They can be added from existing runs directly within the Test Suite or from individual process runs.

Add Test Cases From the Test Suite
  1. From the left navigation menu, go to the Test Suite.

  2. Click on the Test Cases tab.

  3. For a given process, click on + Add Test Cases.

  4. Select Draft or Published to filter runs.

  5. Select one or more process runs to add as test cases.

  6. Click Add Selected.

Add Test Cases From Processes
  1. From the left navigation menu, go to Processes.

  2. Select a process.

  3. Access a process run.

    1. To view a previous run, click on View Runs and choose a run from the list.

    2. To start a new run, click on Run.

  4. Click the menu button, then select Add to Test Suite.

  5. Confirm the addition of the test case by clicking Add.

2. Identify Output

The next step is to define what a successful outcome looks like for your test case. This is done by starring data: highlighting important data elements in a process run. By doing so, you tell the Test Suite which output (e.g., total cost, destination, payment status) should be validated to ensure a process runs as expected.
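Conceptually, a test case pairs a baseline run with the starred values you expect future runs to reproduce; only the starred elements are compared. The sketch below is a minimal Python illustration of that idea using assumed names (`TestCase`, `DataElement`, `starred`); it is not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str       # e.g. "total_cost", "destination", "payment_status"
    value: object   # the value produced by the baseline run
    starred: bool   # True if you starred this element in the run

@dataclass
class TestCase:
    process_id: str
    baseline_run_id: str
    elements: list[DataElement]

    def expected_output(self) -> dict[str, object]:
        """Only starred elements are validated when tests run."""
        return {e.name: e.value for e in self.elements if e.starred}
```

In this model, un-starred elements can change freely between runs without affecting test results.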

How to Star Data
  1. From the left navigation menu, go to Processes.

  2. Select a process.

  3. Access a process run.

    1. To view a previous run, click on View Runs and choose a run from the list.

    2. To start a new run, click on Run.

  4. Select data by clicking on the name of a data element in your process.

  5. In the Information panel on the right, click the star icon (⭐️) next to the data element. The star will turn yellow to indicate the value is now starred.


Running Tests

Once you've added test cases and identified output, you're ready to run tests. During a test run, the Test Suite runs the current version of each selected process for every test case. Then, it compares the output generated by the runs against the expected output in your test cases.
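As a rough mental model, a test run re-executes the current version of the process once per test case and checks each starred value in the produced output against the expected value. The following sketch continues the hypothetical names from the earlier `TestCase` illustration; `process.run(...)` is an assumed stand-in for replaying the baseline run's inputs, not an actual API.

```python
def run_tests(process, test_cases):
    """Conceptual test-run loop: replay each test case and compare starred outputs."""
    results = []
    for case in test_cases:
        # Hypothetical call: run the current process version with the
        # test case's inputs and collect its output as a name -> value dict.
        actual = process.run(case.baseline_run_id)
        expected = case.expected_output()
        mismatches = {
            name: {"expected": want, "actual": actual.get(name)}
            for name, want in expected.items()
            if actual.get(name) != want
        }
        results.append({
            "test_case": case.baseline_run_id,
            "status": "passed" if not mismatches else "failed",
            "mismatches": mismatches,
        })
    return results
```

Runs that pause for guidance or are skipped would carry their own status; the sketch covers only the pass/fail comparison.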

Starting a Test Run

Follow these steps to start a test run:

1. From the left navigation menu, go to the Test Suite.

2. In the Test Suite, select Run Tests.

3. Select the processes to test. A Run Tests pop-up will appear, displaying a list of processes to select from:

  • Ready for Testing: At the top of the list, you'll see processes that are ready for testing. All eligible processes are selected by default. You can uncheck processes to exclude them from the test run.

  • Ineligible for Testing: At the bottom of the list, you'll find processes that cannot be tested yet. These will be categorized as either Missing Test Cases or Missing Outputs. You cannot select these processes until you've added test cases or identified output for them (see the sketch after these steps).

Note: The latest version of each process will be selected, ensuring you're always testing against the most recent version.

4. Confirm the test run by clicking on Run Tests.
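The eligibility split above comes down to two prerequisites per process: at least one test case and at least one starred output. A minimal illustrative check, with assumed parameter and status names:

```python
def testing_eligibility(test_cases, starred_output_names):
    """Return why a process is not yet testable, or 'ready' (illustrative)."""
    if not test_cases:
        return "missing_test_cases"   # no baseline runs added yet
    if not starred_output_names:
        return "missing_outputs"      # no data elements starred for validation
    return "ready"
```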

You can also run tests for a single process.

Stopping a Test Run

To stop a test run in progress, click Stop Test Run on the Test Run page.


Reviewing Test Run Results

All your test runs are summarized in the Test Runs tab of the Test Suite. Access the results of a specific test run by clicking on it from this overview page.


Test Run Metrics

Displayed at the top of a Test Run are the following metrics:

  • Passed Runs: The number of runs where all the output matches the expected results.

  • Failed Runs: The number of runs where one or more outputs do not match the expected results.

  • Guidance Required: The number of runs that paused during execution and required user guidance to proceed. These runs are neither passed nor failed until guidance is provided.

  • Skipped Runs: The number of runs that were skipped and not executed.

By default, these metrics are displayed for the entire test run. Click into a process to view the metrics for that specific process.
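To make these definitions concrete, the sketch below tallies a list of hypothetical run results into the four buckets described above. It assumes each result carries a `status` field; the field and status names are illustrative, not the product's schema.

```python
from collections import Counter

def summarize(results):
    """Count runs per status and compute a pass percentage (illustrative)."""
    counts = Counter(r["status"] for r in results)
    total = sum(counts.values()) or 1  # avoid division by zero for empty runs
    return {
        "passed": counts.get("passed", 0),
        "failed": counts.get("failed", 0),
        "guidance_required": counts.get("guidance_required", 0),
        "skipped": counts.get("skipped", 0),
        "pass_percentage": round(100 * counts.get("passed", 0) / total, 1),
    }
```

Viewing a single process corresponds to summarizing only that process's results before counting.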

Process Breakdown

Below the overall summary on the Test Run page is a list of processes included in the test run. For each process, you can see a summary of the overall status, pass percentage, and run metrics. Click into a process to drill down into individual test cases.

Test Case Breakdown

Once you've clicked into an individual process within a test run, you'll see a breakdown by test case. Click on any test case to view a side-by-side comparison between the expected output and the test run output.


Addressing Test Failures

Test failures generally occur in one of the following scenarios:

Output Mismatch

The test fails because the expected output doesn't match the current output from your process. To address this issue, you need to analyze the discrepancy and ask: is the new output correct?

  • If no, there is an issue with your process. Fix the automation logic causing the incorrect result.

  • If yes, the test case is outdated. Remove the test case and add a new one that contains the updated output.

Output Not Found

The expected output is missing entirely, pointing to an issue with the process itself. To address this, update your process to ensure it generates the expected output. This involves debugging your automation workflow and logic to understand why the data isn't being generated.
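In other words, each starred output can fail in one of two ways: the run produced the data element but with a different value (output mismatch), or the run did not produce the element at all (output not found). A minimal sketch of that distinction, with hypothetical names:

```python
def classify_output(name, expected_value, actual_output):
    """Classify a single starred output against a run's output dict (illustrative)."""
    if name not in actual_output:
        return "output_not_found"   # the process no longer produces this data element
    if actual_output[name] != expected_value:
        return "output_mismatch"    # produced, but the value differs from the baseline
    return "passed"

# Example (hypothetical values):
# classify_output("payment_status", "paid", {"total_cost": 120})            -> "output_not_found"
# classify_output("payment_status", "paid", {"payment_status": "pending"})  -> "output_mismatch"
```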


Removing Test Cases

Remove invalid or outdated test cases from your Test Suite to keep it up to date. Test cases can be removed directly from the Test Suite or from individual process runs.

Remove Test Cases from the Test Suite

  1. From the left navigation menu, go to the Test Suite.

  2. Click on the Test Cases tab.

  3. Click on a process to see its associated test cases.

  4. Select one or more test cases to remove.

  5. Click on Remove from Test Suite.

  6. Confirm the removal of the test case(s) by clicking Remove.

Remove Test Cases from Process Runs

  1. From the left navigation menu, go to Processes.

  2. Select a process.

  3. Access a process run.

    1. To view a previous run, click on View Runs and choose a run from the list.

    2. To start a new run, click on Run.

  4. Click the menu button, then select Remove from Test Suite.

  5. Confirm the removal of the test case by clicking Remove.
