74. Foretify Manager API (Python SDK)

Foretify Manager's Python SDK (Software Development Kit) allows you to interact with Foretify Manager's API. For example, you can extract data from a test suite and use it to create reports, graphs and other project-tracking materials. Foretify Manager’s Export to CSV feature offers a basic capability, but if that does not meet your needs, you can create Python scripts that incorporate test suite data into a format of your choosing. You can then execute these scripts from Foretify Manager as a user command (action).

The SDK provides, in the ftx namespace, a few modules:

  • shell: abstract APIs to simplify the interaction with Foretify Manager (as explained in detail in Foretify Manager SDK APIs below)
  • clients: concrete wrappers to the HTTP API exposed by Foretify Manager
  • model: helper builder classes to assist with creating complex objects, such as Filters.

74.1 Installation

The Python SDK needs to be installed in your Python development environment. It is available in a .whl ("wheel") package format, located in the /client/sdk directory of the Foretify Manager distribution package.

The wheel package is versioned: its filename (and internal metadata) encodes the Foretify Manager server version whose API it matches and the Python version it was built for: fmanager_python_sdk-[server_version]-[python_version]-none-any.whl (for example, fmanager_python_sdk-22.12.0.4-py3-none-any.whl).

For the user's convenience, a permanent symlink named fmanager_python_sdk-ver-py3-none-any.whl is placed in the same directory, pointing to the versioned file.

To install Foretify Manager’s Python SDK wheel into your environment, run:

pip install "$(readlink -f fmanager_python_sdk-ver-py3-none-any.whl)"

Note

Some SDK features require "extra" dependencies which are not installed by default, and need to be specified explicitly. These features are:

  • Uploading to AWS S3 cloud storage (s3)
  • Uploading to Azure cloud storage (azure)
  • Authenticating with Okta (okta)

These extras are specified in square brackets after the wheel filename when installing it.

For example, to install Foretify Manager’s Python SDK wheel with AWS S3 and Okta, run:

pip install "$(readlink -f fmanager_python_sdk-ver-py3-none-any.whl)[s3,okta]"

74.2 Authenticating and Managing Credentials

Logging into Foretify Manager requires calling the login API with username/password credentials or an access token.

74.2.1 Using Access Tokens

Credentials are frequently used by scripts to authenticate with the server. Instead of providing the actual username/password credentials, it is possible to generate a user-specific access token, which can only be used to authenticate with Foretify Manager (even if the credentials are managed by LDAP or some Single Sign-On service).

Access tokens can also be invalidated after they are created, either by setting an expiration period during their generation (unlimited by default), or by specifically requesting to invalidate them.

Multiple access tokens can be created for each user.

For detailed information about creating and invalidating access tokens, see Access Tokens API below.

74.2.2 Managing Credentials

Instead of typing a password when prompted, you can store the connection details (such as a password or access token) in a hidden file located in the $HOME directory, which should be protected from being read by other users on the machine (see step 3 below).

After creation of this file, Python scripts can be executed without prompting for the user's password.

Connection details can be added by using the add_credentials utility, or manually by the following steps:

  1. Create the file named $HOME/.ftxpass:

    vi $HOME/.ftxpass
    
    or
    gedit $HOME/.ftxpass
    

  2. The content of the file should be a JSON object with a field named credentials, containing a list of objects of the form {"host": string, "port": string, "https": boolean, "user": string, "password": string, "access_token": string} as shown in the following example:

    {
        "credentials": [
            {
                "host": "fmanager-prod.domain.com",
                "port": "8080",
                "user": "a_user",
                "password": "a_password"
            },
            {
                "host": "fmanager-prod.domain.com",
                "port": "8080",
                "user": "some_user",
                "access_token": "some_token"
            },
            {
                "host": "fmanager-qa.domain.com",
                "port": "443",
                "https": true,
                "user": "test_user",
                "access_token": "test_token"
            }
        ]
    }
    

    Notes

    • The list can contain multiple credentials for any hostname; the first entry from the top that matches the required hostname (and username, if provided) is used.
    • The https field is optional and defaults to false.
    • You must set either password or access_token; only one of them is required.
  3. Make sure this file is only readable by your OS user:

    chmod 600 $HOME/.ftxpass
    

  4. Upload run results from the $HOME/foretify/runs directory to the Foretify Manager server for analysis and visualization:

    • Without user authentication, i.e., no password required:

      upload_runs --runs_top_dir $HOME/foretify/runs
      
    • With user authentication:

      upload_runs --runs_top_dir $HOME/foretify/runs --user some_user
      

      If only a username was provided and no matching password is found in the .ftxpass file, the script prompts for the password.

    For more information on uploading runs, see upload_runs.
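The manual steps above can also be scripted. Below is a minimal sketch using only the standard library; `write_ftxpass` is a hypothetical helper name (not part of the SDK), and the field names follow the .ftxpass example above:

```python
import json
import os
import stat

def write_ftxpass(path, credentials):
    """Write a .ftxpass credentials file readable only by its owner.

    `credentials` is a list of dicts with the fields shown in the example
    above (host, port, https, user, and password or access_token).
    """
    with open(path, "w") as f:
        json.dump({"credentials": credentials}, f, indent=4)
    # Equivalent of `chmod 600`: owner read/write only.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```

A typical call would be write_ftxpass(os.path.expanduser("~/.ftxpass"), [...]) with the same credential objects as in the JSON example above.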

74.2.3 Session Management Utilities (Admin/Support Only)

invalidate_sessions - Command-line utility to invalidate user sessions

Important

This utility requires ADMIN, FMANAGER_ADMIN, or FMANAGER_SUPPORT roles. Regular users will receive a 403 Forbidden error.

This utility allows administrators and support staff to forcibly terminate user sessions and release associated licenses from the command line.

Usage:

# Invalidate all sessions for a specific user
invalidate_sessions --user admin@fmanager.com --target_user john.doe \
                   --reason "Security incident"

# Invalidate a specific session by ID
invalidate_sessions --user admin@fmanager.com --session-id A1B2C3D4E5F6 \
                   --reason "Stuck session cleanup"

# With HTTPS and custom port
invalidate_sessions --user admin@fmanager.com --hostname manager.company.com \
                   --port 8443 --https --target_user john.doe

Options:

Option Description Required
--user Admin or support username for authentication Yes
--hostname Foretify Manager server hostname (default: localhost) No
--port Server port (default: 8080) No
--https Use HTTPS for connection No
--session-id Specific session ID to invalidate Yes*
--target_user Target username to invalidate all sessions for Yes*
--reason Reason for invalidation (for audit trail) No

*Either --session-id OR --target_user must be provided (mutually exclusive)

Getting Session IDs:

Use the list_users utility (also admin/support only) to view active sessions:

list_users --user admin@fmanager.com

This displays all logged-in users with their session counts and license consumption.

74.3 Foretify Manager SDK Entities

The SDK provides several entities that represent aspects of a full verification environment.

Singular entities are Python dictionaries (dict), so specific attributes are accessed with square brackets. For example, the name of a Test Run Group element trg is given by trg['name'].

On the other hand, multiple entities (e.g. the result of page() or get() calls) are almost always a Pandas DataFrame, which is essentially a table of entities (and not a list). For example, on a given DataFrame of Test Runs df:

  • A collection of all the IDs is provided by df['id'].
  • The first Test Run (as a DataFrame with 1 row, not a dict) is provided by df[0:1].
  • The first two are provided by df[0:2].
  • mainIssueKinds of the first two runs are provided by df['mainIssueKind'][0:2].
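The access patterns above can be sketched with a mock DataFrame; the IDs, names, and issue kinds here are invented for illustration, not real run data:

```python
import pandas as pd

# Three mock Test Run entities, shaped like the page()/get() results described above.
df = pd.DataFrame([
    {"id": "r1", "name": "run_1", "mainIssueKind": "COLLISION"},
    {"id": "r2", "name": "run_2", "mainIssueKind": "TIMEOUT"},
    {"id": "r3", "name": "run_3", "mainIssueKind": "NONE"},
])

ids = df["id"]                            # a pandas Series of all IDs
first = df[0:1]                           # the first Test Run, as a 1-row DataFrame
first_two_kinds = df["mainIssueKind"][0:2]  # mainIssueKinds of the first two runs
```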

Entity Description
metric_model A representation of an OSC2 metric model (contains scenarios, metric groups, items and buckets)
vplan_template A representation of a user VPlan file (contains sections hierarchy and regex rules)
test_run_group A representation of a run set (for example nightly, run suite, test suite…)
test_run A representation of a single run
workspace A representation of user work/analysis environment
dispatcher A service responsible for running Foretify processes (jobs) at scale
test_run_group_definition A configuration used to execute a test suite via Dispatcher
dispatcher_environment_settings Part of the test_run_group_definition: environment-specific settings used for test-suite execution
project A representation of a Foretify Manager project, in which one can create workspaces, upload test suites, and manage permissions
attribute A representation of a Foretify Manager Test Run Attribute
test A representation of a Test entity (a managed test that can be added to suites and executed)
test_suite A representation of a collection of Tests, including per-test configuration
flow_definition A representation of a Flow Definition used to define extract, extract-and-run, or run-tests flows
flow_execution A representation of a Flow Execution instance created from a flow definition

74.3.1 Metric Model

A metric model is a representation of a metrics domain (e.g. coverage items and KPIs) defined for a given group of test runs. A metric model is defined using OpenScenario2.0 constructs (cover() or record()) as part of a scenario definition.

Attribute Description
id ID
createdAt Time Created
name Name
osUser User who executed the run
runDir The directory from which the metric model was uploaded
structs Dictionary of structs, which maps name to Struct
version Scheme version (currently always 1.0)

74.3.1.1 Struct model

Attribute Description
id Unique ID
name Name
groups Dictionary of nested groups, which maps name to Group
intervalModels List of interval models; each defines the structure of the intervals, see Interval Model

74.3.1.2 Interval Model

An Interval model is a detailed representation of the structure of the intervals in this TSR. Interval models can have multiple types that correspond to the interval types.

74.3.1.3 Watcher Model

Attribute Description
id Unique ID
watcherName Name
watcherType Type of the watcher model (CHECKER, WATCHER)
_type Type of the interval model, will always be WatcherIntervalModelDef

74.3.1.4 Group Model

Attribute Description
id Unique ID
name Name
items Dictionary of nested items, which maps name to Item
_type Type of the interval model, will always be GroupDef

74.3.1.5 Item

Attribute Description
id Unique ID
name Name
crossedItems List of the names of the crossed items that comprise this item (empty for a non-CROSS item)
buckets Dictionary of nested buckets, which maps name to Bucket
type (optional) Type of the item (INT64, UINT64, DOUBLE, STRING, ENUM, BOOL, CROSS)
target Hits target (i.e. the number of samples that must be collected for the bucket to be considered covered)
record A Boolean value, True if the item is a record (KPI) item
description (optional) Description for the item
ignoreStr (optional) A string valued predicate, which describes which nested buckets to ignore when calculating the coverage
minValue (optional) The minimal value a numeric item can obtain
maxValue (optional) The maximal value a numeric item can obtain
unit (optional) Physical units in which the item is measured

Note

The buckets field of CROSS items is potentially very large and thus will not be included when retrieving a workspace's VPlan (workspace.sections). To retrieve buckets in this case, a separate request needs to be made. See Buckets retrieval.

74.3.1.6 Bucket

Attribute Description
name Name
target Hits target (i.e. the number of samples that must be collected for the bucket to be considered covered)
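Putting the tables above together, the nesting struct → group → item → bucket can be navigated as plain dictionaries. The names and targets below are made up for illustration; only the field layout follows the Struct/Group/Item/Bucket tables above:

```python
# A mock metric_model fragment, shaped per the Struct/Group/Item/Bucket tables above.
metric_model = {
    "structs": {
        "car": {
            "groups": {
                "speed_cov": {
                    "items": {
                        "speed": {
                            "type": "UINT64",
                            "record": False,
                            "buckets": {
                                "low":  {"name": "low",  "target": 1},
                                "high": {"name": "high", "target": 5},
                            },
                        }
                    }
                }
            }
        }
    }
}

# Drill down from the model to one item, then read each bucket's hits target.
item = metric_model["structs"]["car"]["groups"]["speed_cov"]["items"]["speed"]
bucket_targets = {name: b["target"] for name, b in item["buckets"].items()}
```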

74.3.1.7 Scenario Model

Attribute Description
_type Type of the interval model, will always be ScenarioIntervalModelDef

74.3.1.8 Global Modifier Model

Attribute Description
_type Type of the interval model, will always be GlobalModifierIntervalModelDef

74.3.2 VPlan Template

A VPlan template is a verification plan that facilitates analysis by organizing metrics and run data into a hierarchical framework.

Attribute Description
id ID
name Name
createdAt Time created
description Description
sections List of top-level sections. A section can be of type Section, Checker or Reference
excludedVplanPaths List of node paths to be excluded when calculating coverage grades
requirementsProjectId Requirements Management Tool Project ID synced with the VPlan (see Integrate an RMT)
userName Creator's username
ownerId Creator's ID
filePath Path from which the template was uploaded
accessLevel EDITOR if the current user has editing privileges, VIEWER (read-only) otherwise
userPermissionLevel Current user's permission level on the VPlan template
parentVplanTemplateId ID of the parent VPlan template, if this is a vplan view
lastUpdatedAt The time of the last update
lastUpdatedByUserId The ID of the user who last updated the VPlan
lastUpdatedByUserName The username of the user who last updated the VPlan
vplanBaseName The name of the base VPlan
childrenCount Number of children in the VPlan
version The version of the VPlan

74.3.2.1 VPlan Template Section

A VPlan template Section is a node in the VPlan tree hierarchy, containing coverage metrics and possibly additional nested sections.

Attribute Description
name Name
sections List of nested sections. A section can be of type Section, Checker or Reference
items List of nested items represented in string as struct.group.item
weight Weight of the grade this section has in the total parent grade
source Path from which the template was uploaded
description Description
filters List of VPlan filters, each is an object comprised of item (as struct.group.item), value, op, kind, included
attributeValues List of VPlan attributes, each is an object comprised of name, value, url, propagation
aggregationType Type of the aggregation method the section will use to determine its grade from its children (NONE, AVERAGE, MIN, MAX, STANDARD_DEVIATION, VARIANCE)
requirementId Requirement Element ID synced with the section (see Integrate an RMT)
readOnly True if this section cannot be edited, False otherwise
gradeIfEmpty True if this section's grade will be evaluated in its parent's grade even if it is empty of hits, False otherwise
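Since each section nests further sections, the VPlan tree can be walked recursively. A minimal sketch that collects every item path; the section shapes follow the table above, but the names and data are invented:

```python
def collect_items(sections):
    """Recursively gather all 'struct.group.item' strings from a VPlan section tree."""
    items = []
    for section in sections:
        items.extend(section.get("items", []))
        items.extend(collect_items(section.get("sections", [])))
    return items

# A mock two-level VPlan hierarchy (names are illustrative only).
vplan_sections = [
    {"name": "Safety", "items": ["car.speed_cov.speed"], "sections": [
        {"name": "Braking", "items": ["car.brake_cov.pressure"], "sections": []},
    ]},
]
all_items = collect_items(vplan_sections)
```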

74.3.2.2 VPlan Template Checker

A VPlan template Checker is a node in the VPlan tree hierarchy, used to contain KPIs.

Attribute Description
name Name
sections Empty list of sections
items List of nested items represented in string as struct.group.item
weight Weight of the grade this section has in the total parent grade
source Path from which the template was uploaded
description Description
filters List of VPlan filters, each is an object comprised of item (as struct.group.item), value, op, kind, included
attributeValues List of VPlan attributes, each is an object comprised of name, value, url, propagation
aggregationType Type of the aggregation method the section will use to determine its grade from its children (NONE, AVERAGE, MIN, MAX, STANDARD_DEVIATION, VARIANCE)
requirementId Requirement Element ID synced with the checker section (see Integrate an RMT)
readOnly True if this section cannot be edited, False otherwise
gradeIfEmpty True if this section's grade will be evaluated in its parent's grade even if it is empty of hits, False otherwise

74.3.2.3 VPlan Template Reference

A VPlan template Reference is a node in the VPlan tree hierarchy, used to mirror (for the purpose of re-use) another section in its entirety (either from the same VPlan template or from a different one).

Attribute Description
name Name
vplanRefId ID
vplanRefPath Path of the referenced node in the referenced VPlan
validSource Status of the reference (True if valid, False otherwise)
validSourceDescription Status of the reference (VALID, NON_EXISTING_VPLAN, NON_EXISTING_PATH, CIRCULAR_REFERENCING)
referenceTo Type of node the reference is pointing to (SECTION, CHECKER or REFERENCE)
sections List of nested sections; a section can be of type Section, Checker or Reference
items List of nested items represented in string as struct.group.item
weight Weight of the grade this section has in the total parent grade
source Path from which the template was uploaded
description Description
filters List of VPlan filters, each is an object comprised of item (as struct.group.item), value, op, kind, included
attributeValues List of VPlan attributes, each is an object comprised of name, value, url, propagation
aggregationType Type of the aggregation method the section will use to determine its grade from its children (NONE, AVERAGE, MIN, MAX, STANDARD_DEVIATION, VARIANCE)
requirementId Requirement Element ID synced with the reference section (see Integrate an RMT)
readOnly True if this section cannot be edited, False otherwise
gradeIfEmpty True if this section's grade will be evaluated in its parent's grade even if it is empty of hits, False otherwise

74.3.3 Test Run Group

A Test Run Group, also referred to as Regression, is a representation of a group of Foretify runs, together with their corresponding metric models and some meta-data (as described below).

Attribute Description
id ID
name Name
createdAt Creation time
ownerId Creator's ID
userName Creator's username
passed Number of passed runs
failed Number of failed runs
errored Number of errored runs (dispatching error)
totalRuns Total number of runs, including ones which are pending execution by Dispatcher
metricModelIds List of the metric model IDs
totalDuration Time duration (in milliseconds) of all runs in the group
status Status of execution (LAUNCHING, PENDING, RUNNING, COMPLETED, STOPPING, STOPPED, LAUNCH_FAILED)
locked True if group is protected from automatic clean-up (see The Regression Table)
failedRuns List of issue counters encountered in the group (each is a dict of {"category", "kind", "count"})
testRunGroupDefinitionId Test Run Group Definition's ID, used to run this group (assigned by Dispatcher)
definitionOnLaunch Copy of the Test Run Group Definition used to run this group (assigned by Dispatcher)
launchJobId Dispatcher's Job ID, used to launch the group (assigned by Dispatcher)
failureData Data which indicates why the test run group failed to launch. Exists only for test run groups with status LAUNCH_FAILED (assigned by Dispatcher)
accessLevel EDITOR if the current user has editing privileges, VIEWER (read-only) otherwise
projectId The ID of the project that the test run group belongs to
labels List of labels associated with the group
totalDistanceTraveled Sum of the distanceTraveled run attribute for all runs in the group
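A derived statistic, such as a pass rate, can be computed directly from these counters. The field names follow the table above; the counts and the helper name are invented for this sketch:

```python
def pass_rate(trg):
    """Fraction of finished runs that passed; None if nothing has finished yet."""
    finished = trg["passed"] + trg["failed"] + trg["errored"]
    return trg["passed"] / finished if finished else None

# A mock Test Run Group: 100 runs finished, 20 still pending execution.
trg = {"name": "nightly", "passed": 90, "failed": 8, "errored": 2, "totalRuns": 120}
rate = pass_rate(trg)  # 90 / (90 + 8 + 2) = 0.9
```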

74.3.4 Test Run

A Test Run is a representation of a single Foretify run, saved in Foretify Manager.

Note

Attributes listed below marked with '+' in the "Detailed" column are not returned by a page() or get() request by default (only if detailed=True or include_trace=True). See Test Runs API for details.

Attribute Detailed Description
id Run ID generated by Foretify Manager
name Test Run name (not Test name)
foretifyRunUid + Run ID generated by Foretify
metricModelId Metric model ID
testRunGroupId Test Run Group (Regression) ID containing this run
testName Test name
testMap Test map
testFile Test (OSC) filename
testPlanIndex Test index generated by frun tool
compilerOutputId Checksum of the model received by the solver (variables, constraints, programs)
planId Checksum of the solver plan
accessLevel + EDITOR if the current user has editing privileges, VIEWER (read-only) otherwise
osUser User who executed the run
ownerId + User ID who uploaded the run
status Status
seed Simulation seed #
serialNumber + Serial number within the regression
runIndex + Index of the run within the crun group
logFiles + List of log files
tags List of tags applied to the run
bucketSummaries + List of metric buckets and hit counts, see Test Run Bucket Summary
groupSamples + List of trace information of metrics and their sampled values, see Test Run Group Sample
intervals + List of trace information of metrics and their interval values, see Test Run Interval
traceData + Trace data (requires include_trace=True)
issues + List of all issues, see Test Run Issue
hasRunDir + True if the run's directory is accessible
runDir Directory the run was uploaded from
mainIssueCategory Main issue's category
mainIssueKind Main issue's kind
mainIssueSeverity Main issue's severity
mainIssueOscSeverity Main issue's osc (original) severity (OSC_NA, OSC_IGNORE, OSC_INFO, OSC_WARNING, OSC_ERROR_CONTINUE, OSC_ERROR, OSC_ERROR_STOP_NOW)
mainIssueDetails + Main issue's details
mainIssueNormalizedDetails Main issue's normalized details
mainIssueFullMessage + Main issue's full message
mainIssueReason + Main issue's reason
mainIssueResult Main issue's result
mainIssueClusterString + Main issue's cluster
mainIssueTime + Main issue's time (in milliseconds)
verdictAnalysisComment + Comment provided during Verdict Analysis
hasVizData + True if the visualization data is accessible
hasRecordedVideo + True if run has a video file
recordedVideoFile + Video filepath
startTime + Time (in UTC) when the run started
endTime + Time (in UTC) when the run finished
startTimeFromLogStart + Duration (in milliseconds) from the start of the original (unsliced) run to this run's startTime (where applicable)
endTimeFromLogStart + Duration (in milliseconds) from the start of the original (unsliced) run to this run's endTime (where applicable)
simulationTime + Virtual time duration (in milliseconds) the run spent in the simulation
duration Time duration (in milliseconds) from startTime to endTime
loadDuration Time duration (in milliseconds) spent loading the run (only the first crun run will have a non-zero value)
runDuration Time duration (in milliseconds) spent executing the run (added to loadDuration will be equal to duration)
firstRunLoadDuration + Time duration (in milliseconds) spent loading the first crun run (equal for all runs)
topLevelScenarios Top scenarios
foretifyRunCommand + Foretify command-line to execute the run
version + Foretify version used to execute
reranJob + True if the run is a re-run of another
isRerunOf + ID of a Test Run this is a re-run of
rerunCount + Number of times the run was re-run
jobId + Dispatcher's job ID
jobDirectory Dispatcher's job directory
groupExists + True if the test run group exists
projectId ID of the project the run belongs to
comparedRunId + The previous corresponding run, in the previous test suite, computed by the triage comparison procedure (or set manually). Relevant only when fetching a test run in a workspace context.
comparisonStatus + The comparison status, computed by the triage comparison algorithm. Relevant only when fetching a test run in a workspace context. Possible values: NOT_COMPARED, NEW_RUN, MATCH, MISMATCH.
runType Type of the run set by the source (e.g. UNKNOWN, GENERATIVE, ROAD_DATA, SMART_REPLAY)
objectListPath Path of object list used in the run (only available for ROAD_DATA runs)
testRunGroupDefinition Test Run Group Definition (ID & Name) used to run this group (assigned by Dispatcher)
stepSizeMs Foretify step time in milliseconds
distanceTraveled Overall distance traveled by the Ego in meters

Note

The groupSamples attribute is deprecated and will be removed in a future Foretify Manager release; the intervals attribute contains all the coverage data of the run and should be used instead.
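Since every interval carries a `_type` field, a run's scenario intervals can be filtered out of the combined intervals list. A sketch on mock data, with field names per the interval tables that follow (the scenario names and durations are invented):

```python
def scenario_durations(intervals):
    """Total duration per scenario name over all ScenarioIntervalData intervals."""
    totals = {}
    for iv in intervals:
        if iv["_type"] == "ScenarioIntervalData":
            totals[iv["name"]] = totals.get(iv["name"], 0) + iv["duration"]
    return totals

# A mock mixed list, as might appear in a Test Run's `intervals` attribute.
intervals = [
    {"_type": "ScenarioIntervalData", "name": "cut_in",        "duration": 1200},
    {"_type": "CoverageIntervalData", "name": "car.speed_cov", "duration": 10},
    {"_type": "ScenarioIntervalData", "name": "cut_in",        "duration": 800},
]
totals = scenario_durations(intervals)
```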

74.3.4.1 Test Run Issue

A Test Run Issue is a representation of a single issue in a run.

Attribute Description
id Issue's ID
category Issue's category
kind Issue's kind
severity Issue's severity
osc_severity Issue's osc (original) severity (OSC_NA, OSC_IGNORE, OSC_INFO, OSC_WARNING, OSC_ERROR_CONTINUE, OSC_ERROR, OSC_ERROR_STOP_NOW)
details Issue's details
normalizedDetails Issue's normalized details
fullMessage Issue's full message
time Issue's time (in milliseconds)
modificationReason Issue's modification reason (if OpenScenario2 code was used for modifying the default severity of the issue)
result Issue's result

74.3.4.2 Test Run Bucket Summary

A Test Run Bucket Summary is a representation of a single cover/record (KPI) bucket hit (in case buckets for it are defined in OpenScenario2).

Attribute Description
qualifiedName Fully-qualified name of a bucket (prefixed with struct, group and item) that was hit
hits Number of times the bucket was hit
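Bucket summaries can be aggregated per item. The sketch below assumes the qualified name is dot-separated with the bucket as its last component (struct.group.item.bucket); that dotted layout is an assumption made for illustration, and the data is invented:

```python
def hits_per_item(bucket_summaries):
    """Sum hits per item, assuming qualifiedName is 'struct.group.item.bucket'
    (the dotted layout is an assumption made for this sketch)."""
    totals = {}
    for summary in bucket_summaries:
        item, _bucket = summary["qualifiedName"].rsplit(".", 1)
        totals[item] = totals.get(item, 0) + summary["hits"]
    return totals

# Mock bucketSummaries entries for one item with two buckets.
summaries = [
    {"qualifiedName": "car.speed_cov.speed.low",  "hits": 3},
    {"qualifiedName": "car.speed_cov.speed.high", "hits": 1},
]
totals = hits_per_item(summaries)
```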

74.3.4.3 Test Run Group Sample

A Test Run Group Sample is a detailed representation of the sampled items in a run.

Attribute Description
index Serial number for the Group Sample
qualifiedName Name of the struct and group that were sampled
items List of sampled items, each item consists of: name, valueStr, valueNum. valueStr is a string representation of the sampled value. For numeric values, valueNum will hold a numeric representation

74.3.4.4 Test Run Interval

A Test Run Interval is a detailed representation of the occurrences and events in a run. An interval has a start time and an end time and hence a duration. Intervals can have multiple types and relations between them.

74.3.4.4.1 Scenario Interval

A representation of a scenario execution during simulation

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run this interval was sampled in
parentId Identification number of the interval parent this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations these intervals hold with the other intervals; each relation consists of relatedIntervalId
name Name of the scenario which is represented by the interval
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
label Label of the scenario
_type Type of the interval, will always be ScenarioIntervalData
createdByUserName Name of the user who appended the interval, None if not user appended
createdByUserId Id of the user who appended the interval, None if not user appended
createdAt Creation date of the appended interval, None if not user appended
tags List of additional notes on the appended interval, None if not user appended
isUserAppended Boolean, specifies whether the interval was appended by a user or created by Foretify

74.3.4.4.2 Coverage Interval

A representation of a coverage sampling event during simulation

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run this interval was sampled in
parentId Identification number of the interval parent this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations these intervals hold with the other intervals; each relation consists of relatedIntervalId
name Name of the struct and group of the sample
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
items List of sampled items, each item consists of: name, bucket, value, valueNum. value is a string representation of the sampled value. For numeric values, valueNum will hold a numeric representation
_type Type of the interval, will always be CoverageIntervalData

74.3.4.4.3 Watcher Interval

A representation of a watcher sampling event during simulation

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run this interval was sampled in
parentId Identification number of the interval parent this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations these intervals hold with the other intervals; each relation consists of relatedIntervalId
name Name of the struct of the sample
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
watcherName The name of the watcher
label Full path of the watcher instance
issueId ID of the matching issue, if it exists
watcherType The type of the viewed watcher (WATCHER, CHECKER)
_type Type of interval, which will always be WatcherIntervalData
issueCategory Issue category
issueSeverity Issue severity
issueOscSeverity Issue's OSC (original) severity (OSC_NA, OSC_IGNORE, OSC_INFO, OSC_WARNING, OSC_ERROR_CONTINUE, OSC_ERROR, OSC_ERROR_STOP_NOW)
issueKind Kind of issue
issueDetails Issue details
issueTime Time of issue in milliseconds
issueFullMessage Issue's full message
issueNormalizedDetails Normalized details of the issue
issueModificationReason Issue's modification reason (if the default severity of the issue was modified)
issueResult Result of the issue
isIssueUserAdded True if the issue was manually added by the user

74.3.4.4.4 Matcher Interval

A representation of a scenario match sample detected by the Evaluation Pipeline.

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run this interval was sampled in
parentId Identification number of the interval parent this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations these intervals hold with the other intervals; each relation consists of relatedIntervalId
name Name of the scenario that is represented by the interval
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
label Label of the scenario
actorsAssignments List of actors' assignments, each actor assignment consists of: name and value
_type Type of the interval, will always be MatchIntervalData

74.3.4.4.5 Anomaly Interval

A representation of an anomaly interval detected by the Evaluation Pipeline.

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run this interval was sampled in
parentId Identification number of the interval parent this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations this interval holds with other intervals; each relation consists of relatedIntervalId
name Name of the anomaly that is represented by the interval
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
label Label of the anomaly
actorsAssignments List of actors' assignments, each actor assignment consists of: name and value
_type Type of the interval, will always be AnomalyIntervalData
74.3.4.4.6 Behavior Interval

A representation of a Behavior interval.

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run in which this interval was sampled
parentId Identification number of the parent interval this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations this interval holds with other intervals; each relation consists of relatedIntervalId
name Qualified name of the Behavior Interval that is represented by the interval
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
priority Priority of the interval
driverType Type of the driver in the interval
behaviorName Name of the interval
info Additional information on the interval
_type Type of the interval, will always be BehaviorIntervalData
74.3.4.4.7 Global Modifier Interval

A representation of a Global Modifier interval.

Attribute Description
id Unique Identification of the interval
foretifyIntervalId Identification number of the interval in the run
testRunId Identification of the test run in which this interval was sampled
parentId Identification number of the parent interval this interval is part of
startTime Start time of the interval
endTime End time of the interval
startTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's startTime (where applicable)
endTimeFromLogStart Duration (in milliseconds) from the start of the original (unsliced) run to this interval's endTime (where applicable)
actorId Identification number of the actor involved in the interval
relatedIntervals List of relations this interval holds with other intervals; each relation consists of relatedIntervalId
name Name of the Interval
ancestorScenarioNames All scenario names that the interval is included in
duration Duration of the interval
label Label of the global modifier
_type Type of the interval, will always be GlobalModifierIntervalData

74.3.5 Project

A Project is a representation of a V&V project in Foretify Manager. An admin can create projects and allow specific users and user groups to access them.

Attribute Description
id ID
createdAt Creation time
ownerId Creator's ID
name Project name
description Project description
defaultPermissionLevel The minimal access level that will be given to all Foretify Manager users on the project
userPermissionLevel Current user's permission level on the project
isDefault True if the project is the Foretify Manager legacy (auto generated) project
permittedUsersCount The number of users that were explicitly given permission to the project
testRunGroupsCount The number of test suites in the project

74.3.6 Attribute

Attribute is a representation of a test run field in Foretify Manager. Attributes can be created from Foretify and Foretify Manager.

Attribute Description
name Attribute internal unique name
displayName Attribute display name
description Attribute description
createdAt Creation time
readOnly True if the attribute is a built-in field which can't be edited
type Attribute value's type (STRING, LONG, FLOAT, LINK, ENUM)
possibleValues Possible values the attribute can be assigned from (when type is ENUM)
urlTemplate A string containing the placeholder {}. When a link attribute is set with some value, the value replaces the placeholder
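As an illustration, resolving a link attribute's urlTemplate amounts to a simple placeholder substitution. The template and value below are hypothetical, not real Foretify Manager data:

```python
# Hypothetical example of how a LINK attribute's urlTemplate is resolved:
# the {} placeholder in the template is replaced by the attribute's value.
url_template = "https://issues.example.com/browse/{}"  # illustrative template
value = "PROJ-123"                                     # illustrative attribute value
resolved = url_template.replace("{}", value)
# resolved is now "https://issues.example.com/browse/PROJ-123"
```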

74.3.7 Workspace

A Workspace is a representation of a Foretify Manager Workspace, which allows annotating a VPlan template with coverage grades in the context of one or more Test Run Groups.

Attribute Description
id ID
createdAt Creation time
ownerId Creator's ID
name Workspace name
description Workspace description
vplanId (Deprecated field)
vplanTemplateId ID of VPlan template
metricModelId ID of metric model
runsFilter Current runs filter
workspaceHistory (Deprecated field)
sections List of top-level sections
grade Calculated VGrade
grading Calculated VGrade for additional grading schemes (see next section)
failedRunsCount Failed runs count
passedRunsCount Passed runs count
erroredRunCount Errored runs count
totalRunsCount Total runs count
effectiveChildrenCount (Internal field)
excludedChildren (Internal field)
currentOperations List of current status and/or operations (e.g. NONE, ANNOTATE, NEED_TO_ANNOTATE, RANK, COMPARE, UPDATE_RUNS_ATTRIBUTES)
aggregations Workspace Aggregations
rankings Ranking metadata (see Rank)
effectiveWeightedChildCount (Internal field)
vplanName VPlan name
mappedMetrics (Deprecated field)
vplanAttributeValues (Internal field)
compareResults Workspace comparison results
updateCoveragePerRunsStatus Status of runs to be considered for coverage calculations (ALL_RUNS, ONLY_PASSED_RUNS, ONLY_FAILED_RUNS)
testRunGroups List of Test Run Groups in the workspace
runIssueCounters List of Test Run Issues Counters (each is a dict of {"status", "mainIssueKind", "mainIssueSeverity", "mainIssueCategory", "count"})
currentTimelinePointId (Internal field)
accessLevel EDITOR if the current user has editing privileges, VIEWER (read-only) otherwise
settings (Internal field)
projectId ID of the project that the workspace belongs to
metricPointId ID of the timeline point, from which the workspace takes metrics
metricModelSetMode How the metric point is updated. One of:
MANUAL: The metric point will change only by a manual update
SET_LATEST: The metric point will be automatically set to the latest captured test suite (captured by the workspace's capture rule)
userPermissionLevel Current user's permission level on the workspace

Note

The buckets field of CROSS items is potentially very large and thus will not be included when retrieving a workspace's VPlan (workspace.sections). To retrieve buckets in this case, a separate request needs to be made. See Buckets retrieval.

74.3.8 WorkspaceSection

A WorkspaceSection is a node in the workspace tree hierarchy, retrieved by applying a VPlan template section to some timeline point.

Attribute Description
name Name of the section
structs Dictionary of nested structs, mapping name to Struct
sections Dictionary of nested sections, mapping name to WorkspaceSection
grade Calculated coverage grade for this section
grading Grading breakdown for this section, see Workspace Grading
included Whether this section is included in grading calculations
isGraded Whether this section has been graded
effectiveChildrenCount Number of child elements (sections/structs) considered for grading
excludedStructs Set of struct names excluded from grading
excludedSections Set of section names excluded from grading
weight Weight of this section, used to calculate the parent node's grade
effectiveWeightedChildrenCount Sum of children's weights
filters List of filters applied to this section, see VplanTemplateInclusionFilter
unmatchedRules Whether there are unmatched rules in this section
description Optional description of the section
attributeValues List of custom attribute values, see VplanAttributeValue
requirementId Optional ID of a linked requirement
runIssueCounters List of issue counters/statistics, see RunIssueCount
vplanTemplateKind Type of section: SECTION, CHECKER, or REFERENCE

74.3.9 Struct

A Struct is a metric group or scenario grouping within a WorkspaceSection.

Attribute Description
name Name of the struct
groups Dictionary of groups, mapping name to Group
grade Calculated coverage grade for this struct
grading Grading breakdown for this struct, see Grading
included Whether this struct is included in grading calculations
graded Whether this struct has been graded
effectiveChildrenCount Number of child elements considered for grading
excludedChildren Set of child names excluded from grading
runIssueCounters List of issue counters or statistics, see RunIssueCount
weight Weight of this struct, used to calculate the parent node's grade

74.3.10 Group

A Group is a collection of items within a struct, used for organizing metrics.

Attribute Description
name Name of the group
items Dictionary of items, mapping name to Item
grade Calculated coverage grade for this group
grading Grading breakdown for this group, see Grading
included Whether this group is included in grading calculations
graded Whether this group has been graded
effectiveChildrenCount Number of child elements considered for grading
excludedChildren Set of child names excluded from grading
runIssueCounters List of issue counters or statistics, see RunIssueCount
weight Weight of this group, used to calculate the parent node's grade

74.3.11 Item

An Item is a metric or coverage point within a group.

Attribute Description
name Name of the item
buckets Dictionary of buckets, mapping name to Bucket
isCross Whether this item is a cross-coverage item
isRecord Whether this item is a record (KPI) item
grade Calculated coverage grade for this item
grading Grading breakdown for this item, see Grading
included Whether this item is included in grading calculations
graded Whether this item has been graded
effectiveChildrenCount Number of child elements considered for grading
description Optional description of the item
target Target value for the item
targetEditLevel Level from which the effective target was retrieved, see TargetEditLevel
aggregationType Aggregation type for the item, see NumericAggregationTypeE
ignoreStr String to ignore certain values
runIssueCounters List of issue counters or statistics, see RunIssueCount
minValue Optional minimum value for the item
maxValue Optional maximum value for the item
unit Optional unit for the item
crossedItems List of crossed item names
weight Weight of this item, used to calculate the parent node's grade

74.3.12 Bucket

A Bucket is a container for hit counts and targets within an item.

Attribute Description
name Name of the bucket
hits Number of hits recorded in this bucket
included Whether this bucket is included in grading calculations
target Target value for the bucket
targetEditLevel Level from which the effective target was retrieved, see TargetEditLevel

74.3.13 VplanTemplateInclusionFilter

A filter for including or excluding runs/items in a section based on attributes or criteria.

Attribute Description
item The item to filter on
value The value to compare against
op The comparison operator, see Conditions
kind The kind of filter (e.g., attribute, value)
included Whether the filter includes or excludes items

74.3.14 VplanAttributeValue

A custom attribute and its value assigned to a section.

Attribute Description
name Name of the attribute
value Value of the attribute
url Optional URL associated with the attribute
propagation Whether the attribute propagates to child elements

74.3.15 RunIssueCount

Tracks issue statistics (e.g., errors, warnings) for a section, struct, group, or item.

Attribute Description
status Status of the test run, see ITestRun.Status
mainIssueKind Main issue kind, if any
mainIssueSeverity Severity of the main issue, see Severity
mainIssueCategory Main issue category, if any
count Number of issues

74.3.16 NumericAggregationTypeE

Type of aggregation method for grading.

Value
NONE
AVERAGE
MIN
MAX
STANDARD_DEVIATION
VARIANCE

74.3.17 TargetEditLevel

Level at which a target was edited.

Value
DEFAULT
ITEM_OSC
ITEM
BUCKET_OSC
BUCKET

74.3.18 Severity

Severity of an issue.

Value
NONE
NA
IGNORE
INFO
WARNING
ERROR
ERROR_CONTINUE
ERROR_STOP_NOW

74.3.19 Status

Status of a test run.

Value
PASSED
FAILED
ERRORED
UNKNOWN

74.3.20 Workspace Grading

A workspace grading object represents the workspace's VGrade, for various grading schemes.

Attribute Description
averagingGrade The workspace's grade according to the weighted averaging grade scheme: Each node’s grade is computed as the weighted average of its child nodes’ grades, where each child’s weight determines its contribution. Buckets are graded 1 if their hits meet the target, and 0 otherwise. This corresponds to the standard VGrade workspace["grade"].

Formula:
grade(node) = (Σ(gradeᵢ × weightᵢ)) / (Σ(weightᵢ))
bucketGrade The workspace's grade, according to the bucket grade scheme: Each node's grade is computed as the proportion of filled buckets (buckets where hits meet or exceed the target), relative to the total number of buckets underneath it (its bucket predecessors). For the entire workspace, the grade is the proportion of filled buckets across all buckets in the VPlan.
progressiveBucketGrade The workspace's grade according to the progressive bucket grade scheme, which is similar to the bucket grade scheme, but takes into account partially filled buckets. Each node is graded based on the proportion of hits in its bucket predecessors relative to the total targets, ignoring any hits that exceed the target.
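The three grading schemes above can be illustrated with a small standalone computation. This is not SDK code; the functions and numbers are made up purely to demonstrate the arithmetic:

```python
# Weighted-average grade of a node from its children's (grade, weight) pairs,
# per the averaging scheme: grade(node) = sum(grade_i * weight_i) / sum(weight_i).
def averaging_grade(children):
    total_weight = sum(w for _, w in children)
    return sum(g * w for g, w in children) / total_weight

# A node with three children: grades 1.0, 0.5, 0.0 and weights 2, 1, 1.
grade = averaging_grade([(1.0, 2), (0.5, 1), (0.0, 1)])  # (2.0 + 0.5 + 0.0) / 4 = 0.625

# Bucket grade: proportion of buckets whose hits meet or exceed their target.
def bucket_grade(buckets):
    filled = sum(1 for hits, target in buckets if hits >= target)
    return filled / len(buckets)

# Progressive bucket grade: hits capped at the target, relative to total targets.
def progressive_bucket_grade(buckets):
    return sum(min(hits, target) for hits, target in buckets) / sum(t for _, t in buckets)

buckets = [(5, 5), (2, 10), (0, 5)]      # illustrative (hits, target) pairs
bg = bucket_grade(buckets)               # 1 of 3 buckets filled
pbg = progressive_bucket_grade(buckets)  # (5 + 2 + 0) / 20 = 0.35
```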

74.3.21 Timeline Point

A timeline point is a view of a test suite in the context of a workspace. It can be constructed from one or more raw test suites. A timeline point also contains coverage data information (such as VGrade) which is workspace-specific.

Attribute Description
id ID
grading A workspace grading object, which represents the grades for this point
passed Passed runs count
failed Failed runs count
errored Errored runs count
totalRuns Total runs count
testRunGroups A list of Test Run Group IDs, from which the point is constructed
testRunGroupDefinitions A list of Test Run Group Definitions (IDs and Names), used to run the included test suites (assigned by Dispatcher)
currentOperation The current operation of the timeline point (same operations as in workspace)
createdAt Time of creation
runsFilter The runs filter that describes which runs the point references
name A name for the point (taken from the relevant test suite)
previousComparePointId An ID of the point that will be compared to this point, in a triage context
previousComparePointName The name of the point that will be compared to this point, in a triage context
triageRulesLastCalculated The time when triage rules were last calculated for this point
uncalculatedReasons A list of reasons why this point’s grade is not aligned with the workspace. An empty list means the point has never been graded or its grade is already up to date. Possible reasons: NEWLY_CREATED, BASE_VPLAN_EDITED, BASE_VPLAN_REPLACED, VPLAN_VIEW_EDITED, VPLAN_VIEW_REPLACED, VPLAN_ATTRIBUTES_CHANGED, METRIC_MODEL_CHANGED, UPDATE_COVERAGE_PER_RUNS_STATUS_CHANGED, TRIAGED, MERGED_INTO, PUBLISHED
lastCalculatedVplanVersion The version of the VPlan used for the last grade calculation.
lastCalculatedAt The time of the last grade calculation.
lastAnnotatedVplanName The name of the VPlan used for the last annotation.
isCalculated A Boolean value indicating whether the timeline point's grade has been calculated.
applyTriageLastCalculated The time when the triage was last applied to this point.
mergedWithKeepOriginal A Boolean value indicating whether the original timeline points were kept after merging.
statusMap A map that shows the status of runs for the timeline point.
labels A list of labels associated with the timeline point.
failedRuns A list of failed runs in the timeline point.
pointOfCreation A Boolean value indicating whether this is the point of creation.

74.3.22 Triage Comparison Settings

The Triage comparison settings are an object attached to a workspace, describing how test suites are compared in the workspace, in a triage context.

Attribute Description
id ID
workspaceId ID of the workspace these settings belong to
correspondingTestMatchingRules A list of test run attributes. When two runs belonging to two consecutive test suites match for every attribute in this list, they are marked as corresponding test runs.
sameResultsMatchingRule When two corresponding runs (determined by correspondingTestMatchingRules) match for every attribute in this list, the latter's comparison status is marked as MATCH.

74.3.23 Triage View

Represents a view through which one can perform triage operations via their workspace. The view determines the way runs are clustered and shown in a triage context.

Attribute Description
id ID
name A name for the triage view.
workspaceId ID of the workspace the triage view belongs to
createdAt Time of creation
triageTableColumns An ordered list of TriageViewTableColumns, which determines which attributes/metric items are shown for each run in the triage screen.
aggregationFunctions A list of aggregation function names. Each function is computed for all clusters in the triage, and the results are presented in the triage screen. Possible values:
COUNT, PERCENT_OF_ALL, SUT_FAILED, SUT_FAILED_PERCENT, OTHER_FAILED, OTHER_FAILED_PERCENT,
PASSED, PASSED_PERCENT, COUNT_COMPARED, PERCENT_OF_ALL_COMPARED, SUT_FAILED_COMPARED,
SUT_FAILED_PERCENT_COMPARED, PASSED_COMPARED, PASSED_PERCENT_COMPARED, OTHER_FAILED_COMPARED, OTHER_FAILED_PERCENT_COMPARED
runsFilter A runs filter for the triage. Used as an extra filtering layer, in addition to the workspace's filter.
aggregationFields An ordered list of test run attribute names, which determines the structure of the aggregated tree shown in the triage screen. Test runs are clustered first by the first attribute in the list, then by the second, and so on.
isDefault A Boolean which states whether the view is the default triage view, which is global to the system.

74.3.24 Triage Rule

Represents a triage rule. A rule can be applied in the context of a triage in order to update all runs in a workspace that match some filter.

Attribute Description
id ID
name A name for the rule.
disabled When set to true, the apply rules action will skip the rule
attributes A list of workspace attributes (see below). These attributes will be set when the rule is applied.
conditions A test run filter. Any run matching this filter will be affected by applying the rule.
createdAt Time of creation
modifiedAt Time of the latest modification to the rule
lastModifiedBy Name of the user who last modified the rule

74.3.25 Compound Rule

A compound rule combines two sets of intervals using a temporal relation and an action. A compound rule can be applied in a workspace or globally to create a new interval based on a relationship between two sets of intervals.

Attribute Description
id Unique ID of the compound rule.
name Name of the compound rule.
compoundRuleDefinition Contains the rule's definition, including the temporal relation and action.
creationContext Defines whether the rule was created globally or within a specific workspace.
createdAt Timestamp of when the rule was created.
modifiedAt Timestamp of the last modification made to the rule.
lastModifiedBy Name of the user who last modified the rule.
createdBy Name of the user who created the rule.
disabled Whether the compound rule is disabled. Disabled rules are not executed when running compound intervals.
lastCalculatedBy Name of the user who last ran the compound rule.
lastCalculatedAt Timestamp of when the compound rule was last run.

74.3.26 Compound Rule Definition

This defines the core logic of the compound rule, which combines interval filters and applies a temporal relation and action to create a new interval.

Attribute Description
temporalRelationParams Specifies the relationship between the two sets of intervals, including filters, time relation, and custom time relation.
temporalAction Defines the action to take on the intervals, such as intersection or union.
resultingIntervalName The name of the resulting interval created by the rule.
metricGroupPrefix A prefix for metric groups in the resulting interval that are related to the first interval
corrMetricGroupPrefix A prefix for metric groups in the resulting interval that are related to the corresponding interval

Note

A compound rule definition object can be created using intervals.build_compound_rule_definition() (see here)

74.3.27 Temporal Relation Parameters

Defines the filters, time relation, and custom time relation used to specify the relationship between two sets of intervals.

Attribute Description
intervalFilters List of filters applied to the intervals in the compound rule.
timeRelation The time relation between the two sets of intervals, such as "ANY_INTERSECTION".
customTimeRelation An optional custom time relation, if applicable.
workspaceId ID of the workspace in which the rule is applied.

74.3.28 Custom Time Relation

Defines specific relationships between intervals, such as the relation between their start and end points. This enables more advanced control over how intervals interact. All times are given in milliseconds.

Attribute Description
startToStart The relationship between the start times of the two intervals.
startToEnd The relationship between the start time of one interval and the end time of the other.
endToStart The relationship between the end time of one interval and the start time of the other.
endToEnd The relationship between the end times of the two intervals.

74.3.29 Open Range

Defines a range with upper and lower bounds, used in the CustomTimeRelation to specify the valid time range for the interval relationship.

Attribute Description
upperBound The upper bound of the range, if specified.
lowerBound The lower bound of the range, if specified.

74.3.30 Time Relation Enum

Defines different types of temporal relationships between intervals, describing how they relate to one another in terms of time.

Value Description
ANY_INTERSECTION Indicates any intersection between intervals.
A_BEFORE_B Interval A occurs before interval B.
A_AFTER_B Interval A occurs after interval B.
A_CONTAINS_B Interval A contains interval B.
B_CONTAINS_A Interval B contains interval A.
CUSTOM A custom temporal relation defined by the user.

74.3.31 CompoundCreationContext Enum

Defines the context in which the compound rule is created, either globally or within a specific workspace.

Value Description
GLOBAL The compound rule is created in a global context.
WORKSPACE The compound rule is created within a specific workspace context.

74.3.32 Dispatcher service

Dispatcher is a service used to run Foretify at scale. It is responsible for managing test run group definitions and for using them to execute multiple test runs in a distributed environment.

74.3.33 Test Run Group Definition

A configuration used to execute a test suite via Dispatcher.

Attribute Description
id ID
name Unique name
frunFiles Paths of CSV/TXT files to be used by frun to run a regression
createdAt Time of creation
ownerId Creator's ID
ownerName Creator's Username
environmentSettingsId ID of associated environment settings (see next section)
userPermissionLevel Current user's permission level on the Test Run Group Definition

74.3.34 Dispatcher Environment Settings

A part of the test run group definition which contains the environment-specific settings for running Foretify (such as Docker images for Foretify/SUT, number of cores, and so on). The number of fields is large and can change between environments, so the configuration is a generic object which sits under the "settings" field.

Attribute Description
id ID
name A unique name for the settings
settings A JSON string, which describes the execution environment, in a format suitable for execution by dispatcher
ownerId Creator's ID
ownerName Creator's Name
userPermissionLevel Current user's permission level on the environment settings

74.3.35 Foretify Manager server exception

An exception that is raised when the Foretify Manager server encounters an error.

Attribute Description
status_code HTTP code of the server response
message Detailed explanation of the server exception
url The request URL on which the exception occurred
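A sketch of handling such an exception. The class below is a stand-in defined locally for illustration; the real exception class ships with the SDK (its exact import path and name are not documented here), but it exposes the same three attributes:

```python
# Stand-in for the SDK's server exception, defined locally for illustration only.
# The real class is provided by the SDK and carries these same attributes.
class ForetifyManagerServerException(Exception):
    def __init__(self, status_code, message, url):
        super().__init__(message)
        self.status_code = status_code  # HTTP code of the server response
        self.message = message          # detailed explanation of the server exception
        self.url = url                  # the request URL on which the exception occurred

def describe(exc):
    """Format an exception for logging: status, URL, and the server's message."""
    return f"{exc.status_code} at {exc.url}: {exc.message}"

# Typical handling: catch the exception, log it, then decide whether to retry.
try:
    raise ForetifyManagerServerException(404, "Test run not found", "/api/test-runs/42")
except ForetifyManagerServerException as e:
    report = describe(e)
```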

74.3.36 Session Invalidation Response

Response object returned from session invalidation operations (admin/support only).

Attribute Description
invalidatedCount Number of sessions successfully invalidated
invalidatedSessions List of InvalidatedSessionDetails objects (see below)
adminUsername Username of the admin/support user who performed the invalidation
message Result message (e.g., "Successfully invalidated 3 sessions")

74.3.37 Invalidated Session Details

Details of a single invalidated session.

Attribute Description
sessionId Session ID (truncated for security, e.g., "A1B2C3...")
username Username whose session was invalidated
hadLicense Whether the session held a license
sessionDuration How long the session was active (e.g., "2h 30m")
createdAt When the session was created
lastActive Last activity timestamp

74.4 Foretify Manager SDK APIs

74.4.1 Common Parameter Models

Some parameters appear in multiple endpoints of the API:

  • Filter is a multi-level set of logical conditions. It is usually easiest to generate by helper methods like Filter.all() or Filter.any(). See detailed information about Filters below.

  • Pagination is a dictionary object of the format {"size": [Integer], "page": [Integer]}, specifying the maximum number of entities expected in the result, along with the page number. For example, the first 50 results are returned by specifying pagination={"size": 50, "page": 0}, the next 50 by pagination={"size": 50, "page": 1}, and so on. As a best practice, requesting multiple entities from the server is always done in pages by calling page() methods, in order to use network resources efficiently. Helper get() methods, available in all of the clients (test_runs.get(), test_run_groups.get(), etc.), abstract the iteration over all pages and the concatenation of the results into a single call.

  • OrderBy is a dictionary object of the format {"properties": ["property1", "property2", ...], "direction": "DIRECTION"}. The properties field is a list of property names (one or more) to sort the results by. The direction field specifies whether the results should be ordered by ascending ("ASC") or descending ("DESC") order. For example, the results will be sorted by time of creation, starting from the newest, by specifying orderBy={"properties": ["created_at"], "direction": "DESC"}.
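The Pagination and OrderBy shapes above are plain dictionaries and can be sketched as follows. The helper functions here are illustrative conveniences, not part of the SDK; the commented client call assumes a page() signature as described in this chapter:

```python
# Illustrative helpers for building the Pagination and OrderBy dictionaries.
def pagination(size, page):
    """Build a Pagination dict: `size` results per page, zero-based `page` index."""
    return {"size": size, "page": page}

def order_by(properties, direction="DESC"):
    """Build an OrderBy dict sorting by `properties` in `direction` ("ASC"/"DESC")."""
    return {"properties": list(properties), "direction": direction}

first_page = pagination(50, 0)           # results 0-49
second_page = pagination(50, 1)          # results 50-99
newest_first = order_by(["created_at"])  # newest results first

# With a live connection, a paged query could then look like (assumed signature):
#   runs = test_runs.page(filter=my_filter, pagination=first_page, orderBy=newest_first)
```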

74.4.2 Filters

Filters are objects used when querying for any entity (e.g. when calling page() or get() APIs). The search module provides an API for creating filters from expressions.

A filter is always made up of one or more logical conditions, where all (AND) or any (OR) of the conditions need to be met.

In addition, a filter can be "inclusive" (an "in-filter": an entity must pass the filter to be included in the result) or "exclusive" (an "out-filter": an entity must not pass the filter to be included in the result).

74.4.2.1 Conditions

A condition can be created from a string specifying the entity's field to consider, a conditional operator, and a value to compare against.

Examples:

Python: Conditions
runs_condition_1 = Condition.from_term("testRunGroupId EQ " + trg_id)
runs_condition_2 = Condition.from_term("seed LTE 4")
trg_condition_1 = Condition.from_term("name MATCHES *nightly_regression*")
trg_condition_2 = Condition.from_term("passed GTE 1000")

The following condition operators are supported:

  • EQ (equals)
  • MATCHES (using wildcard pattern matching)
  • NOT_MATCHES (using wildcard pattern matching)
  • GT (greater than)
  • GTE (greater than or equals)
  • LT (less than)
  • LTE (less than or equals)
  • CONTAINS_ALL (contains all elements of a list)
  • CONTAINS_ANY (contains at least one element in a list)
  • CONTAINS_NONE (contains none of the elements in a list)
  • BUCKET_NAME_EQUALS (for ItemFilter)

Note

The MATCHES and NOT_MATCHES operators compare strings using Wildcard Pattern Matching (and not regular expressions), where the * character can match any sequence of characters (including the empty sequence).
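Python's standard fnmatch module implements the same '*' wildcard rule, so it can be used to preview locally which strings a MATCHES pattern would select. This is purely an illustration of the pattern semantics; the SDK evaluates patterns server-side, and fnmatch additionally treats ? and [...] specially, which the note above does not cover:

```python
# Preview of '*' wildcard semantics using the standard library.
# '*' matches any sequence of characters, including the empty sequence.
from fnmatch import fnmatch

hit = fnmatch("nightly_regression_2024_01", "*nightly_regression*")  # pattern matches
miss = fnmatch("smoke_test", "*nightly_regression*")                 # pattern does not match
empty_star = fnmatch("nightly_regression", "*nightly_regression*")   # '*' matching empty
```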

74.4.2.2 Filter, RunFilter, ItemFilter, IntervalFilter & AdvancedIntervalFilter

A filter can be created from one or more conditions by either the Filter.all() or Filter.any() builder methods, as described below:

Function Parameters Return Example Description
Filter.single a single condition Filter Filter.single("status EQ PASSED") Create a filter of a single condition
Filter.all comma-separated conditions Filter Filter.all("status EQ PASSED", "testName EQ test1") Create a filter of multiple conditions, all of which need to be met
Filter.any comma-separated conditions, any of which needs to be met Filter Filter.any("status EQ PASSED", "testName EQ test1") Create a filter of multiple conditions, at least one of which needs to be met

Any API which requires a filter will accept a basic filter created by Filter.all() or Filter.any() (which by default resolve to what is described below as a RunFilter). However, sometimes more complex filters are required. For example, it might be necessary to specify an exclusive filter (by setting include=False) or to create an ItemFilter to query only for runs which meet some metric-related attribute conditions:

Function Parameters Return Example Description
RunFilter.of filter: a Filter object, include: Boolean (default is True) RunFilter RunFilter.of(filter=Filter.all("status EQ PASSED", "testName EQ test1"), include=False) Create a runs filter
ItemFilter.of group_qualified_name: item-group name, filter: a Filter object, include: Boolean (default is True) ItemFilter ItemFilter.of(group_qualified_name="vehicle.drive.end", filter=Filter.single("other_vehicle_side_of_collision_while_self EQ front_right")) Create an item ("group sample") filter
IntervalFilter.of interval_name (optional): searching by specific interval name
filter (optional): a Filter object for filtering by interval attributes
interval_type (optional): filter by specific interval type (IntervalType)
include (optional): Boolean (default is True)
IntervalFilter IntervalFilter.of(interval_name="vehicle.drive.end", filter=Filter.single("startTime EQ 0"), interval_type=IntervalFilter.IntervalType.WatcherIntervalData ) Create an interval filter
AdvancedIntervalFilter.of first_level_type: filter by specific interval type (IntervalType)
first_level_name (optional): searching by specific interval name
children (optional): List of filter elements, to filter intervals by their child interval attributes
AdvancedIntervalFilter AdvancedIntervalFilter.of(first_level_type=IntervalFilter.IntervalType.GlobalModifierIntervalData, children=[ItemFilter.of("top.info.main_issue", filter=Filter.single("result eq Passed"))]) Create a relation-based interval filter

74.4.2.3 FilterContainer

A FilterContainer can be created from one or more Filters (interpreted as inclusive RunFilters), RunFilters, ItemFilters or IntervalFilters interchangeably.

Function Parameters Return Example Description
FilterContainer.of filter_elements: one or more filter elements, comma-separated FilterContainer FilterContainer.of(runFilter, runFilter2, itemFilter) Create a filter container

Note: All page() and get() APIs described below expect either a single RunFilter (or the simpler Filter), or a FilterContainer which contains only RunFilters.

Note: ItemFilter and IntervalFilter should only be contained in a Workspace's runsFilter attribute. In all other cases (i.e. outside the context of a Workspace), including an ItemFilter or an IntervalFilter inside a FilterContainer passed to page() or get() calls might return an empty result.
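For instance, a container that is valid for page() and get() calls combines RunFilters only. A minimal sketch (the conditions are illustrative, and host, user, port and https are placeholders as in the examples below):

```python
from ftx.model.search import Filter, RunFilter, FilterContainer
from ftx.shell import client, test_runs

client.login(host=host, user=user, port=port, https=https)

# Every element is a RunFilter, so this container may be passed to page()/get()
runs_filter = FilterContainer.of(
    RunFilter.of(Filter.all("status EQ PASSED")),
    RunFilter.of(Filter.single("simulationTime GTE 5000"), include=False),
)
runs = test_runs.page(filt=runs_filter, pagination={"page": 0, "size": 50})
```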

Example:

Python: Creating a complex FilterContainer
from ftx.model.search import Filter, RunFilter, ItemFilter, FilterContainer, Condition, LogicalOperator, IntervalFilter
from ftx.shell import client

client.login(host=host, user=user, port=port, https=https)

condition = Condition.from_term("passed GTE 1000")
ws_filter = FilterContainer.of(
  RunFilter.of(Filter.any("testRunGroupId EQ " + trg1_id, "testRunGroupId EQ " + trg2_id), include=True),
  RunFilter.of(Filter.single("testName EQ MyInterestingTest"), include=True),
  RunFilter.of(Filter.any("simulationTime GTE 5000", "testMap everywhereButThere"), include=False),
  RunFilter.of({"logicalOperator": LogicalOperator.AND, "conditions": [condition]}),
  ItemFilter.of(group_qualified_name="vehicle.drive.end", 
    filter=Filter.any(
      "other_vehicle_side_of_collision_while_self BUCKET_NAME_EQUALS front_right",
      "other_vehicle_side_of_collision_while_self BUCKET_NAME_EQUALS back_left"
    )
  ),
  IntervalFilter.of(
    interval_name="vehicle.drive.end", 
    interval_type=IntervalFilter.IntervalType.CoverageIntervalData, 
    filter=Filter.single("duration eq 1000")
  )
)

74.4.3 Client API

Provides methods to log in to and log out from a Foretify Manager server.

Function Parameters Return Example
login host (str): Foretify Manager server hostname
user (str)
password (str)
port (int): Foretify Manager server port number
https (bool): Whether to use HTTPS for the connection
access_token (str): JWT token (instead of username/password)
The logged-in username client.login(host="hostname", user="username", password="password", port=8080, https=False, access_token="token")
client.login(host="host", user="username") # password retrieved from .ftxpass if available
logout client.logout()
change_password new_password (optional, will prompt if not passed) client.change_password()
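For example, a minimal session (host, user and the other connection values are placeholders, as in the examples throughout this chapter):

```python
from ftx.shell import client

# Log in with explicit credentials; if password is not passed, it may be
# retrieved from .ftxpass when available (see the table above)
client.login(host=host, user=user, password=password, port=port, https=https)

# ... interact with Foretify Manager ...

client.logout()
```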

74.4.3.1 Access Tokens API

Access Tokens can be created by users to be used later for authentication, as an alternative to user/password credentials.

Function Parameters Return Example Description
generate_token time_to_expiration_in_seconds (optional): expiration period in seconds (default is unlimited)
alias: token alias (will be automatically generated if not provided)
dict containing jwtToken client.generate_token(300) Generate an access token
login access_token username client.login(access_token="example_token") Authenticate using a token
get_active_access_tokens None See description client.get_active_access_tokens() Retrieve a list of all active tokens (by alias) for this user, with their expiration timestamps
invalidate_token token: token to invalidate (string) None client.invalidate_token("example_token") Invalidate a token
invalidate_token_by_alias alias: alias of token to invalidate (string) None client.invalidate_token_by_alias("token alias") Invalidate a token by its alias

Example:

Python: Generate, use and invalidate an access token
from ftx.shell import client

client.login(host=host, user=user, port=port, https=https)

# Generate a non-expiring token
token = client.generate_token(alias="my 1st token")["jwtToken"]

# Logout from the current session, before logging in with the token
client.logout()

# Start a new session using the token
client.login(host=host, access_token=token)

# Generate a second token
token = client.generate_token(alias="my 2nd token")["jwtToken"]

# Invalidate the second token; the current session remains active
client.invalidate_token(token)

# Logout from session
client.logout()

# Running this expression will result in an authentication error, since the token was invalidated
# client.login(host=host, access_token=token)

74.4.4 Users API

Provides methods for user management and session administration.

74.4.4.1 Session Management (Admin/Support Only)

IMPORTANT: The following functions require ADMIN, FMANAGER_ADMIN, or FMANAGER_SUPPORT roles.

Function Parameters Return Example Description
get_logged_in_users None list(dict): List of dictionaries containing userName, loggedInCount, and consumedLicensesCount users.get_logged_in_users() Admin/Support only: Retrieve statistics about currently logged-in users and their consumed licenses
invalidate_sessions session_id (str, optional): Specific session ID to invalidate
username (str, optional): Username to invalidate all sessions for
reason (str, optional): Reason for invalidation (for audit trail)

Note: Either session_id OR username must be provided (mutually exclusive)
dict: SessionInvalidationResponse with invalidatedCount, invalidatedSessions, adminUsername, and message users.invalidate_sessions(username="john.doe", reason="Security incident") Admin/Support only: Forcibly terminate user session(s) and release associated licenses. Useful for troubleshooting, security incidents, or license management.

Example:

Python: List logged-in users and invalidate sessions (Admin/Support only)
from ftx.shell import client
from ftx.shell import users

# Login as admin or support user
client.login(host="localhost", user="admin@fmanager.com", port=8080)

# Get list of logged-in users
logged_users = users.get_logged_in_users()
for user_info in logged_users:
    print(f"User: {user_info['userName']}, "
          f"Sessions: {user_info['loggedInCount']}, "
          f"Licenses: {user_info['consumedLicensesCount']}")

# Invalidate all sessions for a specific user
response = users.invalidate_sessions(
    username="john.doe",
    reason="Security incident - suspected compromised account"
)

print(f"Invalidated {response['invalidatedCount']} session(s)")
for session in response['invalidatedSessions']:
    print(f"  - Session {session['sessionId']}: {session['sessionDuration']}, "
          f"License: {session['hadLicense']}")

# Invalidate a specific session by ID
# (Get session ID from get_logged_in_users response)
response = users.invalidate_sessions(
    session_id="A1B2C3D4E5F6",
    reason="Stuck session cleanup"
)

client.logout()

74.4.5 Metric Models API

Provides methods for uploading metric models from disk to the Foretify Manager server and for metric models analysis.

Function Parameters Return Example Description
collect file_path: OS path to metric model file metric_model model = metric_models.collect(".../demo/model.json") Load and create a metric model from file
collect_dict dict: A dictionary with fields name, osUser, runDir, structs metric_model metric_models.collect_dict({'name': 'admin_run_group_11-14-2022-13:35:12_model', 'osUser': '', 'runDir': '', 'version': '1.1', 'structs': [{'name': 'environment.timing', 'groups': [{'name': 'end', 'items': [{'name': 'specific_time', 'unit': 'hr', 'description': 'Specific time at the end of the scenario', 'type': 'DOUBLE', 'record': 'true', 'buckets': []}, {'name': 'time_of_day', 'description': 'Time of the day', 'type': 'ENUM', 'record': 'true', 'buckets': [{'name': 'undefined_time_of_day', 'target': 1}, {'name': 'sunrise', 'target': 1}]}]}]}]}) Create a metric model from a python dict object
collect_from_paths file_paths: OS paths to the run_data.pb.gz files, tsr_id: the Test Suite Result (TSR) to attach the metric models to, skip_schema_version_validation: a flag to allow metric models from past Foretify versions (default False) None metric_models.collect_from_paths(paths=["/path1/", "/path2/"], tsr_id=tsr["id"]) Append metric models to an existing TSR
create name, os_user, runDir, structs (all parameters are optional, and set to "" if not given) metric_model metric_models.create(name = "my_model", os_user = "me") Create a metric model (can be empty to be updated later)
get_by_id metric_model_id metric_model metric_model = metric_models.get_by_id(ID) Get a metric model by its ID
page filt, pagination, orderBy (see Common Parameter Models for details) a Pandas DataFrame of a page of metric model entities data_frame = metric_models.page(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder) Retrieve a page of metric models (by Filter)
get filt, orderBy (see Common Parameter Models for details) a Pandas DataFrame of all metric model entities matching the filter df = metric_models.get(filt=someFilter, orderBy=someOrder) Retrieve metric models (by Filter)
update metric_model_id, update_metric_model_request: A dictionary with parameters: name,osUser, runDir, structs metric_model my_model = metric_models.get_by_id("1234")
new_group = {'name' : 'my_group', 'items' : []}
new_struct = {'name':'new_struct','groups': [new_group]}
metric_models.update(metric_model_id="1234", update_metric_model_request={"name": "new_name", "osUser": my_model["osUser"], "runDir": my_model["runDir"], "structs": [new_struct]})
Update a metric model. Only name and structs can actually be updated; the other fields are ignored.
copy_metric_group_def id:metric model id
copy_from_struct: struct name to copy from
copy_to_struct: struct name to copy to (created automatically if doesn't exist)
copy_from_group: group name to copy from
copy_to_group: new group definition to be created, which imitates the original group (in terms of its items)
metric model metric_models.copy_metric_group_def(id=workspace["metricModelId"], copy_from_struct=struct_name, copy_to_struct=new_struct_name, copy_from_group=group_name, copy_to_group=new_group_name) Copy a metric group definition into a new struct and group, along with its entire list of items and buckets

Examples:

Python: Retrieve a workspace's metric model
from ftx.shell import client, metric_models, workspaces

client.login(host=host, user=user, port=port, https=https)

workspace = workspaces.get_by_id(ws_id)
model = metric_models.get_by_id(workspace["metricModelId"])
Python: Construct a list of all buckets under workspace's metric model
from ftx.shell import client, metric_models, workspaces

client.login(host=host, user=user, port=port, https=https)

workspace = workspaces.get_by_id(ws_id)
model = metric_models.get_by_id(workspace["metricModelId"])
buckets = list()
for struct in list(model["structs"].values()):
  for group in list(struct["groups"].values()):
    for item in list(group["items"].values()):
      for bucket in list(item["buckets"].values()):
        buckets.append(bucket)
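The same kind of traversal can be written over a plain dictionary in the collect_dict layout (where structs, groups, items and buckets are lists rather than dicts). A self-contained sketch, using a small hypothetical model:

```python
# A minimal metric model in the collect_dict layout (lists of structs,
# groups, items and buckets). The names and buckets below are hypothetical.
model = {
    "name": "example_model",
    "structs": [
        {
            "name": "environment.timing",
            "groups": [
                {
                    "name": "end",
                    "items": [
                        {"name": "time_of_day", "type": "ENUM",
                         "buckets": [{"name": "sunrise", "target": 1},
                                     {"name": "sunset", "target": 1}]},
                    ],
                }
            ],
        }
    ],
}

# Flatten all buckets, keeping the full struct/group/item path of each
buckets = [
    (struct["name"], group["name"], item["name"], bucket["name"])
    for struct in model["structs"]
    for group in struct["groups"]
    for item in group["items"]
    for bucket in item["buckets"]
]
```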

74.4.6 VPlan Templates API

Provides methods for uploading VPlan template files to Foretify Manager server and for their analysis.

Function Parameters Return Example Description
collect file_path: OS path to VPlan template file VPlan template create object vplan = vplan_templates.collect(".../demo.vplan") Collect a VPlan Template JSON file from a specified path
create create_request: VPlan template create object (e.g. result of collect()), see notes below VPlan template vplan_templates.create(vplan) Create a VPlan template
get_by_id vplan_template_id VPlan template vplan_template = vplan_templates.get_by_id(ID) Retrieve a VPlan template by its ID
page filt, pagination, orderBy (see Common Parameter Models for details) A Pandas DataFrame of a page of VPlan template entities df = vplan_templates.page(filt=someFilter, pagination={"page":0,"size":50}, orderBy=someOrder) Retrieve a page of VPlan templates (by Filter)
get filt, orderBy (see Common Parameter Models for details) A Pandas DataFrame of all VPlan template entities matching the filter df = vplan_templates.get(filt=someFilter, orderBy=someOrder) Retrieve VPlan templates (by Filter)
delete vplan_template_id vplan_templates.delete(ID) Delete a VPlan template
add_empty_section vplan_template_id: str
parent_section_path: List[str]
new_section_name: str
Updated VPlan template vplan_templates.add_empty_section(ID,["ParentSection"],"NewSection") Add an empty section to a VPlan template under a specified parent section
add_new_section vplan_template_id: str
parent_section_path: List[str]
new_section: dict (see VPlan Template Section)
Updated VPlan template vplan_templates.add_new_section(ID,["Parent","Section","Path"],new_section) Add a new section (with content) to a VPlan template under a specified parent section.
To add a new reference, include "_type": "VplanTemplateReferenceRequest" in the new section; to add a new checker, include "_type": "VplanTemplateCheckerRequest"
delete_section vplan_template_id: str
section_path: List[str]
vplan_templates.delete_section(ID,["Section","Path"]) Delete a section from a VPlan template by its path
update_section vplan_template_id: str
section_path: List[str]
updated_section: dict (see VPlan Template Section)
Updated VPlan template vplan_templates.update_section(ID,["Section","Path"],updated_section) Update a section in a VPlan template by its path

Notes:

VPlan Template Create Request is a python dictionary of the following structure:

{
  "name": "String",
  "sections": [],
  "description": "String",
  "requirementsProjectId": "String",
  "source": "String",
  "workspaceId": "String",
  "excludedVplanPaths": [],
}

All the fields except name and sections are optional.

It is advised not to manually write (or code) a VPlan template. Instead, use the VPlan Editor of the Foretify Manager web-client, and download it as a JSON file if needed. See Creating VPlans for more details.

Examples:

Python: Create a VPlan from file
from ftx.shell import client, vplan_templates

client.login(host=host, user=user, port=port, https=https)

vplan_template = vplan_templates.collect("./demo/demo.vplan")
vplan_template = vplan_templates.create(vplan_template) 
Python: Add, edit and delete Regular and Reference Sections in a VPlan
from argparse import ArgumentParser
from ftx._common.utils import add_common_arguments
from ftx.shell import client, vplan_templates

def parse():
    parser = ArgumentParser(description="Arguments for add/edit VPlan sections example")
    add_common_arguments(parser)
    return parser.parse_args()

args = parse()
globals().update(vars(args))

client.login(host=host, user=user, port=port, https=https)

vplan_template = vplan_templates.create(vplan_templates.collect("./demo/demo.vplan"))
vplan_id = vplan_template["id"]


from ftx.shell import client, vplan_templates

client.login(host=host, user=user, port=port, https=https)

vplan_template = vplan_templates.get_by_id(vplan_id)

# Add a new regular section under 'top' section
new_section = {
    "name": "new_section",
    "description": "A new section under 'top' section",
    "weight": 5
}
vplan_template = vplan_templates.add_new_section(
    vplan_template_id=vplan_id,
    parent_section_path=["top"],
    new_section=new_section
)
                                              # top -> new_section
section_to_update = vplan_template["sections"][0]["sections"][1]
section_to_update["name"] = "new_section_name"
section_to_update["description"] = "Updated description for 'new_section'"
section_to_update["weight"] = 10

vplan_template = vplan_templates.update_section(
    vplan_template_id=vplan_id,
    section_path=["top", "new_section"],
    updated_section=section_to_update
)

# Add new reference section
new_reference_section = {
    "_type": "VplanTemplateReferenceRequest",
    "name": "reference_section",
    "vplanRefId": vplan_id,
    "vplanRefPath": ["top", "drive_section"]
}

vplan_template = vplan_templates.add_new_section(
    vplan_template_id=vplan_id,
    parent_section_path=["top"],
    new_section=new_reference_section
)

# Edit the reference section
                                              # top -> reference_section
section_to_update = vplan_template["sections"][0]["sections"][2]
section_to_update["name"] = "reference_section_updated"
section_to_update["vplanRefPath"] = ["top", "new_section_name"]
# can also change section_to_update["vplanRefId"] to refer to a section from a different vplan template

vplan_template = vplan_templates.update_section(
    vplan_template_id=vplan_id,
    section_path=["top", "reference_section"],
    updated_section=section_to_update
)

# Delete the newly added section
vplan_template = vplan_templates.delete_section(
    vplan_template_id=vplan_id,
    section_path=["top", "new_section_name"]
)

# Delete the newly added reference section
vplan_template = vplan_templates.delete_section(
    vplan_template_id=vplan_id,
    section_path=["top", "reference_section_updated"]
)
Python: Retrieving a VPlan template by ID
from ftx.shell import client, vplan_templates

client.login(host=host, user=user, port=port, https=https)

vplan = vplan_templates.get_by_id(vplan_template_id=vplan_id)
Python: Retrieving a page of VPlan templates
from ftx.shell import client, vplan_templates
from ftx.model.search import Filter 

client.login(host=host, user=user, port=port, https=https)

filt = Filter.all("userName EQ example_user_name")
pagination = {"page": 0, "size": 200}

vplans = vplan_templates.page(filt=filt, pagination=pagination)
Python: Deleting a VPlan template by ID
from ftx.shell import client, vplan_templates

client.login(host=host, user=user, port=port, https=https)

vplan_templates.delete(vplan_template_id=vplan_id)

74.4.7 Test Run Groups API

Provides methods for creating and analyzing a test run group (also known as Test Suite Result or TSR).

Function Parameters Return Example Description
create name: A TSR name
metricModel (optional): The metric model ID for the group
project_id: The project to include the TSR
labels: List of labels
status (optional): Initial group status
total_runs (optional): Expected number of runs in the group
test_run_group object test_run_group = test_run_groups.create(name="test suite", project_id=proj["id"], status="RUNNING", total_runs=100) Create a new TSR
get_by_id test_run_group_id test_run_group test_run_group = test_run_groups.get_by_id("id") Get a TSR by ID
page filt, pagination, orderBy (see Common Parameter Models for details), project_id (optional) a Pandas DataFrame of a page of TSRs df = test_run_groups.page(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder) Retrieve a page of TSRs (by Filter)
get filt, orderBy (see Common Parameter Models for details), project_id (optional) a Pandas DataFrame of all TSRs matching a filter df = test_run_groups.get(filt=someFilter, orderBy=someOrder) Retrieve TSRs (by Filter)
update test_run_group_id
name (optional): TSR name to set
metric_model_id (optional): Metric model ID to set
labels: List of labels to replace the existing labels
failure_message (optional): Failure message to set (provided by Dispatcher)
detailed (optional): Detailed failure message to set (provided by Dispatcher)
frun_log (optional): The frun log which explains why the regression failed to launch (provided by Dispatcher)
frun_info_path (optional): Path to the frun information (provided by Dispatcher)
status (optional): New group status
test_run_group test_run_group = test_run_groups.update("id", name="my group", status="COMPLETED") Update a TSR
lock test_run_group_ids: ID or list of IDs test_run_groups.lock("id") Lock TSR(s) from automatic deletion ("clean-up")
unlock test_run_group_ids: ID or list of IDs test_run_groups.unlock("id") Unlock TSR(s) (see lock() above)
delete test_run_group_id test_run_groups.delete("id") Delete a TSR
complete_group test_run_group_id: ID or list of IDs test_run_groups.complete_group("id")
test_run_groups.complete_groups(["id1", "id2"])
Marks a TSR as COMPLETED, indicating that all its runs have finished uploading
get_frun_info_json test_run_group_id: TSR ID frun_info dictionary frun_info = test_run_groups.get_frun_info_json("some_id") For TSRs executed via Dispatcher, a JSON file named frun_info.json is generated, containing data about the frun execution used to create the TSR; this function retrieves it

Examples:

Python: Retrieving a TSR by ID
from ftx.shell import client, test_run_groups

client.login(host=host, user=user, port=port, https=https)

group = test_run_groups.get_by_id(test_run_group_id=trg_id)
Python: Retrieving a page of completed TSRs
from ftx.shell import client, test_run_groups
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

filt = Filter.all("status EQ COMPLETED")
pagination = {"page": 0, "size": 200}

groups = test_run_groups.page(filt=filt, pagination=pagination)
Python: Retrieving all stopped TSRs
from ftx.shell import client, test_run_groups
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

filt = Filter.all("status EQ STOPPED")

groups = test_run_groups.page(filt=filt)
Python: Updating and querying by labels
from ftx.shell import client, test_run_groups
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

trg = test_run_groups.get_by_id(trg_id)
labels = trg['labels']
my_new_label = 'my_new_label'
new_labels = labels + [my_new_label] if isinstance(labels, list) else [my_new_label]

trg_after = test_run_groups.update(test_run_group_id=trg_id, labels=new_labels)

filt = Filter.all(f"labels CONTAINS_ANY {my_new_label}")

groups = test_run_groups.page(filt=filt)
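Putting the TSR lifecycle together, the following sketch (proj_id is a placeholder, and the run-upload step is elided) creates a TSR, locks it against automatic clean-up, and marks it complete:

```python
from ftx.shell import client, test_run_groups

client.login(host=host, user=user, port=port, https=https)

# Create a TSR that expects 100 runs
trg = test_run_groups.create(name="nightly suite", project_id=proj_id,
                             status="RUNNING", total_runs=100)

# Protect the TSR from automatic clean-up while runs are being uploaded
test_run_groups.lock(trg["id"])

# ... upload runs (see the Test Runs API) ...

# Mark the TSR as COMPLETED once all runs have finished uploading
test_run_groups.complete_group(trg["id"])

client.logout()
```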

74.4.8 Test Runs API

Provides methods for uploading runs to the Foretify Manager server and for analyzing them.

Function Parameters Return Example Description
collect_from_paths test_run_group: TSR object to insert the test runs into
paths: List of paths from which to collect test runs
model: Model information. Defaults to None.
preserve_paths: Flag indicating whether to preserve original run directories during upload. Defaults to False.
s3_connection_details: Connection details for S3 upload. Defaults to None.
azure_credentials: Azure storage credentials for BlobStorage upload. Defaults to None.
skip_schema_version_validation: boolean (see below)
None test_runs.collect_from_paths(trg, runs_top_dirs, model, preserve_paths, s3_connection_details, azure_credentials, skip_schema_version_validation) Collect test runs from multiple paths in the file system and upload them to the Foretify Manager server.
get_by_id test_run_id
include_trace: when True, the returned run will include trace data (default is False)
include_intervals: when True, the returned run will include intervals (default is True)
test_run test_run = test_runs.get_by_id(ID, True) Retrieve a specific test run
page filt, pagination, orderBy (see Common Parameter Models for details)
detailed: see notes below (default is False)
include_trace: see notes below (default is False)
project_id (optional)
interval_filter (optional): see notes below
Pandas DataFrame of test run entities for the requested filter and page df = test_runs.page(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder, detailed=True, include_trace=True, interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in"))) Retrieve a page of test runs (by RunFilter)
get filt, orderBy (see Common Parameter Models for details)
detailed: see notes below (default is False)
include_trace: see notes below (default is False)
project_id (optional)
interval_filter (optional): see notes below
Pandas DataFrame of test run entities for the requested filter df = test_runs.get(filt=someFilter, orderBy=someOrder, detailed=True, include_trace=True, interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in"))) Retrieve test runs (by RunFilter)
page_by_workspace_test_run_filter workspace_id
pagination (see Common Parameter Models for details)
detailed: see notes below (default is False)
include_trace: see notes below (default is False)
interval_filter (optional): see notes below
Pandas DataFrame of test-run entities for the requested workspace and page test_runs.page_by_workspace_test_run_filter(workspace["id"], {"page": 0, "size": 1000}, True, True, interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in"))) Retrieve a page of test runs in a workspace
get_by_workspace_test_run_filter workspace_id
detailed: see notes below (default is False)
include_trace: see notes below (default is False)
interval_filter (optional): see notes below
Pandas DataFrame of all test-run entities for the requested workspace test_runs.get_by_workspace_test_run_filter(workspace["id"], True, True, interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in"))) Retrieve all test runs in a workspace
append_issues_by_id test_run_id
issues (list of Issues)
main_issue (Issue)
test_runs.append_issues_by_id(test_run_id=test_run["id"], issues=[test_runs.Issue()], main_issue=test_runs.Issue()) Append a list of user-created issues to a test run
append_issues_by_filter filter (Filter)
issues (list of Issues)
main_issue (Issue)
test_runs.append_issues_by_filter(filter=some_filter, issues=[test_runs.Issue()], main_issue=test_runs.Issue()) Append a list of user-created issues to several test runs that match a filter
page_runs_in_triage view_id: Triage View ID
filter: An extra filter for the test runs, used to further filter runs in the triage (for example, to imitate the "filter by cluster" behavior seen in the UI)
pagination (see Common Parameter Models for details)
orderBy (optional; see Common Parameter Models for details)
detailed: see notes below (default is False)
coverage_items: List of coverage items to retrieve (see below)
interval_filter (optional): see notes below
Test Runs Pandas DataFrame test_runs.page_runs_in_triage(view_id=triage_view["id"], pagination={"page": 0, "size": 50}, filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")), detailed=True)

test_runs.page_runs_in_triage(view_id=triage_view["id"], pagination={"page": 0, "size": 50}, filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")), coverage_items=[{"interval": "top.info.issue", "item": "i_severity"}])

test_runs.page_runs_in_triage(view_id=triage_view["id"], pagination={"page": 0, "size": 50}, filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")), interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in")))
Retrieve a page of test runs in a triage view
get_runs_in_triage view_id: Triage View ID
filter: An extra filter for the test runs, used to further filter runs in the triage
orderBy (optional; see Common Parameter Models for details)
detailed: see notes below (default is False)
coverage_items: List of coverage items to retrieve (see notes below)
interval_filter (optional): see notes below
Test Runs Pandas DataFrame test_runs.get_runs_in_triage(view_id=triage_view["id"], filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")))

test_runs.get_runs_in_triage(view_id=triage_view["id"], filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")), coverage_items=[{"interval": "top.info.issue", "item": "i_severity"}])

test_runs.get_runs_in_triage(view_id=triage_view["id"], filter=FilterContainer.of(Filter.any("mainIssueSeverity EQ ERROR")), interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in")))
Get all test runs in the triage view that pass the filter.
get_runs_in_triage_by_encoded_context encoded_context: An encoded context string copied from the Foretify Manager user interface Test Runs Pandas DataFrame test_runs.get_runs_in_triage_by_encoded_context(my_copied_context) A combined UI and Python SDK feature: a user can copy a "triage context" from the UI, representing a specific cluster in the triage, then pass that context to this function to get the DataFrame defined by the cluster.
page_runs_in_triage_by_encoded_context encoded_context: An encoded context string copied from the Foretify Manager user interface
pagination (see Common Parameter Models for details)
orderBy (optional; see Common Parameter Models for details)
detailed: see notes below (default is False)
interval_filter (optional): see notes below
Triage views page page_runs_in_triage_by_encoded_context(my_encoded_context,{"page": 1, "size": 20}, my_order, False, interval_filter=IntervalFilter.of(filter=Filter.all("name EQ sut.vehicle_cut_in"))) A paged version of get_runs_in_triage_by_encoded_context
create_from_s3 url: S3 URL (prefix)
test_run_group_id: Test Suite ID
skip_schema_version_validation: boolean (see below)
None test_runs.create_from_s3("s3://example/test",trg_id, False) Create runs from the data hosted on Amazon S3, under the prefix given by URL
create_from_s3_async url: S3 URL (prefix)
test_run_group_id: Test Suite ID
skip_schema_version_validation: boolean (see below)
Task ID task_id = test_runs.create_from_s3_async("s3://example/test",trg_id, False) Create runs from the data hosted on Amazon S3, under the prefix given by URL. Returns a task ID, which can be used to follow the upload progress (see get_s3_upload_progress)
get_s3_upload_progress task_id: Task ID returned from create_from_s3_async Progress indication dict: see notes below test_runs.get_s3_upload_progress(task_id) Used to keep track of an S3 upload process started by create_from_s3_async

Notes

  • detailed: If True, the retrieved DataFrame will contain additional fields which aren't available by default: groupsSamples, bucketSummaries and issues.

  • include_trace: If True, the retrieved DataFrame will contain traceData.

  • skip_schema_version_validation: If True, Foretify Manager will attempt to accept uploaded runs even if the Foretify schema version is not up to date (which usually indicates the run data was created with a very old Foretify release). False by default, meaning such runs are rejected immediately and an appropriate error message is displayed.

  • coverage_items: List of coverage items to be retrieved together with the runs. In case of multiple hits, the value of the latest hit in each run will be returned. The list should contain objects of the form {"interval": ..., "item": ...}. For example: [{"interval": "some_interval_name", "item": "some_item_name"}, {"interval": "some_interval_name_2", "item": "some_item_name_2"}]. Should be passed with detailed=False.

  • interval_filter: If provided, the intervals caught by the filter will be returned with the corresponding run's intervals field. Should be passed with detailed=False, and can't be passed together with coverage_items.

  • Page Size Limitation: Pagination size of test-runs is limited to 2000 - calling test_runs.page() with a larger size will throw an exception.

  • Progress Indication: A dictionary describing task progress, of the form {"progress": int, "completed": bool, "status": str}, where status is one of "SUCCESSFUL", "EXCEPTION", "ABORTING" or "UNKNOWN".
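The progress dictionary returned by get_s3_upload_progress can be polled until the upload completes. A minimal sketch: fetch_progress stands for any zero-argument callable, such as lambda: test_runs.get_s3_upload_progress(task_id); the stub states below are for illustration only.

```python
import time

def wait_for_completion(fetch_progress, poll_seconds=1.0, timeout_seconds=3600.0):
    """Poll a progress-indication dict until 'completed' is True.

    fetch_progress: a zero-argument callable returning a dict of the form
    {"progress": int, "completed": bool, "status": str}, e.g.
    lambda: test_runs.get_s3_upload_progress(task_id).
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        progress = fetch_progress()
        if progress["completed"]:
            return progress
        time.sleep(poll_seconds)
    raise TimeoutError("S3 upload did not complete in time")

# Illustration with a stub that completes on the third poll
_states = iter([
    {"progress": 10, "completed": False, "status": "UNKNOWN"},
    {"progress": 60, "completed": False, "status": "UNKNOWN"},
    {"progress": 100, "completed": True, "status": "SUCCESSFUL"},
])
final = wait_for_completion(lambda: next(_states), poll_seconds=0.01)
```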

Examples:

Python: Retrieving a run by ID
from ftx.shell import client, test_runs

client.login(host=host, user=user, port=port, https=https)

run = test_runs.get_by_id(test_run_id=run_id, include_trace=True)
Python: Retrieving a page of runs matching a filter
from ftx.shell import client, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

filt = Filter.all("testRunGroupId EQ " + trg_id, "status EQ PASSED")
pagination = {"page": 0, "size": 200}

runs = test_runs.page(filt=filt, pagination=pagination, detailed=True, include_trace=True)
Python: Retrieving Issues of a page of runs
from ftx.shell import client, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

test_run_group_id = trg_id
filt = Filter.all("testRunGroupId EQ " + test_run_group_id)
pagination = {"page": 0, "size": 200}

runs = test_runs.page(filt=filt, pagination=pagination, detailed=True)

issues = runs[['id', 'status', 'issues']]
Python: Retrieving a page of runs from a workspace
from ftx.shell import client, test_runs

client.login(host=host, user=user, port=port, https=https)

runs = test_runs.get_by_workspace_test_run_filter(workspace_id=ws_id, detailed=True, include_trace=True)
Python: Retrieving all runs matching a filter
from ftx.shell import client, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

filt = Filter.any("testRunGroupId EQ " + trg_id)
runs = test_runs.get(filt=filt, detailed=True)
Python: Add issues to runs
from ftx.shell import client, test_runs
from ftx.model.search import Filter, FilterContainer

client.login(host=host, user=user, port=port, https=https)

issue_1 = test_runs.Issue(
  category="some_category", 
  severity="WARNING",
  kind="some_kind", 
  details="some_details",
  time=1460,  # in milliseconds
  fullMessage="some_message",
  normalizedDetails="some_details", 
  modificationReason="some_reason",
  result="some_result"
)

issue_2 = test_runs.Issue(
  category="some_other_category", 
  severity="ERROR", 
  kind="some_other_kind", 
  details="some_other_details",
  time=600,  # in milliseconds 
  fullMessage="some_other_message", 
  normalizedDetails="some_other_details", 
  modificationReason="some_other_reason",
  result="some_other_result"
)

# add issues to a specific run by ID
test_runs.append_issues_by_id(test_run_id=run_id, issues=[issue_1, issue_2])

# add issues to multiple runs by regression ID and setting the main issue
test_runs.append_issues_by_filter(filter=FilterContainer.of(Filter.any("testRunGroupId EQ " + trg_id)), issues=[issue_1, issue_2], main_issue=issue_1)
Python: View Coverage Data from intervals field
from ftx.shell import client, test_runs

client.login(host=host, user=user, port=port, https=https)

run = test_runs.get_by_id(run_id)
for interval in run['intervals']:
    if interval['_type'] == 'CoverageIntervalData':
        interval_start_time = interval['startTime']
        interval_end_time = interval['endTime']
        interval_struct_and_group = interval['name']
        interval_scenarios = interval['ancestorScenarioNames']
        interval_items = interval['items']
Python: View Watcher Data from intervals field
from ftx.shell import client, test_runs

client.login(host=host, user=user, port=port, https=https)

run = test_runs.get_by_id(run_id)
for interval in run['intervals']:
    if interval['_type'] == 'WatcherIntervalData':
        interval_start_time = interval['startTime']
        interval_end_time = interval['endTime']
        interval_struct = interval['name']
        interval_scenarios = interval['ancestorScenarioNames']
        interval_watcher_name = interval['watcherName']
        interval_watcher_type = interval['watcherType']

74.4.9 Intervals API

Provides methods for fetching intervals from the Foretify Manager server and for analyzing them.

Function Parameters Return Example Description
page filt (optional), pagination (optional)
workspace_id (optional): fetch from specified workspace
test_run_filter (optional): first filter the runs by the given filter
coverage_items (optional): List of coverage items to include in the response
a Pandas DataFrame of interval entities for the requested filter and page df = intervals.page(filt=some_filter, test_run_filter=some_other_filter, pagination={"page":0,"size": 50}, workspace_id="some_workspace_id", coverage_items=[{"interval": "vehicle.general_info.end", "item": "vehicle_category", "multiplicity": "ALL"}]) Retrieve a page of intervals
get filt (optional)
size (optional)
workspace_id (optional): fetch from specified workspace
test_run_filter (optional): first filter the runs by the given filter
coverage_items (optional): List of coverage items to include in the response
a Pandas DataFrame of interval entities for the requested filter df = intervals.get(filt=some_filter, test_run_filter=some_other_filter, workspace_id="some_workspace_id", coverage_items=[{"interval": "vehicle.general_info.end", "item": "vehicle_category", "multiplicity": "MOST_RECENT"}]) Retrieve all intervals
page_pairs_by_temporal_relation workspace_id (str): Workspace ID
interval_filter (dict): First set of interval criteria
corr_interval_filter (dict): Corresponding interval criteria
relation (str): Temporal relation type
custom_relation (CustomTimeRelation, optional): Custom temporal relation
page_size (int, optional): Number of pairs per page (default 50)
paging_state (str, optional): Pagination state for fetching subsequent pages
dict with interval pairs and pagination state result = intervals.page_pairs_by_temporal_relation(workspace['id'], filter_a, filter_b, "A_BEFORE_B", page_size=20) Retrieve pairs of intervals with specified temporal relations
create_intervals_by_temporal_relation workspace_id (str): Workspace ID
interval_filter (dict): First set of interval criteria
corr_interval_filter (dict): Corresponding interval criteria
relation (str): Temporal relation type
custom_relation (CustomTimeRelation, optional): Custom temporal relation
result_interval_name (str): Name for resulting compound intervals
concat_names (bool): Whether to concatenate the result_interval_name with the original interval names. Defaults to False
delimiter (str): When concat_names is set to True, will add a delimiter between the original interval names. Defaults to '_'
temporal_action (TemporalAction): Specifies union or intersection
metric_group_prefix (str, optional): A prefix for metric groups in the resulting interval that are related to the first interval in the relation
corr_metric_group_prefix (str, optional): A prefix for metric groups in the resulting interval that are related to the corresponding interval in the relation
context (CompoundCreationContext, optional): Specifies where to save the intervals - globally or on the specific workspace. Defaults to GLOBAL
None intervals.create_intervals_by_temporal_relation(workspace['id'], filter_a, filter_b, "A_CONTAINS_B", custom_relation, "ResultInterval", TemporalAction.UNION, context=CompoundCreationContext.WORKSPACE) Create compound intervals using specified temporal relations
create_compound_intervals_playground workspace_id (str): Workspace ID
interval_filter (dict): First set of interval criteria
corr_interval_filter (dict): Corresponding interval criteria
relation (str): Temporal relation type
custom_relation (CustomTimeRelation, optional): Custom temporal relation
result_interval_name (str): Name for resulting compound intervals
concat_names (bool): Whether to concatenate the result_interval_name with the original interval names. Defaults to False
delimiter (str): When concat_names is set to True, will add a delimiter between the original interval names. Defaults to '_'
temporal_action (TemporalAction): Specifies union or intersection
size (int, optional): Number of intervals in the playground (default 20)
metric_group_prefix (str, optional): A prefix for metric groups in the resulting interval that are related to the first interval in the relation
corr_metric_group_prefix (str, optional): A prefix for metric groups in the resulting interval that are related to the corresponding interval in the relation
dict containing playground ID and compound intervals metadata playground = intervals.create_compound_intervals_playground(workspace['id'], filter_a, filter_b, "CUSTOM", custom_relation, "ResultInterval", TemporalAction.INTERSECTION, size=10) Create a limited set of compound intervals for experimentation
get_compound_intervals_playground id (str): Playground ID dict representing the playground object playground = intervals.get_compound_intervals_playground(playground['playgroundId']) Retrieve a previously created playground
create_compound_intervals_by_playground id (str): Playground ID
context (CompoundCreationContext, optional): Specifies where to save the intervals - globally or on the specific workspace. Defaults to GLOBAL
None intervals.create_compound_intervals_by_playground(playground['playgroundId'], context=CompoundCreationContext.WORKSPACE) Create the full set of compound intervals based on a playground
delete_compound_intervals_playground id (str): Playground ID None intervals.delete_compound_intervals_playground(playground['playgroundId']) Delete a playground
page_compound filt (optional), pagination (optional)
workspace_id (optional): fetch from specified workspace
test_run_filter (optional): first filter the runs by the given filter
coverage_items (optional): List of coverage items to include in the response
Intervals Pandas DataFrame df = intervals.page_compound(filt=some_filter, test_run_filter=some_other_filter, pagination={"page":0,"size": 50}, workspace_id="some_workspace_id", coverage_items=[{"interval": "vehicle.general_info.end", "item": "vehicle_category", "multiplicity": "ALL"}]) Retrieves a page of compound intervals based on specified filters, pagination, and workspace context.
create_compound_intervals_by_compound_rule workspace_id (str): Workspace ID
compound_rule_id: Compound rule ID
progress_callback (function)(optional): A function to be invoked each time a progress message is received from the server
None intervals.create_compound_intervals_by_compound_rule(workspace_id, rule_id) Creates compound intervals based on a compound rule.
delete_compound_intervals workspace_id (str): A workspace ID.
compound_rule_id (str): A compound rule ID
None intervals.delete_compound_intervals(workspace_id=ws["id"], compound_rule_id=rule["id"]) Deletes all compound intervals created by the given rule

Note

In all compound interval APIs, both interval_filter and corr_interval_filter are required to have a non-null interval_name. Supported interval_type pairs are:

  • MatchIntervalData -> MatchIntervalData
  • MatchIntervalData -> WatcherIntervalData
  • WatcherIntervalData -> MatchIntervalData
  • WatcherIntervalData -> WatcherIntervalData

Additional combinations may be supported in future Foretify Manager versions.

For more details, see Type relationship tables of compound intervals.
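
Unlike the page-number pagination used elsewhere, page_pairs_by_temporal_relation pages with an opaque paging_state token: each call returns a batch of pairs plus the state to pass into the next call. The loop can be sketched generically as below; the "pagingState" key name in the returned dict is an assumption and may differ in your SDK version:

```python
def iter_pair_pages(fetch, page_size=50):
    """Drain a paging_state-style API: call fetch(page_size, paging_state)
    until no further state token is returned."""
    state = None
    while True:
        result = fetch(page_size=page_size, paging_state=state)
        yield result
        state = result.get("pagingState")  # assumed key name
        if not state:
            break

# With the SDK this might be driven as (requires a live session, not run here):
# for batch in iter_pair_pages(lambda page_size, paging_state:
#         intervals.page_pairs_by_temporal_relation(
#             ws_id, filter_a, filter_b, "A_BEFORE_B",
#             page_size=page_size, paging_state=paging_state)):
#     handle(batch)
```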

74.5.1 Common Parameters

  • relation (in compound APIs):
    One of the following strings: "ANY_INTERSECTION", "A_BEFORE_B", "A_AFTER_B", "A_CONTAINS_B", "B_CONTAINS_A", "CUSTOM"

  • temporal_action:
    An instance of the temporal_relations.TemporalAction enum class. Possible values are UNION or INTERSECTION.

  • custom_relation:
    An instance of the temporal_relations.CustomTimeRelation class, which specifies constraints on the distances between two interval endpoints.
    For a full description, see temporal_relations.CustomTimeRelation in the temporal_relations module of ftx.shell.

Examples:

Python: Retrieving a page of intervals by filters
from argparse import ArgumentParser
from ftx._common.utils import add_common_arguments

def parse():
    parser = ArgumentParser(description="Getting arguments")
    add_common_arguments(parser)
    parser.add_argument("--regressions", nargs="+", help="regression ids")
    return parser.parse_args()

args = parse()
globals().update(vars(args))
trg_id = args.regressions[0]


from ftx.shell import client, intervals
from ftx.model.search import Filter, FilterContainer, RunFilter, IntervalFilter

client.login(host=host, user=user, port=port, https=https)

# fetch first 200 intervals under this tsr that are either watcher intervals or coverage intervals
filter_container = FilterContainer.of(
    RunFilter.of(Filter.all("testRunGroupId EQ " + trg_id)),
    IntervalFilter.of(filter=Filter.any("_type EQ WatcherIntervalData", "_type EQ CoverageIntervalData"))
)

results = intervals.page(filt=filter_container, pagination={"page": 0, "size": 200})
Python: Retrieving a page of intervals first by fetching test runs that fit the query and then by interval filters
from argparse import ArgumentParser
from ftx._common.utils import add_common_arguments

def parse():
    parser = ArgumentParser(description="Getting arguments")
    add_common_arguments(parser)
    parser.add_argument("--regressions", nargs="+", help="regression ids")
    return parser.parse_args()

args = parse()
globals().update(vars(args))
trg_id = args.regressions[0]


from ftx.shell import client, intervals
from ftx.model.search import Filter, FilterContainer, RunFilter, IntervalFilter, ItemFilter

client.login(host=host, user=user, port=port, https=https)

test_runs_filter = FilterContainer.of(
    ItemFilter.of("sut.top.info", Filter.single("test EQ test")),
    RunFilter.of(filter=Filter.single(f'testRunGroupId EQ {trg_id}'))
)
interval_filter = FilterContainer.of(
    IntervalFilter.of(filter=Filter.single("_type EQ ScenarioIntervalData")),
)
# filter intervals in 2 steps. first filter runs with FilterContainer and then intervals by another FilterContainer
intervals.page(filt=interval_filter, test_run_filter=test_runs_filter)
Python: Retrieving a page of intervals first by child intervals attributes
from argparse import ArgumentParser
from ftx._common.utils import add_common_arguments

def parse():
    parser = ArgumentParser(description="Getting arguments")
    add_common_arguments(parser)
    parser.add_argument("--workspace", nargs="?", help="workspace id")
    return parser.parse_args()

args = parse()
globals().update(vars(args))
ws_id = args.workspace


from ftx.shell import client, intervals, workspaces
from ftx.model.search import Filter, IntervalFilter, ItemFilter, AdvancedIntervalFilter

client.login(host=host, user=user, port=port, https=https)

# add filter to workspace filters and fetch relevant intervals
interval_child_filter = AdvancedIntervalFilter.of(
    first_level_type=IntervalFilter.IntervalType.GlobalModifierIntervalData,
    children=[ItemFilter.of("top.info.main_issue", filter=Filter.single("result EQ Passed - no issue"))]
)
workspaces.filter_update(workspace_id=ws_id, children_relation_interval_filter=interval_child_filter)
matching_intervals = intervals.page(workspace_id=ws_id)

# resetting the filter
workspaces.filter_update(workspace_id=ws_id, children_relation_interval_filter=AdvancedIntervalFilter.empty())
matching_intervals = intervals.page(workspace_id=ws_id)

Creating a custom_relation which specifies "the distance from the start of the first interval to the start of the second interval is between 1 and 2, and the distance between their endpoints is between 1 and 3":

Python: Create a custom time relation for a compound intervals rule
from ftx.shell import client
from ftx.shell.temporal_relations import *

client.login(host=host, user=user, port=port, https=https)

custom_relation = CustomTimeRelation(start_to_start=OpenRange(1, 2), end_to_end=OpenRange(1, 3))
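
To make the semantics of this relation concrete, here is a plain-Python sketch (not part of the SDK) that checks whether two intervals, given as (start, end) pairs, satisfy it. It assumes OpenRange denotes an exclusive-bound range and that distances are measured from the first interval to the second:

```python
def satisfies_custom_relation(a, b):
    """Check the example relation: start(b) - start(a) in (1, 2)
    and end(b) - end(a) in (1, 3). Intervals are (start, end) tuples.
    Bounds are treated as exclusive, assuming that is what OpenRange means."""
    s_dist = b[0] - a[0]
    e_dist = b[1] - a[1]
    return 1 < s_dist < 2 and 1 < e_dist < 3

print(satisfies_custom_relation((0.0, 10.0), (1.5, 12.0)))  # True
print(satisfies_custom_relation((0.0, 10.0), (3.0, 12.0)))  # False: start distance too large
```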

74.5.2 Workspaces API

Provides methods for creating Foretify Manager Workspaces, changing their Runs Filter, and performing calculation tasks such as annotating them and ranking their corresponding runs.

Function Parameters Return Example Description
create name: Workspace name
vplan_id (deprecated)
metric_model_id (deprecated): Metric model ID/s
vplan_template_id (optional): VPlan template ID
runs_filter (deprecated): Filter to query runs
dynamic_filter (optional): Filter to match future TSRs, i.e., a capture rule for the workspace
project_id: Project to include the workspace in
copy_workspace_id (optional): Another workspace to create a copy of
included_test_suite_ids (optional): List of test suite IDs for the workspace to capture (in addition to the capture rule)
excluded_test_suite_ids (optional): List of test suite IDs for the workspace not to capture
wait_for_capture_timeline_points_to_finish: Whether to wait for the timeline point capturing operation to finish before fetching the workspace
workspace workspaces.create(name="my_workspace", project_id=proj["id"], included_test_suite_ids=[tsr_1["id"], tsr_2["id"]], excluded_test_suite_ids=[tsr_3["id"]], wait_for_capture_timeline_points_to_finish=True) Create a new workspace
get_by_id workspace_id workspace workspace = workspaces.get_by_id(ws_id) Get a workspace by its ID
get_by_timeline_point_id timeline_point_id: timeline point id Workspace workspaces.get_by_timeline_point_id(tp_id) Retrieve a workspace by one of its timeline points. Returns the global data of the selected timeline point, with no effect on the current user view.
page filt, pagination, orderBy (see Common Parameter Models for details), project_id (optional) a Pandas DataFrame of a page of workspace entities df = workspaces.page(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder) Retrieve a page from the user's workspaces (by Filter)
get filt, orderBy (see Common Parameter Models for details), project_id (optional) a Pandas DataFrame of all the user's workspace entities matching a filter df = workspaces.get(filt=someFilter, orderBy=someOrder) Retrieve the user's workspaces (by Filter)
update workspace_id
new_name: A new name for the workspace
new_runs_filter: A new runs filter for the workspace
new_current_timeline_point: The ID of one of the workspace's timeline points, to be set as the new current timeline point
capture_rule: A new capture rule for the workspace that includes a Test Suite Result (TSR) filter and lists of included or excluded IDs in the form {"rule":filter, "includeTestSuiteIds":list(str),"excludeTestSuiteIds":list(str)}
wait_for_capture_timeline_points_to_finish: Specifies whether to wait for the timeline points capturing operation to complete before fetching the workspace
overwrite_protected_filters: Specifies whether to allow overwriting the protected elements of the current run filter
workspace workspaces.update(ws_id, new_name="WorkspaceUpdated")
workspaces.update(ws_id, new_runs_filter=Filter.any(condition1, condition2))
workspaces.update(ws_id, new_runs_filter=FilterContainer.of(run_filter, item_filter),new_current_timeline_point=timeline_point["id"], capture_rule={"rule":filter, "includeTestSuiteIds":[id_1,id_2],"excludeTestSuiteIds":[id_3]}, wait_for_capture_timeline_points_to_finish=False, overwrite_protected_filters=False)
Update various workspace settings
add_test_run_groups workspace_id
test_run_groups_ids: list of test run group IDs to add to the filter
workspaces.add_test_run_groups(id, ["trg1_id", "trg2_id"]) Add a Test Run Group to the workspace's filter
remove_test_run_groups workspace_id
test_run_groups_ids: list of test run group IDs to remove from the filter
workspaces.remove_test_run_groups(id, ["trg3_id", "trg4_id"]) Remove a Test Run Group from the workspace's filter
annotate workspace_id workspace workspace = workspaces.annotate(ws_id) Annotate the workspace (the current timeline point gets annotated)
annotate_by_timeline_point workspace_id: A workspace ID
timeline_point_id: A timeline point ID (one of the workspace's timeline points)
workspace workspace = workspaces.annotate_by_timeline_point(workspace_id=workspace["id"], timeline_point_id=timeline_point["id"]) Annotate the workspace (the specified timeline point gets annotated)
delete workspace_id workspaces.delete(ws_id) Delete the workspace
set_attribute workspace_id
filt: FilterContainer
attribute_name
attribute_value
workspaces.set_attribute(ws_id, filt, attribute_name, attribute_value) Sets an attribute value on the workspace's runs matching the filter
set_attributes_by_ids workspace_id
runs_to_attributes: Dict of test run IDs as keys and tuples of attribute name and attribute value as values
workspaces.set_attributes_by_ids(ws_id, { run_id: [('mainIssueKind', 'other')]}) Sets an attribute value on the workspace's runs matching the IDs
filter_update workspace_id
new_runs_filter: A run's filter object
children_relation_interval_filter: Advanced interval filter
overwrite_protected_filters: Specifies whether to allow overwriting the protected elements of the current run filter
workspace workspaces.filter_update(workspace["id"], Filter.single("status EQ FAILED"), overwrite_protected_filters=False) Update a workspace's filter
publish workspace_id workspace workspaces.publish(workspace["id"]) Publish local workspace changes to the global workspace
align workspace_id workspace workspaces.align(workspace["id"]) Align the user's workspace data with the global workspace
reset_filter workspace_id workspace workspaces.reset_filter(workspace["id"]) Resets the local (user's) workspace filter to its pure state (only test suite IDs)
reset_global_filter workspace_id None workspaces.reset_global_filter(workspace["id"]) Resets the global workspace filter to its pure state (only test suite IDs)
sync_requirements workspace_id None workspaces.sync_requirements(workspace["id"]) Sync the workspace sections' VGrades and error-rate with an external requirements management tool, if configured
set_metric_model_by_timeline_point workspace_id: A workspace ID
timeline_point_id: A timeline point ID (one of the workspace's timeline points)
workspace workspaces.set_metric_model_by_timeline_point(workspace_id=workspace["id"], timeline_point_id=timeline_point["id"]) Set a timeline point to be used as a metric point for the workspace. This point determines the metric model. Additionally, this sets the metric model mode to MANUAL, meaning new test suites (captured by the capture rules) do not change the model from now on (unless stated otherwise explicitly).
reset_metric_model_mode_to_automatic workspace_id: A workspace ID workspace workspaces.reset_metric_model_mode_to_automatic(workspace["id"]) Reset the metric model mode of the workspace back to automatic, meaning that the next time a test suite is captured, the metric model changes accordingly
page_all_viewable_workspaces filt, pagination, orderBy (see Common Parameter Models for details) a Pandas DataFrame of all the user's workspace entities matching a filter df = workspaces.page_all_viewable_workspaces(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder) Retrieve a page of all workspaces the user holds (at the very least) view permissions for.
get_triage_steps workspace_id list of TriageStep workspaces.get_triage_steps(workspace['id']) Fetches all available triage steps to apply.
apply_triage workspace_id
steps: list of the triage steps to apply
workspaces.apply_triage(workspace_id=workspace['id'], steps=workspaces.get_triage_steps(workspace['id'])) Apply triage steps onto a workspace.
upload_rules_directory workspace_id
rules_path: path of the rules' directory
rules package (metadata) workspaces.upload_rules_directory(workspace['id'], "/home/tdocs") Upload a (triage) rules directory to Foretify Manager and set it as the current rules package of the workspace.
delete_rules_directory workspace_id workspaces.delete_rules_directory(workspace['id']) Delete the rules directory from a workspace.
get_compound_rules workspace_id: ID of the workspace list of compound rules rules = workspaces.get_compound_rules(workspace_id) Get compound rules for the workspace.
create_compound_rule workspace_id: Workspace ID
interval_filter: Filter for first set of intervals
corr_interval_filter: Filter for corresponding set
relation: Temporal relation ("ANY_INTERSECTION", etc.)
custom_relation: Optional custom relation
result_interval_name: Name for result
concat_names (bool): Whether to concatenate the result_interval_name with the original interval names. Defaults to False
delimiter (str): When concat_names is set to True, will add a delimiter between the original interval names. Defaults to '_'
temporal_action: TemporalAction
context: CompoundCreationContext
name: Rule name
dict (compound rule) workspaces.create_compound_rule(workspace_id, interval_filter, corr_interval_filter, "ANY_INTERSECTION", None, "my_result", TemporalAction.UNION, context, "CompoundRuleName") Creates a compound rule. Note: Both filters must be of type MatchIntervalData with non-null interval_name.
build_compound_rule_definition interval_filter: Filter for first set of intervals
corr_interval_filter: Filter for corresponding set
relation: Temporal relation ("ANY_INTERSECTION", etc.)
custom_relation: Optional custom relation
result_interval_name: Name for result
concat_names (bool): Whether to concatenate the result_interval_name with the original interval names. Defaults to False
delimiter (str): When concat_names is set to True, will add a delimiter between the original interval names. Defaults to '_'
temporal_action: TemporalAction
workspace_id: Workspace ID
metric_group_name: A prefix for metric groups in the resulting interval that are related to the first interval in the relation
corr_metric_group_name: A prefix for metric groups in the resulting interval that are related to the corresponding interval in the relation
dict (compound rule definition request) intervals.build_compound_rule_definition(interval_filter, corr_interval_filter, "ANY_INTERSECTION", None, "my_result", TemporalAction.UNION, "ws_id") Helper method to build a compound rule definition request. Note: Both filters must be of type MatchIntervalData with non-null interval_name.
update_compound_rule workspace_id: Workspace ID
rule_id: Rule ID
name : Optional updated rule name
context Optional updated compound rule context
compound_rule_params: Optional updated compound rule definition
dict (updated rule) workspaces.update_compound_rule(workspace_id, rule_id, intervals.build_compound_rule_definition(interval_filter=filter1, corr_interval_filter=filter2, relation="ANY_INTERSECTION", custom_relation=None, result_interval_name="res_name", temporal_action=temporal_relations.TemporalAction.UNION, workspace_id=workspace_id), temporal_relations.CompoundCreationContext.WORKSPACE, "new name") Updates an existing rule. You can modify filters, relation, action, or name.
delete_compound_rule workspace_id: Workspace ID
rule_id: Rule ID
None workspaces.delete_compound_rule(workspace_id, rule_id) Deletes a compound rule.
reorder_compound_rules workspace_id: Workspace ID
new_order: List of rule IDs
None workspaces.reorder_compound_rules(workspace_id, ["rule_id1", "rule_id2", "rule_id3"]) Modifies the rule order in the list.
delete_compound_rules workspace_id (str): Workspace ID
rule_ids (list(str)): Compound rule IDs
None workspaces.delete_compound_rules(ws_id, [rule[0]["id"], rule[2]["id"]]) Deletes multiple compound rules from the workspace.
switch_vplan_view workspace_id: str
vplan_view_id: str
workspace workspaces.switch_vplan_view(workspace_id, vplan_view_id) Switches the VPlan view of the workspace to the specified VPlan/view. Should be a view of the same VPlan as the existing VPlan/view, or the VPlan itself.

Note

workspaces.get() and workspaces.page() - When passing a project ID, users will only get workspaces for which they have at least view permissions. If the project_id is not passed, only workspaces created by the current user will be retrieved. In future releases, passing project_id will be required.

Note

Setting the workspace's runs filter by calling workspaces.update() or workspaces.filter_update() only affects the filter visible to the user ("local filter"). To make the change visible to all users of the workspace, a subsequent call to workspaces.publish() is required.

Examples:

Python: Creating a vplan view and switching to it
from ftx.shell import client, vplan_templates, workspaces

client.login(host=host, user=user, port=port, https=https)

vplan_base = vplan_templates.create(vplan_templates.collect("./demo/demo.vplan"))
workspace = workspaces.create(name="my_workspace_with_vplan_view",
                              vplan_template_id=vplan_base["id"],
                              project_id=project_id)
vplan_view_dict = {"name": "demo_vplan_view", "parentVplanTemplateId": vplan_base["id"], "source": "from_code"}
vplan_view = vplan_templates.create(vplan_view_dict)
workspace = workspaces.switch_vplan_view(workspace["id"], vplan_view["id"])  # Switch workspace to use the vplan view
workspace = workspaces.switch_vplan_view(workspace["id"], vplan_base["id"])  # Switch back to the base vplan template

74.5.2.1 Ranking

Function Parameters Return Example Description
rank workspace_id
vplan_path: Path to a VPlan node to be ranked (see notes below).
rank_limit: max. ranking results (default = 10, see notes below).
target_grade: max cumulative grade of the ranking results (default = 100, see notes below)
grade_type: The grading scheme to be used (default = AVG, see notes below).
a description of the ranking operation results (see notes below) workspaces.rank(id, {"sections":["top","section1","section2"], "elements":[]}, 1000, 100) Rank the workspace's runs according to the given VPlan node and grading parameters
ranked_runs workspace_id: ID of ranked workspace
pagination: page number and size of the returned results (see Common Parameter Models for details).
a Pandas DataFrame of a page of ranked test runs, ordered by their cumulative grade (rank) runs = workspaces.ranked_runs(id, {"page": 0, "size": 1000}) Retrieve a page of the ranked runs
get_ranked_runs workspace_id: ID of ranked workspace
a Pandas DataFrame of all ranked test runs, ordered by their cumulative grade (rank) runs = workspaces.get_ranked_runs(id) Retrieve all of the ranked runs

Notes:

  • rank() waits for the ranking operation to finish. It returns a description of the ranking results (see below) and not the ranked runs themselves. The ranked test runs, paged and ordered by their cumulative grade (rank), are returned when calling ranked_runs().

  • rank_limit and target_grade are the ranking calculation's limits: it stops evaluating additional runs as soon as either limit is reached (i.e., either rank_limit runs have been evaluated, or the cumulative grade of the runs evaluated so far has reached target_grade).

  • vplan_path is a dictionary containing the fields sections and elements, both lists. For example, to rank the top.section1.section2 section:

    vplan_path = {
        "sections" :["top", "section1","section2"],
        "elements": []
    }
    
    Or, to rank the cut_in.end.speed item under the top.section1.section2 section:
    vplan_path = {
        "sections" :["top", "sections1", "section2"],
        "elements": ["cut_in", "end", "speed"]
    }
    

  • grade_type is the grading scheme to be used while ranking (when calculating the cumulative grade). The possible types are: 1) AVG, the averaging grade scheme; 2) BUCKET, the bucket grade scheme; 3) PROG_BUCKET, the progressive bucket grade scheme. For detailed information about the schemes, see Workspace grading object.

  • The result of a rank() request is an object containing a description of the requested rank (rankedNodeName, rankLimit & targetGrade) and the results of the rank (cumulativeGrade & rankedRuns), for example:

    rankings = {
        "rankedNodeName": "top.section1.section2.cut_in.end.speed",
        "rankLimit": 1000,
        "targetGrade": 100.0,
        "cumulativeGrade": 88.53,
        "rankedRuns": 1000
    }
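
The same vplan_path shape is shared by rank(), get_item_buckets(), and the target/weight editing APIs, optionally with a "bucket" key as in the target-editing examples. A tiny helper for building it; this is a convenience sketch, not an SDK function:

```python
def vplan_path(sections, elements=(), bucket=None):
    """Build a vplan_path dict of the documented shape:
    {"sections": [...], "elements": [...]}, plus an optional "bucket" key."""
    path = {"sections": list(sections), "elements": list(elements)}
    if bucket is not None:
        path["bucket"] = bucket
    return path

# The top.section1.section2 section:
print(vplan_path(["top", "section1", "section2"]))
# The cut_in.end.speed item under it, narrowed to one bucket:
print(vplan_path(["top", "section1", "section2"], ["cut_in", "end", "speed"], bucket="bucket_name"))
```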
    

74.5.2.2 Buckets retrieval

Provides a service for fetching buckets of a given item.

Function Parameters Return Example Description
get_item_buckets workspace_id
vplan_path: Path to a VPlan node which represents an Item (see notes above regarding vplan_path)
a list of buckets workspaces.get_item_buckets(id, {"sections":["section1","section2"], "elements":["scenario_name","group_name","item_name"]}) Returns the buckets of a given item

Note

Retrieving buckets of cross items is only possible with workspaces.get_item_buckets(): the bucket list of cross items will appear to be empty when retrieved via a workspace.sections object.

74.5.2.3 Target editing

Provides a service for editing targets on items and buckets.

Function Parameters Return Example Description
update_element_target workspace_id
vplan_path: Path to a VPlan node which represents an item or a bucket (see notes above regarding vplan_path)
target: a new target for the item/bucket
workspace workspaces.update_element_target(id, {"sections":["section1","section2"], "elements":["scenario_name","group_name","item_name"],"bucket": "bucket_name"}, 2) Updates the item's or bucket's target described by vplan_path
reset_element_target workspace_id
vplan_path: Path to a VPlan node which represents an item or a bucket (see notes above regarding vplan_path)
workspace workspaces.reset_element_target(id, {"sections":["section1","section2"], "elements":["scenario_name","group_name","item_name"]}) Resets the target of the item or bucket described by vplan_path to its original value, as defined in the metric model
reset_element_targets_from_path workspace_id
vplan_path: Path to a VPlan node (see notes above regarding vplan_path)
include_buckets: Whether to reset the target value of all buckets under the VPlan path
include_path_root: Whether to reset the target of the VPlan path itself, if the target field exists
workspace workspaces.reset_element_targets_from_path(id, {"sections":["section1"], "elements":["scenario_name","group_name","item_name"]}, True, True) Resets all targets under given vplan_path, each with its original value, defined in the metric model
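A sketch of a typical target-editing flow, written as helpers that take the workspaces service as an argument (the helper names are hypothetical; only the SDK calls inside them come from the table above):

```python
def retarget_bucket(workspaces, workspace_id, bucket_path, new_target):
    """Override a single item's or bucket's target."""
    return workspaces.update_element_target(workspace_id, bucket_path, new_target)


def restore_targets(workspaces, workspace_id, vplan_path):
    """Reset all targets under vplan_path (buckets and the path root
    included) back to their metric-model values."""
    return workspaces.reset_element_targets_from_path(
        workspace_id, vplan_path, True, True)
```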

74.5.2.4 Weight editing

Provides a service for editing weights on VPlan nodes.

Function Parameters Return Example Description
update_element_weight workspace_id: string
vplan_path: dict (see notes above regarding vplan_path)
weight: float
workspace workspaces.update_element_weight(id, {"sections":["section1","section2"], "elements":["scenario_name","group_name","item_name"]}, 2) Updates the node's weight
reset_element_weight workspace_id: string
vplan_path: dict (see notes above regarding vplan_path)
workspace workspaces.reset_element_weight(id, {"sections":["section1","section2"], "elements":["scenario_name","group_name","item_name"]}) Resets the node's weight to its value in the VPlan base (if one exists), or to the default value, 1

74.5.2.5 Metrics Exclude/Include

Provides a service for selectively excluding VPlan nodes (or buckets) from, or including them in, the coverage calculation.

Function Parameters Return Example Description
exclude_elements workspace_id
vplan_paths: Paths to a VPlan node which will be excluded from coverage calculation
Workspace with exclusion impacted coverage grade path1 = {"sections":["section1"], "elements": ["scenario_name","group_name","item_name"],"bucket": "bucket_name"}
path2 = {"sections":["section2"], "elements": ["scenario_name","group_name","item_name"]}
workspace_with_excludes = workspaces.exclude_elements(workspace_id,[path1,path2])
Excludes specific VPlan nodes or buckets (which were included) from the coverage calculation of a workspace and returns the modified workspace
include_elements workspace_id
vplan_paths: Paths to a VPlan node which will be included in coverage calculation
Workspace with inclusion impacted coverage grade path1 = {"sections":["section1"], "elements": ["scenario_name","group_name","item_name"],"bucket": "bucket_name"}
path2 = {"sections":["section2"], "elements": ["scenario_name","group_name","item_name"]}
workspace_with_includes = workspaces.include_elements(workspace_id,[path1,path2])
Includes specific VPlan nodes or buckets (which were excluded) in the coverage calculation of a workspace and returns the modified workspace

VPlan Views: Inclusion Limitation

When working with VPlan views (a VPlan that has a base VPlan), once a node is excluded in the base VPlan, the entire VPlan branch (the node, its ancestors, and its descendants) is ruled out for inclusion in the view. Attempting to include such a node results in an error.
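Because exclusion recomputes the coverage grade, the pair of calls can be used for a "what-if" measurement. A sketch (the wrapper is illustrative; it assumes the paths were included to begin with, so re-including them is legal):

```python
def exclude_then_restore(workspaces, workspace_id, vplan_paths):
    """Exclude the given nodes/buckets, capture the workspace with the
    recomputed grade, then include them again to restore the original
    calculation."""
    excluded_ws = workspaces.exclude_elements(workspace_id, vplan_paths)
    workspaces.include_elements(workspace_id, vplan_paths)
    return excluded_ws
```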

74.5.2.6 Filter runs in/out

Provides services for workspace runs inclusion (filter_in()) and exclusion (filter_out()). They address simple run-related attributes as well as metric-related attributes.

Note

The methods below effectively add a filter element (RunFilter, ItemFilter, or IntervalFilter) to the workspace. To make more complex changes to the workspace's runs filter, construct a FilterContainer and pass it to update().

Note

The methods below only change the workspace's filter visible to the user ("local filter"), and any user viewing the workspace can execute them (not requiring edit permissions). To make the change visible to all users of the workspace, a subsequent call to workspaces.publish() is required.

Function Parameters Return Example Description
filter_in workspace_id: workspace ID.
filter: a new filter that will be added for the given workspace.
group_qualified_name: a metric group name for metrics based filtering.
item_conditions: list of metric conditions.
updated workspace workspace = workspaces.filter_in(workspace["id"], filt=Filter.all("status EQ PASSED"))
workspace = workspaces.filter_in(workspace["id"], group_qualified_name="drive.end", item_conditions=["me_dut_collision EQ TRUE"])
Add an inclusive filter to the workspace's runs filter
filter_out workspace_id: workspace ID.
filter: a new filter that will be added for the given workspace.
group_qualified_name: a metric group name for metrics based filtering.
item_conditions: list of metric conditions.
updated workspace workspace = workspaces.filter_out(workspace["id"], filt=Filter.all("status EQ PASSED"))
workspace = workspaces.filter_out(workspace["id"], group_qualified_name="drive.end", item_conditions=["me_dut_collision EQ TRUE"])
Add an exclusive filter to the workspace's runs filter
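Combining the two notes above, a typical flow narrows the local filter and then publishes the change for other users. A sketch (the exact signature of workspaces.publish() is an assumption here; the filter object is passed in so the sketch stays self-contained):

```python
def keep_only(workspaces, workspace_id, filt):
    """Add an inclusive filter element to the workspace's local runs
    filter, then publish so all workspace users see the change."""
    ws = workspaces.filter_in(workspace_id, filt=filt)
    workspaces.publish(workspace_id)  # assumed signature: publish(workspace_id)
    return ws
```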

74.5.2.7 Triage comparison

Provides services for managing triage comparison settings for a workspace, and running the comparison algorithm.

Function Parameters Return Example Description
get_triage_comparison_settings workspace_id: workspace ID triage comparison settings settings = workspaces.get_triage_comparison_settings(workspace["id"]) Get the triage comparison settings of the workspace.
set_triage_comparison_settings workspace_id: workspace ID
corresponding_test_matching_rules (optional) : A list of attributes to determine correspondence
same_results_matching_rules (optional) : A list of attributes to determine a match
triage comparison settings new_settings = workspaces.set_triage_comparison_settings(workspace_id = workspace["id"], corresponding_test_matching_rules=["testName", "seed"],same_results_matching_rules=["status"]) Set the triage comparison settings for the workspace. If not set, the default ones will be used for comparison. Used both for create and update.
triage_compare workspace_id: workspace ID NONE workspaces.triage_compare(workspace["id"]) Run the triage comparison algorithm on the workspace. As a result, the attributes comparedRunId and comparisonStatus are computed for each run in the current test suite of the workspace.
apply_rules workspace_id : workspace ID None workspaces.apply_rules(workspace["id"]) Apply all (enabled) triage rules attached to given workspace. After calling this method, test runs fetched in this workspace's context will have their attributes updated, according to the rules
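A sketch of a complete comparison pass, mirroring the example calls above (the wrapper function is illustrative; the matching-rule attribute names are taken from the table):

```python
def run_triage_comparison(workspaces, workspace_id):
    """Configure how corresponding runs are matched, then run the
    comparison; comparedRunId and comparisonStatus are computed for
    every run in the workspace's current test suite."""
    workspaces.set_triage_comparison_settings(
        workspace_id=workspace_id,
        corresponding_test_matching_rules=["testName", "seed"],
        same_results_matching_rules=["status"])
    workspaces.triage_compare(workspace_id)
```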

74.5.3 Triage rules

Provides services for managing and applying triage rules.

Function Parameters Return Example Description
create_triage_rule workspace_id
name
attributes
filter
disabled (default False)
triage rule workspaces.create_triage_rule(workspace["id"], "seed 0 to fail", True, [{"attributeName" : "status", "attributeValue" : "FAILED"}], filter = FilterContainer.of(Filter.any("seed EQ 0"))) Create a triage rule
get_triage_rules workspace_id triage rules workspaces.get_triage_rules(workspace["id"]) Get the triage rules of a workspace
update_triage_rule workspace_id
rule_id
name (optional)
disabled (optional)
attributes (optional)
filter (optional)
triage rule workspaces.update_triage_rule(workspace_id=workspace["id"], rule_id=rule["id"], name="updated") Update a triage rule
delete_triage_rule workspace_id
rule_id
None workspaces.delete_triage_rule(workspace_id=workspace["id"], rule_id= rule["id"]) Delete a triage rule
delete_triage_rules workspace_id
rule_ids
None workspaces.delete_triage_rules(workspace_id=workspace["id"], rule_ids = [rule1["id"], rule2["id"]]) Batch delete triage rules
reorder_triage_rules workspace_id
ids_in_new_order
list of triage rules workspaces.reorder_triage_rules(workspace["id"], [rule1["id"],rule2["id"]]) Reorder the list of rules for the workspace. Note: list must include all the rule ids in this workspace.
import_triage_rules_from_zip path: Path to a triage rules zip-file
workspace_id: Workspace ID to import into
copy_mode: OVERRIDE, APPEND or REPLACE (default OVERRIDE) (see below)
None workspaces.import_triage_rules_from_zip(path = "/my/home/my_rules.zip", workspace_id=workspace["id"], copy_mode="REPLACE") Import triage rules from a zip-file (previously exported by exports.export_triage_rules_as_zip) into another workspace

Note: copy_mode controls how possible conflicts are handled when importing rules into a workspace:

  1. OVERRIDE: in case of conflicting rules (where an identical name exists), prefer the imported rule by overriding the existing one.
  2. APPEND: in case of conflicting rules, an error will be returned.
  3. REPLACE: delete all existing rules in the workspace, and add the imported ones instead.
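As a sketch, creating a rule that stamps an attribute on matching runs and then applying all enabled rules might look like this (keyword names follow the parameter list above; the filter object is passed in so the sketch stays self-contained):

```python
def stamp_failed(workspaces, workspace_id, filt):
    """Create a triage rule that sets status=FAILED on runs matching
    filt, then apply all enabled rules of the workspace."""
    rule = workspaces.create_triage_rule(
        workspace_id,
        name="seed 0 to fail",
        attributes=[{"attributeName": "status", "attributeValue": "FAILED"}],
        filter=filt)
    workspaces.apply_rules(workspace_id)
    return rule
```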

74.5.4 Timeline points API

Provides methods for managing a workspace's timeline points.

Function Parameters Return Example Description
get_timeline_points workspace_id: workspace ID
filt (optional): A timeline points filter
orderBy (optional)
detailed (optional)
Timeline points data frame workspaces.get_timeline_points(workspace["id"]) Get all timeline points of a workspace.
merge workspace_id: workspace ID
src_timeline_point_ids: A list of timeline point IDs
dst_timeline_point_id : A timeline point ID
The merged timeline point point_ids = workspaces.get_timeline_points(ws_id)["id"].tolist()
workspaces.merge(ws_id, point_ids[0:1], point_ids[2])
Merge all the timeline points given in the list src_timeline_point_ids into the point dst_timeline_point_id. This is used to aggregate multiple test suite results into a single test suite result within the context of a workspace. All points must belong to the given workspace; otherwise, an exception is thrown. This function corresponds to "grouping" in the UI.
split workspace_id: workspace ID
timeline_point_id: A timeline point ID
None workspaces.split(ws_id, merged_point_id) Split a previously merged point back into the original points from which it was derived. The point must belong to the given workspace and must be a merged point. This function corresponds to "un-grouping" in the UI.
set_previous_compare_point workspace_id: Workspace ID
timeline_point_id: A timeline point ID
compare_point_id: A timeline point ID to set as previous to timeline_point_id
Timeline point point_ids = workspaces.get_timeline_points(ws_id)["id"].tolist()
workspaces.set_previous_compare_point(ws_id, point_ids[1], point_ids[0])
Set a previous compare point for a timeline point. This compare point will be used when comparing runs in triage

Note: Creating or updating a workspace with a filter that contains more than 100 conditions is not allowed. Instead, create a tag, apply it to the relevant runs iteratively, and finally create a workspace that filters according to that tag.
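Grouping all of a workspace's points can be scripted like the merge example above. A sketch (get_timeline_points returns a data frame, so the "id" column is converted with tolist(); the wrapper is illustrative):

```python
def group_all_points(workspaces, workspace_id):
    """Merge every timeline point of the workspace into the last one
    ("grouping" in the UI), and return the merged point."""
    point_ids = workspaces.get_timeline_points(workspace_id)["id"].tolist()
    if len(point_ids) < 2:
        return None  # nothing to merge
    return workspaces.merge(workspace_id, point_ids[:-1], point_ids[-1])
```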

74.5.5 Projects API

Provides methods for managing projects

Function Parameters Return Example Description
create name: must be unique in Foretify Manager
description
default_permission (optional) : NONE by default
Project projects.create(name ="proj_name_21222", default_permission = 'VIEWER') Create a new project
get_by_id id: project ID Project project = projects.get_by_id(ID) Retrieve a project
update id
name
description
default_permission
Project projects.update(id = ID, name = "new_name1", default_permission="OWNER") Update a project
list_all_projects NONE a list of projects projects.list_all_projects() List all projects
list_my_viewable_projects NONE a list of projects projects.list_my_viewable_projects() List all projects for which the user has at least viewer permission

Note: Actions on projects require the user to have a suitable permission on the project. This also goes for interacting with other entities such as test suites and workspaces, which belong to a particular project.
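A minimal sketch of project creation and a permission update, mirroring the examples above (the wrapper names are hypothetical):

```python
def create_shared_project(projects, name):
    """Create a project that every user can at least view."""
    return projects.create(name=name, default_permission="VIEWER")


def promote_default_permission(projects, project_id):
    """Raise the project's default permission to OWNER."""
    return projects.update(id=project_id, default_permission="OWNER")
```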

74.5.6 Tags API

Provides methods for tagging runs so that they can be easily retrieved later.

Tags can be created with a filter, and then applied on runs which match that filter. In addition, tags can also be applied on an arbitrary list of runs, given their IDs.

After a tag has been created, it can be used directly as a condition in other run filters.

Function Parameters Return Example Description
create name: tag name to be created
filter: filter to be associated with the tag
'Tag' object tag = tags.create("my_collision_tag3", workspace["runsFilter"]) Create a tag with a specific runs filter (it isn't applied to runs until apply() is called)
get_by_id id: tag ID 'Tag' object tag = tags.get_by_id(ID) Retrieve a tag
page filt, pagination, orderBy (see Common Parameter Models for details) a Pandas DataFrame of a page of tag entities data_frame = tags.page(filt=someFilter, pagination={"page":0,"size": 50}, orderBy=someOrder) Retrieve a page of tags (by Filter)
delete id: tag ID tags.delete(tag["id"]) Delete the tag
apply id: tag ID tags.apply(tag["id"]) Apply the tag on the runs matching its runs filter
apply_on_ids id: tag ID
run_ids: list of test-run IDs
tags.apply_on_ids(tag["id"], ["run1_id", "run2_id", "run3_id"]) Apply the tag on the runs whose IDs were given

Examples:

Python: Create and apply a tag by a filter
from ftx.shell import client, tags, workspaces, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

ws = workspaces.get_by_id(ws_id)

# apply the tag on the workspace's runs
my_tag_name = "my_workspace_tag"
my_tag = tags.create(my_tag_name, ws['runsFilter'])
tags.apply(my_tag['id'])

# retrieve the runs by the tag name
tagged_runs = test_runs.get(Filter.single("tags CONTAINS_ANY " + my_tag_name))
Python: Create and apply a tag by Test Run IDs
from ftx.shell import client, tags, workspaces, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

ranking = workspaces.rank(ws_id,vplan_path, 10, 100)
ranked_runs = workspaces.get_ranked_runs(ws_id)

# apply the tag on the workspace's ranked runs
my_tag_name = "my_ranked_workspace_tag"
my_tag = tags.create(my_tag_name)
tags.apply_on_ids(my_tag['id'], list(ranked_runs['id']))

# retrieve the runs by the tag name
tagged_runs = test_runs.get(Filter.single("tags CONTAINS_ANY " + my_tag_name))

Notes:

Tag names must not contain a comma (',').

74.5.7 Attributes API

Provides methods for creating and editing attributes for workspace triage.

After an attribute is created, it can be used to edit and set values for a test run in workspaces.

Function Parameters Return Example Description
create display_name
description
attribute_type
possible_values
name (str, optional): The name of the attribute. If not provided, it may be generated based on other parameters
'Attribute' object attribute = attributes.create("custom seed", None, "STRING") Create an attribute
get_by_name name 'Attribute' object attribute = attributes.get_by_name(attribute['name']) Retrieve an attribute
get_all A list of attributes all_attributes = attributes.get_all() Retrieve a list of all attributes
delete name attributes.delete(attribute['name']) Delete an attribute (only an admin user can perform this action)
update name
display_name
description
possible_values
'Attribute' object attributes.update(attribute["name"], "new display name") Updates an attribute
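A sketch of defining an enumerated triage attribute (the positional order follows the parameter list above; the wrapper is illustrative):

```python
def create_enum_attribute(attributes, display_name, values):
    """Create a STRING attribute whose values are restricted to the
    given list, for use when triaging runs in workspaces."""
    return attributes.create(display_name, None, "STRING", values)
```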

74.5.8 Triage views API

Provides methods for creating and editing triage views.

Function Parameters Return Example Description
create workspace_id
name
table_columns
aggregation_functions
aggregation_fields
filter
A triage view object view = triage_views.create(workspace["id"], "my_view", [TriageViewTableColumn.to_attribute_column("status"), TriageViewTableColumn.to_metric_item_column("sut.top", "speed_on_collision")], ["COUNT"], ["status", "mainIssueCategory"]) Create a triage view
get_by_id triage_view_id A triage view object view = triage_views.get_by_id("some_id") Retrieve a triage view by id
page workspace_id : A workspace id
filt : A filter on triage views
pagination: a page request object
orderBy: An order by request object
A triage views Pandas Dataframe triage_views.page(workspace["id"], pagination={"page": 0, "size": 10}) Page triage views in a workspace
delete triage_view_id None triage_views.delete(triage_view["id"]) Delete a triage view
update triage_view_id
name (optional)
table_columns (optional)
aggregation_functions (optional)
filter (optional)
aggregation_fields (optional)
A triage view object triage_views.update(triage_view_id = "some_id", name="new_name", table_columns=[TriageViewTableColumn.to_attribute_column("seed"), TriageViewTableColumn.to_metric_item_column("sut.top", "side_on_collision")]) Update a triage view
import_triage_views_from_zip path: Path to a triage views zip-file
workspace_id: Workspace ID to import into
copy_mode: OVERRIDE, APPEND or REPLACE (default OVERRIDE) (see below)
None triage_views.import_triage_views_from_zip(path = "/my/home/my_view.zip", workspace_id=workspace["id"], copy_mode="REPLACE") Import triage views from a zip-file (previously exported by exports.export_triage_views_as_zip) into another workspace

Note: copy_mode controls how possible conflicts are handled when importing views into a workspace:

  1. OVERRIDE: in case of conflicting views (where an identical name exists), prefer the imported view by overriding the existing one.
  2. APPEND: in case of conflicting views, an error will be returned.
  3. REPLACE: delete all existing views in the workspace, and add the imported ones instead.
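A sketch of creating an aggregated view (the column objects are passed in, since building them requires TriageViewTableColumn from the SDK; the wrapper is illustrative):

```python
def create_count_view(triage_views, workspace_id, name, columns):
    """Create a triage view that COUNTs runs, grouped by status and
    main issue category."""
    return triage_views.create(workspace_id, name, columns,
                               ["COUNT"], ["status", "mainIssueCategory"])
```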

74.5.9 Dispatcher API

Provides services for executing, monitoring and stopping regressions via Dispatcher.

74.5.9.1 Test Run Group Definition

Function Parameters Return Example Description
_build_test_run_group_definition_request name,
environment_settings_id,
frun_files (see note #1 below)
create_test_run_group_definition_request object def_req = dispatcher._build_test_run_group_definition_request("def_name", settings["id"], ["path/to/some/file"]) Build a request needed to create a test run group definition
create_test_run_group_definition request: JSON object with fields: name, environment_settings_id, frun_files. Calling _build_test_run_group_definition_request() is recommended test_run_group_definition trg_def = dispatcher.create_test_run_group_definition(dispatcher._build_test_run_group_definition_request("def_name", settings["id"], ["path/to/some/file"])) Create a new test run group definition
create_test_run_group_definition_v2 name, environment_settings_id, frun_files (optional), local_frun_files (optional, see note #1 below) test_run_group_definition trg_def = dispatcher.create_test_run_group_definition_v2("def_name", settings["id"], ["path/to/some/file"]) Create a new test run group definition
get_test_run_group_definition_by_id id test_run_group_definition trg_def = dispatcher.get_test_run_group_definition_by_id(trg_def_id) Retrieve a test run group definition by ID
get_definition_by_name name test_run_group_definition trg_def = dispatcher.get_definition_by_name(trg_def_name) Retrieve a test run group definition by its name
get_or_create_test_run_group_definition name, environment_settings_id (optional), frun_files (optional, see note #1 below) test_run_group_definition trg_def = dispatcher.get_or_create_test_run_group_definition("existing_name", frun_files=["new_file.csv"]) Create a test run group definition, or update an existing one with the specified name (see note #2 below)
get_or_create_test_run_group_definition_v2 name, environment_settings_id, frun_files (optional), local_frun_files (optional, see note #1 below) test_run_group_definition trg_def = dispatcher.get_or_create_test_run_group_definition_v2("existing_name", frun_files=["frun_file.csv"], local_frun_files=["new_frun_file.csv"]) Create a test run group definition, or update an existing one with the specified name (see note #2 below)
delete_test_run_group_definition id dispatcher.delete_test_run_group_definition(trg_def_id) Delete a test run group definition by ID
get_all_definition_options list of TSR options dispatcher.get_all_definition_options() Retrieve all TSR definition options and their preset values.
get_or_create_definition_option_presets name (string) list of string values dispatcher.get_or_create_definition_option_presets("envVar") Retrieve TSR definition option presets by name (create it if it doesn't exist).
add_definition_option_presets name (string), values (list of strings) list of string values dispatcher.add_definition_option_presets("envVar", ['CARLA_HOME', 'FTX_CARLA_SERVER_EXE']) Add presets to a TSR definition option by name (create it if it doesn't exist).
reset_definition_option_presets name (string) dispatcher.reset_definition_option_presets("envVar") Remove all presets from a TSR definition option by name.
delete_definition_option name (string) dispatcher.delete_definition_option("envVar") Delete a TSR definition option by name.

Notes:

  1. frun_files or remote_frun_files is a list of Frun file paths, which are expected to exist in the Foretify image or in the "shared" Dispatcher directory (i.e. readable by the Foretify pods).
  2. Test Run Group Definition names need to be unique, so usage of get_or_create_test_run_group_definition is recommended.

74.5.9.2 Dispatcher Environment Settings

Function Parameters Return Example Description
build_simple_environment_settings_from_image env_name: Name for the environment settings,
image_name: Path to Foretify image, which can be found in the registry,
registry: registry for the image, available to dispatcher,
options (optional): special options (flags) to transfer to Foretify. Represented in JSON format (key/value pairs),
environment_variables (optional): Environment variables, to be defined on the container, before running foretify. Represented in json format (key/value pairs),
setupFiles (optional): Path to setup files, to be executed on the container, before running foretify
create_environment_settings_request dispatcher.build_simple_environment_settings_from_image(env_name="my_env", image_name="path/to/image", environment_variables={"FTX_PACKAGES": "/ftx/packages"}) Build a request, needed to create new environment settings, using a few common fields
create_environment_settings create_environment_settings_request: JSON object with the fields:
"name": A unique name for the environment settings
"settings": a JSON string of the environment settings
environment_settings env = dispatcher.create_environment_settings(dispatcher.build_simple_environment_settings_from_image(env_name, image)) Creates new environment settings
get_environment_settings id environment_settings env = dispatcher.get_environment_settings(env_id) Retrieve environment settings by ID
get_environment_settings_by_name name environment_settings env = dispatcher.get_environment_settings_by_name(env_name) Retrieve environment settings by name
update_environment_settings id, name, settings: a JSON string of the environment settings environment_settings new_env = dispatcher.update_environment_settings(env['id'], env['name'] + '_v2', '{}') Update environment settings by ID
delete_environment_settings id dispatcher.delete_environment_settings(env_id) Delete environment settings by ID

74.5.9.3 Launching & Monitoring Regressions

Function Parameters Return Example Description
launch_regression definition_id: test run group definition ID
regression_name (optional): name for the new group
project_id: target project ID for the new group
environment_variables: a list of environment variables to set (strings in the format of key=value)
job_labels: a list of labels to pass (strings in the format of key=value)
start_test_run_groups_response: JSON object with one field: testRunGroupIds, which is a list of IDs of the created groups create_response = dispatcher.launch_regression(definition["id"],"regression_name","project_id") Launches a test run group definition via Dispatcher and creates a test run group to contain the executed runs
stop_regression trg_id: Test Run Group ID none dispatcher.stop_regression("some_id") Stops a regression. This will stop all dispatcher jobs associated with the regression
stop_regressions trg_ids: A list of test run group IDs none dispatcher.stop_regressions(["trg_1","trg_2"]) Stops multiple regressions
get_regression_data group_id: Test Run Group ID regression_data: An overview of the jobs that make up the regression, with a counter for each job status dispatcher.get_regression_data("some_group_id") Used for monitoring the state of the jobs that make up the regression.
quick_rerun_test_run test_run_id: Test Run ID rerun_response: An object with a "reruns" field, which is a list of runIds, paired with their new dispatcher job IDs dispatcher.quick_rerun_test_run("some_run_id") Rerun a test run via dispatcher, by its ID. A new run will be run by dispatcher, with the same settings and OSC files as the original run, and appended to the original test run group
quick_rerun_test_runs test_run_ids: List of Test Run IDs rerun_response dispatcher.quick_rerun_test_runs(["some_run_id_1","some_run_id_2"]) Rerun multiple runs
rerun_test_runs request: JSON object with the fields:
testRunIds: List of Test Run IDs
environmentSettingsId (optional): new environment settings object, to run the new run with
newGroupName (optional): If specified, a new group will be created for the new run. Otherwise, the run will be appended to the original group
rerun_response dispatcher.rerun_test_runs({"testRunIds": ["id_1"], "environmentSettingsId": "env_id", "newGroupName": "my new group"}) Rerun a test run via dispatcher, possibly with new environment settings, with an option of creating a new test run group for it

Example:

Python: Launch a basic regression
from ftx.shell import client, dispatcher

image = "some/image/path/in/example/registry"
registry = "example_registry"
def_name = "example_definition_name"
env_name = "example_environment_name"
frun_files = ["file_1.csv", "file_2.csv"]

client.login(host=host, user=user, port=port, https=https)

settings_req = dispatcher.build_simple_environment_settings_from_image(env_name=env_name, image_name=image, registry=registry)
settings = dispatcher.create_environment_settings(settings_req)
def_req = dispatcher._build_test_run_group_definition_request(def_name, settings["id"], frun_files)
definition = dispatcher.create_test_run_group_definition(def_req)
create_response = dispatcher.launch_regression(definition["id"], regression_name="my_regression", project_id=project_id)

print("Launch completed, test run group id is: " + create_response['testRunGroupIds'][0])
User Role and License Consumption Comments: For the specific purpose of allowing users to launch a regression and monitor its progress without consuming an FTLX_DEVELOPER license, the following methods are also enabled for users with the DATA_UPLOADER role:

dispatcher.launch_regression

dispatcher.stop_regression

dispatcher.get_regression_data

74.5.10 Exports API

Provides services for exporting data from Foretify Manager

Function Parameters Return Example Description
export_run test_run_id
dest_path: destination path for downloaded ZIP
Downloaded file exports.export_run(id, '/tmp') Download a single run as a ZIP file
export_regression_as_zip test_run_group_id
dest_path: destination path for downloaded ZIP
extra_data: include also additional files (False by default, see notes below)
file_filter: only fetch files that match the wildcard pattern (extra_data must be set, default is all files)
Downloaded file exports.export_regression_as_zip(id, '/tmp', False) Download all runs of a Test Run Group (see notes below)
export_triage_rules_as_zip workspace_id
triage_rule_ids
dest_path: destination path for downloaded ZIP
Downloaded file path exports.export_triage_rules_as_zip(workspace_src["id"], [triage_rule_1["id"],triage_rule_2["id"]], "/my/home/") Export triage rules, to be used later for import
export_triage_views_as_zip triage_view_ids
dest_path: destination path for downloaded ZIP
Downloaded file path exports.export_triage_views_as_zip([triage_view_1["id"],triage_view_2["id"]], "/my/home/") Export triage views, to be used later for import
export_rules_directory_as_zip workspace_id
dest_path: destination path for downloaded ZIP
Downloaded file path exports.export_rules_directory_as_zip(workspace_src["id"], "/my/home/") Export rules directory of a workspace as a zip file
export_workspace_vplan_results workspace_id
dest_path: destination path for downloaded Json
Downloaded file path exports.export_workspace_vplan_results(workspace_src["id"], "/my/home/") Export workspace results as a JSON file
export_workspace_vplan workspace_id
dest_path: destination path for downloaded Json
Downloaded file path exports.export_workspace_vplan(workspace_src["id"], "/my/home/") Export a workspace VPlan as a Json file
Notes:

Test runs in the database don't include additional data such as Visualization, trace data, logs, etc. That data is retrieved directly from the filesystem, as test runs retain the original path from where they were uploaded.

Therefore, export_regression_as_zip(id, dest, extra_data=False) retrieves only the runs' PROTOBUF representation (metadata, bucket hits, etc.) and their coverage data (i.e., the metric model), so the output ZIP file is relatively small (expect a few MB for every 1000 runs).

On the other hand, export_regression_as_zip(id, dest, extra_data=True) retrieves the entire run directory (along with any additional data contained in it), so these files must still exist in the original path recorded in the test run's runDir field. The output ZIP file in this case might be very large (expect a few GB for every 1000 runs).

Examples:

Python: Download a Test Run Group
from ftx.shell import client, exports, test_runs
from ftx.model.search import Filter

client.login(host=host, user=user, port=port, https=https)

# download the entire test run group

exports.export_regression_as_zip(trg_id, "/tmp")

# download one of the group's runs separately
runs = test_runs.page(Filter.single("testRunGroupId EQ " + trg_id))
exports.export_run(runs['id'][0], "/tmp")
Python: Download a Test Run Group using extra_data and file_filter
from ftx.shell import client, exports, test_runs

client.login(host=host, user=user, port=port, https=https)

# download the entire test run group, with only the json files
exports.export_regression_as_zip(trg_id, "/tmp", extra_data=True, file_filter='*.json')

74.5.11 Tests API

Provides functions to manage tests and their lifecycle.

74.5.11.1 Test Types (tests)

Value Description
SR_EXACT Exact smart replay
SR_BEHAVIORAL Behavioral smart replay
SR_ADAPTIVE Adaptive smart replay
CONCRETE Concrete scenario test
Function Parameters Return Example Description
get_by_id test_id (str) dict (test) tests.get_by_id("test_id") Retrieve a test by its ID
create name (str), type (TestType), intent (optional), original_run_id (str, optional): ID of the original run this test is based on. dict (created test) tests.create("My Test", tests.TestType.SR_EXACT, intent="my test intent", original_run_id="run_id") Create a new test
delete test_id (str) None tests.delete("test_id") Delete a test by ID
add_to_test_suites test_id (str), test_suite_ids (list(str)) None tests.add_to_test_suites("test_id", ["ts1","ts2"]) Add a test to one or more test suites
remove_from_test_suites test_id (str), test_suite_ids (list(str)) None tests.remove_from_test_suites("test_id", ["ts1"]) Remove a test from one or more test suites
upload_file test_id (str), file_path (path), is_top_file (bool, default False) str tests.upload_file("test_id", "/path/to/file.osc", is_top_file=True) Upload a file to a test (optionally mark as top file)
download_files test_id (str), dest_path (str, optional) : Path to a destination directory None tests.download_files("test_id", dest_path="/tmp") Download all files associated with a test to a destination directory
page page (int, default 0), size (int, default 50), filter (optional dict), order_by (optional list), only_my (bool, default False), detailed (bool, default False) dict (page) tests.page(page=0, size=50, filter=my_filter, order_by=my_order) Retrieve a page of tests with optional filtering and sorting
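A sketch tying the calls together: create a test, upload its top scenario file, and attach it to test suites (the wrapper is illustrative; TestType and the individual calls come from the table above):

```python
def register_concrete_test(tests, name, top_file_path, suite_ids):
    """Create a CONCRETE test, mark top_file_path as its top file, and
    add it to the given test suites."""
    test = tests.create(name, tests.TestType.CONCRETE)
    tests.upload_file(test["id"], top_file_path, is_top_file=True)
    tests.add_to_test_suites(test["id"], suite_ids)
    return test
```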

74.5.12 Test Suites API

Provides functions to manage test suites and their contained tests.

Function Parameters Return Example Description
create name (str), description (optional) dict (test suite) test_suites.create("suite A", description="desc") Create a new test suite
get_by_id id (str) dict (test suite) test_suites.get_by_id("ts_id") Retrieve a test suite by ID
page page (int, default 0), size (int, default 50), filter (optional dict), order_by (optional list), only_my (bool, default False), detailed (bool, default False) dict (page) test_suites.page(page=0, size=50, filter=my_filter, order_by=my_order) Retrieve a page of test suites with optional filtering and sorting
get_tests_in_test_suites id (str) dict (page of tests) test_suites.get_tests_in_test_suites("ts_id") Retrieve tests contained in a test suite
update id (str), name (optional), description (optional) dict (updated suite) test_suites.update("ts_id", name="new name", description="new_description") Update a test suite
delete id (str) None test_suites.delete("ts_id") Delete a test suite
batch_delete ids (list(str)) None test_suites.batch_delete(["ts1","ts2"]) Delete multiple test suites
override_tests test_suite_id (str), test_ids (list(str) or list(dict)), default_count (int, default 1), default_seed (int, default 0), default_active (bool, default True) None test_suites.override_tests("ts_id", ["t1","t2"], default_count=2, default_seed=7) Replace all tests in a suite with the provided list; supports simple (IDs) and advanced (per-test config) modes
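
override_tests supports an advanced mode that takes per-test configuration dicts instead of plain IDs. The helper below builds such a list; the key names ('id', 'count', 'seed', 'active') are hypothetical, inferred from the parameter defaults above — confirm them against your server before use.

```python
def build_test_configs(test_ids, overrides=None,
                       default_count=1, default_seed=0, default_active=True):
    """Build a per-test configuration list for override_tests advanced mode.

    'overrides' maps test ID -> dict of per-test settings; unlisted tests
    get the defaults. The key names are assumptions, not confirmed here.
    """
    overrides = overrides or {}
    configs = []
    for test_id in test_ids:
        cfg = {"id": test_id, "count": default_count,
               "seed": default_seed, "active": default_active}
        cfg.update(overrides.get(test_id, {}))
        configs.append(cfg)
    return configs

# Run t1 five times, disable t3, leave t2 at the defaults:
configs = build_test_configs(
    ["t1", "t2", "t3"],
    overrides={"t1": {"count": 5}, "t3": {"active": False}},
)
# test_suites.override_tests("ts_id", configs)  # requires a live session
```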

74.5.13 Flow Definitions API

Provides functions to create and manage flow definitions for extract, extract-and-run, and run-tests flows.

Function Parameters Return Example Description
create_extract_flow_definition name (str), description (optional), default_permission (optional), smart_replay_configuration_id (optional) dict (flow definition) flow_definitions.create_extract_flow_definition("My Extract Flow", description="desc", smart_replay_configuration_id="sr_config_1") Create a new extract flow definition
create_extract_and_run_flow_definition name (str), description (optional), environment_settings_id (optional), labels (optional list), run_params (optional dict), smart_replay_configuration_id (optional) dict (flow definition) flow_definitions.create_extract_and_run_flow_definition("My SR Flow", environment_settings_id="env1", labels=["nightly"], smart_replay_configuration_id="sr_config_1") Create a new extract-and-run flow definition
create_run_tests_flow_definition name (str), description (optional), default_permission (optional), tests (optional list), test_suites (optional list) dict (flow definition) flow_definitions.create_run_tests_flow_definition("Run Tests", tests=["t1"], test_suites=["ts1"]) Create a new run-tests flow definition (optionally preconfigured with tests/test suites)
get_by_id flow_definition_id (str) dict (flow definition) flow_definitions.get_by_id("fd_id") Retrieve a flow definition by ID
page page (int, default 0), size (int, default 50), filter (optional dict), order_by (optional list), only_my (bool, default False), detailed (bool, default False) dict (page) flow_definitions.page(page=0, size=50, filter=my_filter, order_by=my_order) Retrieve a page of flow definitions with optional filtering and sorting
update flow_definition_id (str), name (optional), description (optional), default_permission (optional) dict (updated flow definition) flow_definitions.update("fd_id", name="new name", description="new description") Update a flow definition
delete flow_definition_id (str) None flow_definitions.delete("fd_id") Delete a flow definition
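
A flow definition is created once and can then be executed many times. The sketch below wires create_run_tests_flow_definition to execute_run_tests; the SDK modules are passed in as parameters so the wiring can be exercised without a live server. With a real session you would pass ftx.shell.flow_definitions and ftx.shell.flow_executions after client.login().

```python
def create_and_run(flow_definitions, flow_executions, project_id, name,
                   tests=None, test_suites=None, run_params=None):
    """Create a run-tests flow definition, then execute it once.

    'flow_definitions' and 'flow_executions' are the SDK modules (or any
    stand-ins exposing the same functions); passing them in keeps the
    wiring testable without a server.
    """
    definition = flow_definitions.create_run_tests_flow_definition(
        name, tests=tests or [], test_suites=test_suites or [])
    return flow_executions.execute_run_tests(
        project_id, definition["id"], run_params=run_params)

# Live usage (after client.login):
# from ftx.shell import client, flow_definitions, flow_executions
# client.login(host=host, user=user, port=port, https=https)
# execution = create_and_run(flow_definitions, flow_executions, project_id,
#                            "Nightly run", test_suites=["ts1"],
#                            run_params={"count": 10})
```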

74.5.14 Flow Executions API

Provides functions to execute flows for extracting scenarios and running tests, and to manage executions.

Function Parameters Return Example Description
execute_extract_flow project_id (str), flow_definition_id (str), test_run_id (str), extract_time_start_ms (int), extract_time_end_ms (int), actors (list(dict)), warmup_ms (int), cooldown_ms (int), scenario_name (str), osc_config (str), test_suite_id (optional str), test_type (TestType, default SR_EXACT), comment (optional str), warmup_limitation_group_id (optional str), warmup_limitation_auto_fix (bool, default False), environment_settings (optional dict), labels (optional list(str)) dict (execution) flow_executions.execute_extract_flow(project_id, fd_id, run_id, 0, 10000, actors, 1000, 500, "scenario", "osc_config") Execute an extract flow to generate smart replay tests
execute_run_tests project_id (str), flow_definition_id (str), run_params (optional dict), test_suites_ids (optional list(str)), environment_settings (optional dict), environment_settings_id (optional str), labels (optional list(str)) dict (execution) flow_executions.execute_run_tests(project_id, fd_id, run_params={"count": 10}, test_suites_ids=["ts1"], environment_settings_id=env_settings["id"], labels=["l1","l2"]) Execute a run-tests flow
execute_extract_and_run Same as execute_extract_flow + run_params (dict) dict (execution) flow_executions.execute_extract_and_run(project_id, fd_id, run_id, 0, 10000, actors, 1000, 500, "scenario", "osc_cfg", run_params={"count":5}) Execute an extract-and-run flow to generate and immediately run tests
get_by_id execution_id (str) dict (execution) flow_executions.get_by_id("exec_id") Retrieve a flow execution by ID
delete execution_id (str) None flow_executions.delete("exec_id") Delete a flow execution by ID

Example:

Execute run tests
import sys
import time

from ftx.shell import client, flow_executions, dispatcher

client.login(host=host, user=user, port=port, https=https)

poll_interval = 5

env_settings = dispatcher.create_environment_settings_from_json_file(env_settings_path)
# Execute RUN_TESTS flow
created = flow_executions.execute_run_tests(
    project_id=args.project_id,
    flow_definition_id=args.flow_definition_id,
    environment_settings_id=env_settings['id']
)

execution_id = created.get('id')

# Poll until terminal status
terminal_statuses = {"COMPLETED", "FAILED"}
last_status = None

while True:
    details = flow_executions.get_by_id(execution_id)
    status = details.get('status')

    if status != last_status:
        print(f"Execution {execution_id} status: {status}")
        last_status = status

    if status in terminal_statuses:
        if status == 'COMPLETED':
            print("Flow execution completed successfully")
            sys.exit(0)
        else:
            print("Flow execution failed")
            sys.exit(4)

    time.sleep(poll_interval)
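
The polling pattern in the example above can be factored into a reusable helper that also enforces a timeout. This is a sketch built only on the get_by_id call shown in the table; the status names follow the terminal_statuses set used in the example.

```python
import time

def wait_for_terminal(get_status, poll_interval=5, timeout=600,
                      terminal_statuses=("COMPLETED", "FAILED")):
    """Poll get_status() until it returns a terminal status or the timeout expires.

    'get_status' is any zero-argument callable, e.g.
    lambda: flow_executions.get_by_id(execution_id)['status'].
    Returns the final status, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal_statuses:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"execution still {status!r} after {timeout}s")
        time.sleep(poll_interval)
```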

74.5.15 Assets API

Provides functions to manage assets and their associated files.

Function Parameters Return Example Description
get_by_id asset_id (str) dict (asset) assets.get_by_id("asset_id") Retrieve an asset by its ID
create name (str), description (optional) dict (created asset) assets.create("My Asset", description="desc") Create a new asset
update asset_id (str), name (optional), description (optional) dict (updated asset) assets.update("asset_id", name="new name", description="new_description") Update an existing asset
delete asset_id (str) None assets.delete("asset_id") Delete an asset by its ID
batch_delete asset_ids (list(str)) None assets.batch_delete(["asset1","asset2"]) Delete multiple assets
upload_file asset_id (str), file_path (str or Path) None assets.upload_file("asset_id", "/path/to/file.zip") Upload a file to an asset
download_files asset_id (str), dest_path (str) None assets.download_files("asset_id", dest_path="/tmp") Download all files associated with an asset to a destination directory
page page (int, default 0), size (int, default 50), filter (optional dict), order_by (optional list), detailed (bool, default False) dict (page) assets.page(page=0, size=50, filter=my_filter, order_by=my_order) Retrieve a page of assets with optional filtering and sorting
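
upload_file uploads a single file, so pushing a whole directory needs a loop. A minimal sketch, assuming the SDK module (or any stand-in exposing upload_file) is passed in; only the directory walk is exercised here, the actual upload requires a live session.

```python
from pathlib import Path

def collect_files(directory, pattern="*"):
    """Return a sorted list of regular files under 'directory' matching 'pattern'."""
    return sorted(p for p in Path(directory).rglob(pattern) if p.is_file())

def upload_directory(assets, asset_id, directory, pattern="*"):
    """Upload every matching file under 'directory' to the given asset.

    'assets' is the SDK module (or any stand-in exposing upload_file).
    """
    files = collect_files(directory, pattern)
    for path in files:
        assets.upload_file(asset_id, path)
    return files

# Live usage:
# from ftx.shell import client, assets
# client.login(host=host, user=user, port=port, https=https)
# upload_directory(assets, "asset_id", "/path/to/artifacts", pattern="*.zip")
```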

74.6 Early access

74.6.1 Test Runs API

Provides functions for uploading runs to the Foretify Manager server and for analyzing them.

Function Parameters Example Description
append_metrics_by_ids append_metrics_requests (list(AppendMetricByIdRequest)) test_runs.append_metrics_by_ids(append_metrics_requests=[test_runs.AppendMetricByIdRequest(run_id=test_run['id'], metrics=[test_runs.MetricSample()])]) Execute the provided AppendMetricByIdRequests
append_metrics_by_filter filter (dict), metrics (list(MetricSample)) test_runs.append_metrics_by_filter(filter=some_filter, metrics=[test_runs.MetricSample()]) Add coverage samples to existing metrics for all runs matching the provided filter
append_intervals_by_ids append_intervals_requests (list(AppendIntervalRequest)), skip_model_validation (bool, default False): accept coverage intervals that don't exist in the metric model test_runs.append_intervals_by_ids(append_intervals_requests=[test_runs.AppendIntervalRequest(run_id=test_run['id'], intervals=[intervals.CreateInterval()])]) Execute the provided AppendIntervalRequests

74.6.1.1 Test Runs API classes

Class Attributes
MetricSample scenario_name (str): Name of the scenario or structure
group_name (str): Name of the group the metric sample belongs to
item_name (str): Name of the metric item
bucket_name (str): Bucket or category of the metric item
value: Value associated with the metric item
AppendMetricByIdRequest run_id (str): The ID of the test run
metrics (list): A list of MetricSample objects to be appended
AppendIntervalRequest run_id (str): The ID of the test run
intervals (list): A list of CreateInterval objects to be appended

Notes

  • You can append metric samples to any run, whether or not the metric already exists in that run.
  • Whether the samples show up in the user interface depends on the metrics model.
  • If a "dummy" run with the relevant coverage items loaded is included in the metrics model, the appended samples are visible even when the metric does not exist in the real run itself.
  • To determine what to use as bucket_name when creating a test_runs.MetricSample() object, inspect the corresponding bucket in the metricModel of the test suite results, or in the VPlan tree of the workspace if one is defined. This can be done with metric_models.getById() or workspaces.getById()['sections'].
  • Warning: The append_metrics_by_ids() and append_metrics_by_filter() methods are deprecated - use append_intervals_by_ids() instead.
  • Warning: append_intervals_by_ids() throws an error when uploading a coverage interval that does not exist in the metric model. To skip this validation, set skip_model_validation=True.

Examples:

Add coverage metrics to runs
from ftx.shell import client, test_runs
from ftx.model.search import Filter, FilterContainer

client.login(host=host, user=user, port=port, https=https)

metric_1 = test_runs.MetricSample(
    scenario_name="some_scenario",
    group_name="some_group",
    item_name="some_item",
    bucket_name="some_bucket",
    value="some_value",
)

metric_2 = test_runs.MetricSample(
    scenario_name="another_some_scenario",
    group_name="another_some_group",
    item_name="another_some_item",
    bucket_name="another_some_bucket",
    value="another_some_value",
)

append_metric_request_1 = test_runs.AppendMetricByIdRequest(run_id=run_id_1, metrics=[metric_1, metric_2])
append_metric_request_2 = test_runs.AppendMetricByIdRequest(run_id=run_id_2, metrics=[metric_2])
# add metrics to specific run IDs
test_runs.append_metrics_by_ids(append_metrics_requests=[append_metric_request_1, append_metric_request_2])

# add metrics to multiple runs by regression ID
test_runs.append_metrics_by_filter(filter=FilterContainer.of(Filter.any("testRunGroupId EQ " + trg_id)), metrics=[metric_1, metric_2])

Add intervals to runs
from ftx.shell import test_runs, client
from ftx.shell.intervals import *

client.login(host=host, user=user, port=port, https=https)

scenario_interval = CreateScenarioInterval(struct_name="scenario_struct", label="some_label")
coverage_item = CreateCoverageItem(item_name="dut_distance_travelled", bucket="12", value="12")
coverage_interval = CreateCoverageInterval(struct_name="top.info", group="end", items=[coverage_item])
watcher_interval = CreateWatcherInterval(struct_name="watcher_struct", watcher_name="watcher_name",
                                         watcher_type=WatcherType.WATCHER, label="some_label")
actor_assignment = ActorAssignment(name="assignment_name", value="value")
match_interval = CreateMatchInterval(struct_name="match_struct", actor_assignments=[actor_assignment],
                                     label="some_label")
anomaly_interval = CreateAnomalyInterval(struct_name="anomaly_struct", actor_assignments=[actor_assignment],
                                         label="some_label")
global_modifier_interval = CreateGlobalModifierInterval(struct_name="global_modifier_struct")

create_interval1 = CreateInterval(start_time=0, end_time=1, actor_id=1, scenario_data=scenario_interval)
create_interval2 = CreateInterval(start_time=1, end_time=2, actor_id=2, coverage_data=coverage_interval)
create_interval3 = CreateInterval(start_time=2, end_time=3, actor_id=3, watcher_data=watcher_interval,
                                  child_intervals=[create_interval1, create_interval2])
create_interval4 = CreateInterval(start_time=3, end_time=4, actor_id=4, match_data=match_interval,
                                  child_intervals=[create_interval3])
create_interval5 = CreateInterval(start_time=4, end_time=5, actor_id=5, anomaly_data=anomaly_interval,
                                  child_intervals=[create_interval4])
create_interval6 = CreateInterval(start_time=5, end_time=6, actor_id=6,
                                  global_modifier_data=global_modifier_interval)

append_interval_by_id_1 = test_runs.AppendIntervalRequest(run_id_1, [create_interval5])
append_interval_by_id_2 = test_runs.AppendIntervalRequest(run_id_2, [create_interval6])

test_runs.append_intervals_by_ids([append_interval_by_id_1, append_interval_by_id_2])

74.6.1.2 Intervals API classes

74.6.1.2.1 CreateInterval
Attribute Description
parent_id Identification number of the parent interval to which this interval belongs
start_time Start time of the interval
end_time End time of the interval
actor_id Identification number of the actor involved in the interval
child_intervals List of CreateInterval objects to be created under this interval as children
scenario_data Create a scenario interval if this interval is a scenario interval
coverage_data Create a coverage interval if this interval is a coverage interval
watcher_data Create a watcher interval if this interval is a watcher interval
match_data Create a match interval if this interval is a match interval
anomaly_data Create an anomaly interval if this interval is an anomaly interval
global_modifier_data Create a global modifier interval if this interval is a global modifier interval
74.6.1.2.2 CreateScenarioInterval
Attribute Description
struct_name Struct of the scenario
label Label of the scenario
74.6.1.2.3 CreateCoverageInterval
Attribute Description
struct_name Struct of the coverage
group Group of the coverage
duplicates Duplicates of the coverage group
items List of CreateCoverageItem objects
74.6.1.2.4 CreateCoverageItem
Attribute Description
item_name Name of the metric item
bucket Bucket or category of the metric item
value Value associated with the metric item
74.6.1.2.5 CreateWatcherInterval
Attribute Description
struct_name Struct of the watcher
label Label of the watcher
watcher_name Name of the watcher
watcher_type The type of the viewed watcher (WATCHER, CHECKER)
issueId ID of the matching issue if exists
74.6.1.2.6 CreateMatchInterval
Attribute Description
struct_name Struct of the match
label Label of the match
actor_assignments List of ActorAssignment objects
74.6.1.2.7 ActorAssignment
Attribute Description
name Name of the actor assignment
value Value of the actor assignment
74.6.1.2.8 CreateAnomalyInterval
Attribute Description
struct_name Struct of the anomaly
label Label of the anomaly
actor_assignments List of ActorAssignment objects
74.6.1.2.9 CreateGlobalModifierInterval
Attribute Description
struct_name Struct of the modifier
label Label of the modifier

74.6.2 Test

A Test is a managed test artifact in Foretify Manager. It can be generated from flows, uploaded, added to test suites, and executed.

Attribute Description
id Test ID
name Test name
intent A description of the test's intent
createdAt Time of creation
originalRunId Optional source Test Run ID from which the test originated (for example, Smart Replay)
ownerId Owner's user ID
type Test type (SR_EXACT, SR_BEHAVIORAL, SR_ADAPTIVE, etc.)
createdBy Creator's username
defaultPermissionLevel Default permission level on the test
userPermissionLevel Current user's permission level on the test

74.6.3 Test Suite

A Test Suite groups multiple Tests and optionally stores per-test execution configuration.

Attribute Description
id Test Suite ID
name Test Suite name
description Optional description
createdBy Creator's username
createdAt Time of creation
modifiedBy Username of the last user who modified the suite (optional)
modifiedAt Time of last modification (optional)
ownerId Owner's user ID
defaultPermissionLevel Default permission level on the test suite
userPermissionLevel Current user's permission level on the test suite
testsInTestSuite Optional list of tests with their per-test configuration

74.6.4 Flow Definition

A Flow Definition describes an executable flow for scenario extraction and/or running tests. It may reference environment settings and labels.

Attribute Description
id Flow Definition ID
name Flow Definition name
type Flow type (e.g., EXTRACT, EXTRACT_AND_RUN, RUN_TESTS)
default Whether this is the default flow definition of its type
ownerId Owner's user ID
description Optional description
createdBy Creator's username
createdAt Time of creation
modifiedBy Username of the last user who modified the flow (optional)
modifiedAt Time of last modification (optional)
environmentSettingsId Optional ID of Dispatcher Environment Settings to use when executing
runTestsFlowData Optional run-tests flow parameters (when the flow type is RUN_TESTS)
labels Labels associated with the flow definition
userPermissionLevel Current user's permission level on the flow definition
defaultPermissionLevel Default permission level on the flow definition
smartReplayConfigurationId Optional ID of Smart Replay configuration to use when executing

74.6.5 Flow Execution

A Flow Execution is a single execution instance of a Flow Definition (e.g., extract, extract-and-run, run-tests), with progress, tasks and status.

Attribute Description
id Flow Execution ID
name Flow Execution name
flowType Flow type (e.g., EXTRACT, EXTRACT_AND_RUN, RUN_TESTS)
createdAt Time of creation
createdBy Creator's username
ownerId Owner's user ID
status Execution status (e.g., 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED')
progress Execution progress (percentage)
tasks List of execution tasks
launchRequest The launch request snapshot used to start the execution
detailedException Detailed server exception if the execution encountered an error (optional)