
88. Detailed directory structure

The Evaluation Solution Reference (ESR) Kit is organized into several directories to structure the end-to-end evaluation flow. The key directories include:

  • drive_logs: This folder contains the input logs used by the evaluation pipeline. You can change the pointer to this directory if needed.
  • work_dir: This directory stores the output data generated during the evaluation process. Like drive_logs, the pointer can be modified by the user.

Each task has a main processing script that wraps the FTX core tool, except for the parser task, which wraps a custom converter script. These scripts simplify the user interface by reducing the number of parameters the user needs to set.

Note

The directory structure is designed with flexibility in mind. The ESR Kit folder can be copied and renamed to tailor it to a specific custom project. If you decide to split the flow into different tasks or move them to separate NFS/storage locations, you can manage the folder pointers in the set_esr_kit_env.sh script to ensure the flow continues to function properly.

.
├── README.md
├── drive_logs/
├── evaluators/
│   └── eval_scenarios/
├── doc/
│   ├── esr_flow_configuration_structure.png
│   └── esr_flow_diagram.png
├── flow/
│   ├── 0_convert/
│   │   ├── config/
│   │   │   └── <converter_config>.yaml
│   │   ├── object_lists/
│   │   ├── logs/
│   │   └── scripts/
│   ├── 1_preprocess/
│   │   ├── config/
│   │   │   └── <preprocess_roi_config>.yaml
│   │   ├── roi/
│   │   └── logs/
│   ├── 2_ingestion/
│   │   ├── config/
│   │   │   └── <ingestion_config>.yaml
│   │   ├── runs/
│   │   └── logs/
│   ├── 3_matching/
│   │   ├── config/
│   │   │   └── <matching_config>.yaml
│   │   ├── compiled/
│   │   ├── matches/
│   │   └── logs/
│   ├── 4_analysis/
│   │   ├── config/
│   │   │   └── <analysis_tool_config>.yaml
│   │   ├── reports/
│   │   ├── notebooks/
│   │   └── logs/
│   ├── config/
│   │   └── <pipeline_general_config>.yaml
│   ├── eval_config.bash
│   ├── evaluate.sh
│   ├── run_eval.sh
│   ├── tests/
│   └── upload_esr_runs.sh
└── set_esr_kit_env.sh
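The folder pointers managed by set_esr_kit_env.sh might look like the sketch below. FTX_ESR_KIT_HOME and ESR_KIT_WORK_DIR are referenced elsewhere in this document; the drive-logs variable name and the default locations are illustrative assumptions, not the actual script contents.

```shell
# Sketch of pointer variables in set_esr_kit_env.sh (names partly assumed).
# FTX_ESR_KIT_HOME / ESR_KIT_WORK_DIR appear later in this document;
# ESR_KIT_DRIVE_LOGS_DIR is a hypothetical name for the drive_logs pointer.
export FTX_ESR_KIT_HOME="${FTX_ESR_KIT_HOME:-$HOME/esr_kit}"
export ESR_KIT_WORK_DIR="${ESR_KIT_WORK_DIR:-$FTX_ESR_KIT_HOME/work_dir}"
export ESR_KIT_DRIVE_LOGS_DIR="${ESR_KIT_DRIVE_LOGS_DIR:-$FTX_ESR_KIT_HOME/drive_logs}"

echo "work dir: $ESR_KIT_WORK_DIR"
```

Redirecting the flow to a different storage location is then just a matter of overriding these variables before sourcing the script.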

88.1 0_convert

The conversion task processes raw data formats and converts them into object lists (.pb files) that can be used in subsequent preprocessing and evaluation tasks. This task supports multiple input formats including generative runs directories and MCAP files, providing a unified entry point for various data sources.

The example script for converting generative runs is convert_generative_runs.sh:

./convert_generative_runs.sh \
    --src <path to source directory> \
    --dest <path to destination directory>

88.1.1 0_convert parameters

| Parameter | Description |
| --- | --- |
| `--src` | Path to the source directory containing generative run subdirectories to be converted |
| `--dest` | (Optional) Path to the destination directory for object lists. Defaults to `$ESR_KIT_WORK_DIR/0_convert/object_lists` |
| `-h, --help` | Display usage information |

88.1.2 0_convert features

  • Batch Processing: Automatically processes all subdirectories within the source directory
  • Error Handling: Robust validation with colored output for success/failure status
  • Progress Tracking: Reports conversion success/failure for each subdirectory
  • Partial Success Support: Continues processing even if some conversions fail
  • Automatic Naming: Generated object lists follow consistent naming convention (subdirectory_name_object_list.pb)
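The batch-processing and naming behavior above can be sketched as a minimal shell loop. This is an illustration only, not the actual convert_generative_runs.sh implementation; the demo creates its own temporary source tree and skips the real converter call.

```shell
# Minimal sketch of the batch loop: process every subdirectory under --src
# and name each output <subdirectory_name>_object_list.pb (demo data only).
src="$(mktemp -d)"    # stand-in for --src
dest="$(mktemp -d)"   # stand-in for --dest
mkdir -p "$src/run_a" "$src/run_b"

for run in "$src"/*/; do
    name="$(basename "$run")"
    # A real converter invocation would go here; we only create the target.
    touch "$dest/${name}_object_list.pb"
    echo "converted $name -> ${name}_object_list.pb"
done
```

Because each iteration is independent, a failed conversion can be logged and skipped without aborting the loop, which is how the partial-success behavior described above can be achieved.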

88.1.3 Integrating custom converters

Each custom converter script requires a dedicated implementation and sanity testing prior to integration. The following converters are currently integrated; these examples can be used as a reference for integrating other converters:

  1. convert_generative_runs.sh: The script described above; takes Foretify runs as input and converts them to object lists.
  2. convert_mcap.sh: Converts the custom-MCAP data format to object lists.
  3. convert_pandaset.sh: Converts the Pandaset open-source raw data to object lists.
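A new converter wrapper can start from a skeleton like the one below. This is a hypothetical sketch: the argument handling mirrors the interface of the converters above, and the real conversion call is left as a placeholder.

```shell
# Hypothetical skeleton for a new converter wrapper's argument handling
# (interface mirrors convert_generative_runs.sh; the actual conversion
# call is left as a placeholder).
convert_custom() {
    local src="" dest="${ESR_KIT_WORK_DIR:-/tmp}/0_convert/object_lists"
    while [ $# -gt 0 ]; do
        case "$1" in
            --src)  src="$2";  shift 2 ;;
            --dest) dest="$2"; shift 2 ;;
            -h|--help) echo "Usage: convert_custom --src <dir> [--dest <dir>]"; return 0 ;;
            *) echo "unknown option: $1" >&2; return 1 ;;
        esac
    done
    [ -n "$src" ] || { echo "error: --src is required" >&2; return 1; }
    mkdir -p "$dest"
    echo "would convert $src -> $dest"   # placeholder for the converter call
}

convert_custom --src /tmp --dest /tmp/object_lists
```

Keeping the `--src`/`--dest` interface identical across converters lets the rest of the flow treat every input format the same way.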

88.2 1_preprocess

The preprocessing task runs the object list denoiser application to generate the ROI (Region of Interest) file. The ROI represents the area of interest within a large map where the scenario takes place. In future phases, preprocessing will include additional denoising features (indicated in the configuration files) and a more streamlined flow for processing raw logs into object lists.

The main script for this flow is generate_roi.sh:

./generate_roi.sh \
    --input=<path to object list> \
    --config=<path to config YAML file>

88.2.1 1_preprocess parameters

| Parameter | Description |
| --- | --- |
| `--input` | Path to the input object list .pb file |
| `--config` | Path to the configuration file for ROI denoising and refining |
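When many object lists need ROIs, generate_roi.sh can be wrapped in a loop. The sketch below substitutes a stub function for the real script so the loop logic is runnable on its own; the config path is a placeholder.

```shell
# Stub standing in for ./generate_roi.sh so the loop is runnable on its own;
# the real script takes the same --input/--config arguments.
generate_roi() { echo "generate_roi $*"; }

config="roi_config.yaml"   # placeholder config path
ol_dir="$(mktemp -d)"      # demo directory of object lists
touch "$ol_dir/001_ol.pb" "$ol_dir/002_ol.pb"

count=0
for ol in "$ol_dir"/*.pb; do
    generate_roi --input="$ol" --config="$config"
    count=$((count + 1))
done
echo "processed $count object lists"
```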

88.3 2_ingestion

The ingestion task processes the object list and creates a runs directory containing the ingested data. This directory will be used in the subsequent matching task.

The main script for the ingestion step is ingest.sh:

./ingest.sh \
    --object_list <path to object list> \
    --map <path to map file> \
    --roi <path to roi file>

88.3.1 2_ingestion parameters

| Parameter | Description |
| --- | --- |
| `--object_list` | Path to the input object list .pb file |
| `--map` | Path to the .xodr map file for the object list |
| `--roi` | Path to the ROI JSON file generated by the ROI denoising process |
| `--video_file` | (Optional) Path to a video file when processing a single object list |
| `--videos` | (Optional) Directory of video files when processing a directory of object lists; each video file name must match its .pb file prefix (e.g., for the object list 001_ol.pb, the video must be named 001_ol.mp4) |
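The video pairing rule for `--videos` is a simple prefix substitution, which can be expressed in one line of shell:

```shell
# A paired video shares its object list's file name prefix:
# 001_ol.pb pairs with 001_ol.mp4.
ol="001_ol.pb"
video="${ol%.pb}.mp4"
echo "$video"   # prints 001_ol.mp4
```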

88.4 3_matching

The matching task processes the ingested runs and evaluation scenarios (which will include watchers and checkers in the next release). It runs a Foretify Matcher to produce a database of matched intervals.

The main script for the matching step is match.sh:

./match.sh \
    --ingestion_runs $ESR_KIT_WORK_DIR/runs \
    --osc_file <path to evaluation scenarios top file> \
    [--match_only] \
    [--compile_only]

match.sh provides defaults for most options; at minimum, it requires the ingestion runs folder and the evaluation scenarios.

To optimize matching time, if the scenarios top file is fixed, it can be pre-compiled. This optimization happens automatically in the background without requiring user intervention. However, if debugging is needed, users can explicitly run the compilation step.

88.4.1 3_matching parameters

| Parameter | Description |
| --- | --- |
| `--ingestion_runs` | Directory containing multiple run directories to process (required) |
| `--comp_dir` | Compilation directory (default: `$ESR_KIT_WORK_DIR/matching`) |
| `--osc_file` | Path to the evaluation scenarios top file (default: `$FTX_ESR_KIT_HOME/evaluators/eval_scenarios/scenarios.osc`) |
| `--output_prefix` | Output directory prefix (default: `match_output_`) |
| `--match_only` | Run matching only (requires `--comp_dir` or `ESR_KIT_MATCH_COMP_DIR`) |
| `--compile_only` | Run compilation only (requires `--comp_dir` and `--osc_file`) |
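The `--match_only` fallback in the table (accepting either `--comp_dir` or `ESR_KIT_MATCH_COMP_DIR`) follows a common shell default-resolution pattern, sketched here. This shows only the variable handling, not the actual match.sh source.

```shell
# Resolve the compilation directory for --match_only: an explicit --comp_dir
# wins, then the ESR_KIT_MATCH_COMP_DIR environment variable, then the default.
arg_comp_dir=""   # would hold the value of --comp_dir if it was passed
comp_dir="${arg_comp_dir:-${ESR_KIT_MATCH_COMP_DIR:-${ESR_KIT_WORK_DIR:-/tmp}/matching}}"
echo "using compilation directory: $comp_dir"
```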

88.5 4_analysis

The analysis task is a key part of the workflow, where the end-user can make sense of the matches found in the drive_logs. The three different mechanisms available to analyze the large set of matches are explained below.

88.5.1 Foretify Manager Visual Analysis

This mechanism provides a visual representation of the results. When overlaid with vPlan, it can present a clear view of the given use case, including percentage coverage, gaps, failures, etc. Users can drill down into different matches to identify gaps and areas for further investigation. For more detailed help, refer to the Foretify Manager User Document.

88.5.1.1 Coverage histogram visualization

  1. Create a workspace with the test suite uploaded from the evaluation pipeline executions above.
  2. Click on the metric of interest from the VPlan view.
  3. Click on histogram in the Buckets view on the right.
  4. Analyze the coverage as a histogram.
  5. For more information on the coverage visualization, refer to the section View buckets as histogram chart.

88.5.1.2 Creating compound-intervals

  1. From the workspace, click the Intervals tab located in the left navigation bar.
  2. Click the Create New Compound Intervals icon on the right.
  3. In the Create Compound Intervals Rule tab, perform the following actions:

    • Provide the required name in the Interval name field.

    • Select the time relation between intervals, and the interval relation logic. For example, approach_junc_traversal.

    • Select the required intervals. You can also add primary and child conditions to the selected intervals.

    • Click Find Results and Save.

      Example

      • First interval(A): sut.sut_approach_junction
      • Second interval(B): sut.sut_junction_traversal
      • Time relation between Intervals (seconds): Any order
      • Interval relation logic: Union
  4. After the compound interval is created, click the Compound intervals rules icon to edit the rule.

For more information on compound intervals, refer to the section compound-intervals.

88.5.2 Jupyter Notebook-Based Analysis

This approach automates the visual analysis tasks and generates a report that can be shared with others in the team. It's ideal for users who want to script the analysis process or automate repetitive tasks.

88.5.2.1 Setup

!!! Note

  Create and switch to the venv; Python 3.11 is the tested version.
  Ensure that the Foretify Manager dependency is addressed by editing the path.

  ```
  cd $FTX_ESR_KIT_HOME/flow/4_analysis/config
  pip install -r requirements.txt
  ```

88.5.2.2 Run Example

  • Launch the provided Jupyter notebook example and set the kernel to match the virtual environment.
  • Follow the instructions within the notebook to perform the analysis.

!!! note "Tip"

  This example serves as a starting point. Users are encouraged to experiment and create more custom analytics tailored to their specific needs.

88.5.3 Advanced Analytics Using SDK App

This method is used to run regular analyses on the results and generate reports at the project level. It allows users to track different use cases or Operating Design Domains (ODDs) over time, providing valuable insights for large-scale evaluation.

The FTLX SDK app supports advanced analytics. Some of the interesting reports can be found in the flow/4_analysis/reports directory.

For detailed setup instructions and the user guide, please refer to the SDK Apps user guide. If you are an external user, please contact one of the Application Engineers (AEs) to receive the detailed reference.

88.5.3.1 Likelihood Analysis

The likelihood page allows you to select a scenario and ranges for multiple coverage items to analyze the likelihood of a certain event happening.

  1. Select the scenario of interest and optionally the metric.

  2. Click on submit and a sample output is shown below:

    Coverage analysis of sut.follower_vehicle:
    
    agent_max_speed: [0.0 - 100.0]
    
    Total distance traveled in workspace: 1.34 miles
    
    Total hours of driving: 0 hours and 4 minutes
    
    Total number of occurrences: 20
    
    Likelihood(distance): 20/1.34 = 14.93 occurrences per mile
    
    Likelihood(time): 20/0.07 = 285.71 occurrences per hour
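The likelihood figures in the sample output are simple ratios of occurrence count to distance and to time; the snippet below reproduces the arithmetic (note that 20/1.34 rounds to 14.93 occurrences per mile).

```shell
# Reproduce the likelihood arithmetic from the sample output above.
occurrences=20
miles=1.34
hours=0.07    # 4 minutes, rounded to hours as in the sample

per_mile=$(awk -v n="$occurrences" -v d="$miles" 'BEGIN { printf "%.2f", n / d }')
per_hour=$(awk -v n="$occurrences" -v t="$hours" 'BEGIN { printf "%.2f", n / t }')
echo "Likelihood(distance): $per_mile occurrences per mile"
echo "Likelihood(time): $per_hour occurrences per hour"
```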
    

88.5.3.2 TSR diff

The TSR diff page allows you to visualize coverage differences between two TSRs as histograms, and get links to the intervals contributing to each bar. To use the app:

  1. Select two TSRs that need to be compared.

  2. Select the metric for the comparison.

  3. The results can be visualized as a histogram or heat-map, and as a report that can be downloaded as a CSV.