510. Setting up Execution Manager and Apache Airflow

Controlled Availability (CA)

The Execution Manager feature is in Controlled Availability (CA) and available to a limited number of users. This section is intended for CA users only.

CA users must set up Foretify Manager to enable advanced Execution Manager flows, such as running CI/CD executions.

Note

This section explains how to configure and run Airflow using Docker and assumes familiarity with Docker. For more information, see the official Docker documentation.

510.1 Prerequisites

To use Execution Manager flows:

  • Foretify Manager and Kubernetes cluster pods must be configured for access to a common cloud storage resource. (See the following sections for details.)
  • Foretify Manager must be integrated with Dispatcher and Orchestrator. (See Dispatcher Configuration.)
  • Foretify Manager must be integrated with Apache Airflow. (See the following sections for details.)

Note

Setting up and configuring the cloud storage and service dependencies is a prerequisite for running most flows (particularly "Extract Scenario & Run" and "Run CI/CD Tests").

510.1.1 Configuring common cloud storage resources

CI/CD user flows require AWS S3 cloud storage to be accessible to both Foretify Manager and Kubernetes cluster pods for storing and reading Test Scenario source code.

510.1.1.1 Initializing an AWS S3 bucket for Tests

To initialize an S3 bucket for storing Tests' source code, follow the instructions provided in Integrate with AWS S3 Cloud Storage: AWS S3 Setup.
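The referenced instructions cover the full setup. As a rough sketch, bucket creation with the AWS CLI could look like the following (the bucket name and region below match the example URL used later on this page; substitute your own):

```shell
# Create the Tests bucket (name and region are examples).
# Regions other than us-east-1 require an explicit LocationConstraint.
aws s3api create-bucket \
  --bucket ftx-fmanager-tests \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1

# Block public access to the bucket (recommended).
aws s3api put-public-access-block \
  --bucket ftx-fmanager-tests \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```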

510.1.1.2 Granting the Foretify Manager web server and Kubernetes pods permissions on the bucket

To grant Foretify Manager and Kubernetes cluster pods read/write permissions for the Tests' bucket (created above), follow the instructions described in Integrate with AWS S3 Cloud Storage: configure the Foretify Manager server.

Note

Since multiple Foretify Manager functions may integrate with different S3 buckets, it is recommended to manage their permissions and credentials collectively; for example, permissions can be granted to a single IAM role. Refer to the AWS documentation for additional details.
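As a hedged sketch of the single-role approach, an inline policy granting read/write access to the Tests bucket can be attached with the AWS CLI (the role name ftx-fmanager-role and policy name are hypothetical; the bucket name matches the example URL used on this page):

```shell
# Attach an inline policy to a role shared by Foretify Manager and the pods.
# Role and policy names are hypothetical; restrict actions as your setup requires.
aws iam put-role-policy \
  --role-name ftx-fmanager-role \
  --policy-name ftx-tests-bucket-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::ftx-fmanager-tests",
        "arn:aws:s3:::ftx-fmanager-tests/*"
      ]
    }]
  }'
```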

510.1.1.3 Creating ftx_tools image

To enable Foretify jobs within Kubernetes cluster pods to read and write to the cloud storage, the ftx_tools image must be built, pushed to the image registry, and its name configured in Orchestrator's configuration file (toolsImage).

For more details, see Dispatcher Configuration: Tools Image.
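The build-and-push step can be sketched as follows (the registry host and tag are placeholders; use the values your cluster pulls from, and make sure the same name is set as toolsImage in the Orchestrator configuration):

```shell
# Build the ftx_tools image and push it to your registry.
# registry.example.com and the tag are placeholders.
docker build -t registry.example.com/ftx_tools:latest .
docker push registry.example.com/ftx_tools:latest
```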

510.1.1.4 Configuring the S3 bucket as the target storage for Tests

To configure the target storage, set the S3 bucket's URL in Foretify Manager's application.properties file:

application.properties: Tests cloud storage path
test.storage.base-path=https://ftx-fmanager-tests.s3.eu-central-1.amazonaws.com/ui_fun_tests

510.1.2 Configuring Airflow's connection details

By default, Airflow's start-up scripts create a built-in user named airflow and configure the web server to listen for HTTP requests on port 8082.

If the Airflow server runs on a different machine than Foretify Manager's web server, configure the connection details to Airflow in Foretify Manager's application.properties file:

application.properties: Default connection details to Airflow
airflow.username=airflow
airflow.password=airflow
airflow.url=http://localhost:8082
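Once Airflow is running (see Starting Airflow below), the configured credentials and URL can be checked directly, assuming the Airflow REST API's basic-auth backend is enabled (an assumption here, not something this setup guarantees):

```shell
# Verify the configured connection details against Airflow's REST API.
# --fail makes curl return a non-zero exit code on HTTP errors (e.g. 401).
curl --fail -u airflow:airflow http://localhost:8082/api/v1/dags
```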

510.2 Starting Airflow

Apache Airflow is a platform for programmatically authoring, scheduling, and monitoring data pipelines and workflows. The setup uses a Docker worker image pre-configured to communicate with Foretify Manager (via the Python SDK) to execute specific flow stages and report on their progress.

Airflow can be started using Docker via a docker-compose file included in the Foretify Manager release. This file is configured to use an internal worker image that bundles the Python SDK package ("wheel").

Use start.sh as described below to stop a currently running Airflow server (if one exists), rebuild the worker image in place, start Airflow, and wait for its web server to respond successfully:

Shell command: Start Airflow Server via Docker
export FMANAGER=<fmanager-install-path>
cd $FMANAGER/airflow
./start.sh --also-wait
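After start.sh returns, the deployment can be sanity-checked from the same directory, assuming the compose setup exposes Airflow's standard unauthenticated health endpoint on port 8082:

```shell
# List the Airflow containers and their state.
docker compose ps

# Confirm the web server answers; --fail returns non-zero on HTTP errors.
curl --fail http://localhost:8082/health
```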

510.3 Enabling Execution Manager

Execution Manager flows are disabled by default in the current release. To enable them, set the feature flag in Foretify Manager's application.properties file:

application.properties: Enable Execution Manager
ui-features.execution-manager.enabled=true