514. Integrate with AWS S3 Cloud Storage
Foretify Manager supports integration with Amazon Web Services (AWS) Simple Storage Service (S3) as a cloud storage back end. When uploading runs, the debug data can also be uploaded to S3, making it possible to debug runs without keeping the run files on disk.
514.1 AWS S3 Setup
To initialize an S3 storage bucket, follow the instructions in the AWS documentation.
514.2 Configuring Foretify Manager
514.2.1 Configure the Foretify Manager server
Configure the Foretify Manager server by providing AWS credentials. These can be provided to the server in multiple ways, including:
- Setting up credentials in the application.properties file:
aws.accessKeyId=<ACCESS_KEY_ID>
aws.secretKey=<SECRET_ACCESS_KEY>
- Setting environment variables (see the AWS documentation on using environment variables):
AWS_ACCESS_KEY_ID=<ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<SECRET_ACCESS_KEY>
- Setting up instance profile credentials delivered through the Amazon EC2 metadata service (see the AWS documentation on using IAM roles for applications)
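As a sketch of the environment-variable method above, the credentials can be exported in the shell that launches the Foretify Manager server (the key values below are placeholders, not real credentials):

```shell
# Placeholder credentials -- substitute your own AWS access key pair.
export AWS_ACCESS_KEY_ID="AKIA_EXAMPLE_KEY_ID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
```

The server process inherits these variables, so they must be set in the same environment from which the server is started.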
514.2.1.1 Configure AWS_S3_ENDPOINT
When integrating with S3-compatible cloud services, you may need to set a custom endpoint via AWS_S3_ENDPOINT. This can be configured in application.properties:
aws.s3endpoint=<AWS_S3_ENDPOINT>
Note
Foretify Manager ignores the AWS_S3_ENDPOINT environment variable; the endpoint must be set in application.properties.
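For example, an application.properties that targets an S3-compatible service might combine the credential and endpoint settings as follows (the endpoint URL is a hypothetical placeholder for your provider's address):

```properties
aws.accessKeyId=<ACCESS_KEY_ID>
aws.secretKey=<SECRET_ACCESS_KEY>
aws.s3endpoint=https://s3.example-provider.com
```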
514.2.2 Configure the Python Client (SDK)
When uploading runs, provide the AWS S3 connection details as command-line arguments to the upload script.
As with the Foretify Manager server, the AWS credentials can be supplied through environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) or through instance profile credentials.
For example:
$ upload_runs --user "mylogin" \
--host 169.23.41.523 --port 8080 \
--runs_top_dir myworkdir/sessions/20221123_150125_25680_WD19/foretify_runs/batch \
--s3_bucket ftx_regressions_2022 --s3_region eu-central-1 \
--project "My Team's Project"
Alternatively, the bucket's name and region can be configured by environment variables:
$ export AWS_REGION=eu-central-1
$ export AWS_S3_BUCKET_NAME=ftx_regressions_2022
$ upload_runs --user "mylogin" \
--host 169.23.41.523 --port 8080 \
--runs_top_dir myworkdir/sessions/20221123_150125_25680_WD19/foretify_runs/batch \
--project "My Team's Project"
514.2.3 Configure Dispatcher
To configure Dispatcher to work with S3 storage, add the following to dispatcher.env:
S3_AWS_ACCESS_KEY_ID=<AMAZON_S3_KEY>
S3_AWS_SECRET_ACCESS_KEY=<AMAZON_S3_SECRET>
S3_AMAZON_REGION=<AMAZON_S3_REGION>
S3_AWS_BUCKET_NAME=<AMAZON_S3_BUCKET>
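For instance, reusing the bucket and region from the upload example above, a dispatcher.env for that setup might look like this (the key values are placeholders, not real credentials):

```properties
S3_AWS_ACCESS_KEY_ID=AKIA_EXAMPLE_KEY_ID
S3_AWS_SECRET_ACCESS_KEY=example-secret-access-key
S3_AMAZON_REGION=eu-central-1
S3_AWS_BUCKET_NAME=ftx_regressions_2022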