522. Dispatcher Installation

The Dispatcher and Kubernetes Orchestrator have been successfully deployed to public clouds, private data centers, and single workstations. Installing the Dispatcher and Kubernetes Orchestrator in any of these environments entails identifying the best way to provision and configure the various system components that are required.

522.1 Components

  • Dispatcher Application

The Dispatcher is located in the dispatcher/dispatcher directory of the Foretify Manager installation. The Dispatcher is started by running the dispatcher executable and, optionally, specifying a configuration file.

See the Dispatcher Configuration documentation for details.

  • Kubernetes Orchestrator Application

The Kubernetes Orchestrator is located in the dispatcher/k8s-orchestrator directory of the Foretify Manager installation. The Kubernetes Orchestrator is started by running the k8s-orchestrator executable and, optionally, specifying a configuration file.

See the Kubernetes Orchestrator Configuration documentation for details.

  • Kubernetes Cluster

The Kubernetes Orchestrator interfaces with a Kubernetes cluster to create and monitor Kubernetes Jobs. The Kubernetes Orchestrator has been used with managed Kubernetes services on public clouds (EKS, GKE, AKS), clusters in private data centers, and single-workstation installations such as MicroK8s.

  • PostgreSQL Database

The Dispatcher requires a PostgreSQL database. The Dispatcher can either share the PostgreSQL database used by Foretify Manager (preferred) or use its own.

  • File Storage

The Kubernetes Orchestrator requires file storage to store job data and shared files. The file storage is shared by the Kubernetes Orchestrator, Kubernetes Pods created by the Kubernetes Orchestrator, and Foretify Manager. The primary types of storage that are supported are NFS, host path (local storage), and object storage, such as AWS S3, Google Cloud Storage, and Azure Blob. The storage for job data and shared files is configured in the Kubernetes Orchestrator configuration file.

  • Container Registry

Kubernetes Pods pull containers from a container registry. The container registries used by the Kubernetes Orchestrator when creating jobs are configured in the Kubernetes Orchestrator configuration file.

522.2 Workstation Installation Using MicroK8s

MicroK8s is a Kubernetes distribution maintained by Canonical that can easily be installed on a single Ubuntu machine. This makes MicroK8s a good tool for running a mini-cluster for local development with the Dispatcher and Kubernetes Orchestrator.

For more information, see the MicroK8s site.

This section will walk through the installation of the Dispatcher and Kubernetes Orchestrator using MicroK8s.

522.2.1 Install the Dispatcher and Kubernetes Orchestrator

Follow the Foretify Manager installation instructions to install Foretify Manager, which includes installation of the Dispatcher and Kubernetes Orchestrator.

522.2.2 Install MicroK8s

  1. Install MicroK8s following the instructions in the MicroK8s documentation. Kubernetes versions up to 1.31 are known to work.

  2. To run microk8s without sudo, add your user to the microk8s group.

    Shell command: add user to microk8s group
    sudo usermod -a -G microk8s $USER
    

    You have to log out and then log back in for this update to take effect (or run every microk8s command with sudo from here on).

  3. Enable the CoreDNS MicroK8s add-on.

    Shell command: invoke microk8s to enable add-ons
    microk8s enable dns
    
  4. Verify MicroK8s is up and running.

    Shell command: invoke microk8s to verify installation
    microk8s status
    
    microk8s is running
    

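If you added your user to the microk8s group in step 2, the following sketch checks whether the new group is active in your current session; if it is not, you still need to log out and back in:

```bash title="Shell command: check microk8s group membership"
# Check whether the microk8s group is active in the current session.
if id -nG | grep -qw microk8s; then
    echo "microk8s group active"
else
    echo "microk8s group not active yet (log out and back in)"
fi
```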
522.2.3 Use Local Storage

  1. Choose a location to store the Dispatcher and Kubernetes Orchestrator configuration files, job data, and shared files. Set an environment variable named MICROK8S_HOME to this location. Here $HOME/microk8s is used. You can put this in your ~/.bashrc if you want this setting to be more permanent.

    Shell setting: MICROK8S_HOME
    export MICROK8S_HOME=$HOME/microk8s
    
  2. Create the following directories:

    Shell command: make directories
    mkdir -p $MICROK8S_HOME/config
    mkdir -p $MICROK8S_HOME/log
    mkdir -p $MICROK8S_HOME/jobs
    mkdir -p $MICROK8S_HOME/shared
    
  3. Generate the kubeconfig file. This file contains all of the information needed to connect to the Kubernetes cluster.

    Shell command: invoke microk8s to generate the file
    microk8s config > $MICROK8S_HOME/config/kubeconfig
    
  4. Set KUBECONFIG to allow use of kubectl.

    kubectl is the standard Kubernetes command-line utility and gives access to most of the Kubernetes API. You can put this in your ~/.bashrc if you want this setting to be more permanent.

    Shell command: set the KUBECONFIG variable
    export KUBECONFIG=$MICROK8S_HOME/config/kubeconfig
    
  5. Verify kubectl is configured correctly.

    Shell command: invoke kubectl to verify configuration
    kubectl get pods
    
    No resources found in default namespace.
    
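Steps 2-4 above can also be condensed into a short bash sketch. The brace expansion and the idempotent ~/.bashrc append are optional conveniences, the kubeconfig grep is only a rough sanity check, and the microk8s config call itself is omitted since it needs a running MicroK8s:

```bash title="Shell commands: condensed local storage setup (bash)"
# Recap of steps 2-4 with a few bash conveniences.
: "${MICROK8S_HOME:=$HOME/microk8s}"            # default if not yet exported

# Step 2: create all four directories with one brace expansion
mkdir -p "$MICROK8S_HOME"/{config,log,jobs,shared}

# Step 3 sanity check: a generated kubeconfig contains these top-level keys
for key in clusters contexts users; do
    grep -q "^${key}:" "$MICROK8S_HOME/config/kubeconfig" 2>/dev/null \
        || echo "kubeconfig: missing ${key}"
done

# Step 4: persist KUBECONFIG in ~/.bashrc without duplicating the line
line="export KUBECONFIG=$MICROK8S_HOME/config/kubeconfig"
grep -qxF "$line" ~/.bashrc 2>/dev/null || echo "$line" >> ~/.bashrc
```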

522.2.4 Use the MicroK8s Docker Registry

Enable the MicroK8s Docker registry add-on. This creates a Docker registry that Kubernetes pulls from when starting pods. The registry runs at localhost:32000. The registry's size is set when the add-on is enabled; choose a size large enough for your images, since some Foretellix Docker images can be large (15+ GB).

```bash title="Shell command: enable the microk8s registry add-on"
microk8s enable registry:size=50Gi
```

522.2.5 Configure the Dispatcher

  1. Copy the default Dispatcher configuration file dispatcher.env from the Dispatcher installation directory to $MICROK8S_HOME/config.

  2. Update the LOG_DIRECTORY setting to $MICROK8S_HOME/log (use the actual value of $MICROK8S_HOME).

  3. No other changes are necessary, but it is good to review the default settings and the Dispatcher Configuration documentation.
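Assuming dispatcher.env uses plain KEY=value lines (typical for .env files), step 2 can be scripted; this sketch edits the copy in $MICROK8S_HOME/config and silently skips if the file has not been copied there yet:

```bash title="Shell commands: update LOG_DIRECTORY with sed"
# Rewrite the LOG_DIRECTORY line in the copied dispatcher.env.
: "${MICROK8S_HOME:=$HOME/microk8s}"
env_file="$MICROK8S_HOME/config/dispatcher.env"
if [ -f "$env_file" ]; then
    # '|' as the sed delimiter avoids escaping the slashes in the path
    sed -i "s|^LOG_DIRECTORY=.*|LOG_DIRECTORY=$MICROK8S_HOME/log|" "$env_file"
fi
```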

522.2.6 Configure the Kubernetes Orchestrator

Create a file named config.json in the $MICROK8S_HOME/config directory and copy in the contents below, updating values where needed.

Some of the important fields to note and set:

  • kubeconfig: The path to the kubeconfig file that the Kubernetes Orchestrator uses to access the Kubernetes API. Replace $MICROK8S_HOME with the actual path.
  • volumes: Defines host path volumes, which mount directories on the local file system.
    • jobs: The volume that holds job data. Replace $MICROK8S_HOME with the actual path.
    • shared: The volume that holds shared data. Replace $MICROK8S_HOME with the actual path.
  • dockerRegistries: Defines a Docker registry named foretellix that is mapped to the MicroK8s Docker registry.
  • foretify
    • licenseServer: Defines the FTX_LIC_FILE environment variable used by Foretify to access the license server. Replace license_server with the license server's IP address. See the Foretify installation instructions for more information.
  • logging
    • logDirectory: Defines the directory where the Kubernetes Orchestrator writes log files. Replace $MICROK8S_HOME with the actual path.
JSON file: sample config file
{
    "name": "microk8s",
    "id": "microk8s",
    "foretifyJobMaximumRunTime": 1800,
    "foretifyJobRunTimeout": 300,
    "pluginJobRunTimeout": 3600,
    "kubernetes": {
        "kubeconfig": "$MICROK8S_HOME/config/kubeconfig",
        "namespace": "default",
        "gpuNodeLabel": {
            "key": "nvidia.com/gpu",
            "value": "true"
        }
    },
    "volumes": [
        {
            "name": "jobs",
            "localPath": "$MICROK8S_HOME/jobs",
            "podPath": "$MICROK8S_HOME/jobs",
            "hostPath": {
                "path": "$MICROK8S_HOME/jobs"
            }
        },
        {
            "name": "shared",
            "localPath": "$MICROK8S_HOME/shared",
            "podPath": "$MICROK8S_HOME/shared",
            "hostPath": {
                "path": "$MICROK8S_HOME/shared"
            }
        }
    ],
    "results": {
        "useLocalDirectory": true,
        "compression": "none",
        "volume": {
            "name": "jobs"
        }
    },
    "environmentVariables": [
    ],
    "dockerRegistries": [
        {
            "name": "foretellix",
            "url": "localhost:32000"
        }
    ],
    "foretify": {
        "licenseServer": "5280@license_server"
    },
    "logging": {
        "logToConsole": true,
        "logDirectory": "$MICROK8S_HOME/log",
        "maximumDays": 10
    }
}
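The sample above leaves $MICROK8S_HOME placeholders for you to replace by hand. One way to automate this is to keep the sample as a template (config.template.json is a hypothetical file name, not part of the installation) and expand the placeholder with sed:

```bash title="Shell commands: expand $MICROK8S_HOME placeholders"
# Replace every literal $MICROK8S_HOME placeholder with the real path.
# config.template.json is a hypothetical template copy of the sample above.
: "${MICROK8S_HOME:=$HOME/microk8s}"
if [ -f config.template.json ]; then
    sed "s|\$MICROK8S_HOME|$MICROK8S_HOME|g" config.template.json \
        > "$MICROK8S_HOME/config/config.json"
fi
```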

522.2.7 Use the Foretify Manager PostgreSQL Database

To configure the Dispatcher to use the PostgreSQL database set up during Foretify Manager installation, make sure the DATABASE_URL setting in dispatcher.env is set to the Foretify Manager PostgreSQL connection string. For example:

DATABASE_URL=postgres://fmanager:fmanager@localhost:5432/fmanager

Note that only one DATABASE_URL line should be uncommented.
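As a quick sanity check before starting the Dispatcher, the connection string's shape can be validated from the shell. This checks only the format, not that the database is actually reachable:

```bash title="Shell commands: check the DATABASE_URL format"
# Shape check: postgres://<user>:<password>@<host>:<port>/<database>
DATABASE_URL="${DATABASE_URL:-postgres://fmanager:fmanager@localhost:5432/fmanager}"
if echo "$DATABASE_URL" | grep -Eq '^postgres://[^:]+:[^@]+@[^:/]+:[0-9]+/.+$'; then
    echo "DATABASE_URL looks well-formed"
else
    echo "DATABASE_URL does not match postgres://user:password@host:port/db"
fi
```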

522.2.8 Test

  1. Start the Dispatcher.

    Shell commands: start the Dispatcher
    cd <fmanager installation directory>
    cd dispatcher/dispatcher
    ./dispatcher -c $MICROK8S_HOME/config/dispatcher.env
    
  2. Start the Kubernetes Orchestrator.

    Shell commands: start the Orchestrator
    cd <fmanager installation directory>
    cd dispatcher/k8s-orchestrator
    ./k8s-orchestrator -c $MICROK8S_HOME/config/config.json
    
  3. Run a test job.

    Shell commands: run the test job
    cd <fmanager installation directory>
    cd dispatcher/client
    ./bin/client
    
    Dispatcher output
    API: Dispatcher
    URL: http://localhost:8081
    
    Main Menu
    0. Get system information
    1. Get Foretify job
    2. Stop Foretify job
    3. Query Foretify jobs
    4. Create Foretify job
    5. Rerun Foretify job
    11. Get Plugin job
    12. Stop Plugin job
    13. Query Plugin jobs
    14. Create Plugin job
    15. Rerun Plugin job
    100. Get system statistics
    101. Get active groups
    102. Stop active jobs
    103. Stop group
    q. Quit
    
    Select option:
    
  4. Select option 14 and type in the values shown below at the prompt.

    Dispatcher prompts
    Select option: 14
    JSON file path: jobs/plugin.json
    {'id': '54bc4ecb-96e0-47c6-8510-1d8141d196c6'}
    

You should see activity in the Dispatcher and Kubernetes Orchestrator output.

522.3 Run a Foretify job

Make sure you can run the plugin test job successfully before moving on to this section. To run a Foretify job, a Foretify Docker image must first be built and pushed to the MicroK8s Docker registry.

522.3.1 Build the Foretify Docker image

  1. Obtain the Foretify Dockerfile and supporting files from Foretellix if they are not already present in the Foretify installation. Copy these files into the directory that contains the Foretify installation directory (i.e., the parent of the ftx directory).

  2. Modify the Foretify Dockerfile to copy the Foretify installation into the Docker image by adding the following line:

    COPY ftx /ftx
    
  3. Build the image.

    Shell command: invoke docker build
    cd $FTX/..
    docker build -t localhost:32000/foretify:latest .
    
  4. Check that the image was created.

    Shell command: invoke docker images
    docker images
    

    You should see the new image with tag localhost:32000/foretify:latest.

  5. Push the image to the MicroK8s Docker registry.

    Shell command: invoke docker push
    docker push localhost:32000/foretify:latest
    

    If the image was tagged correctly, you should see upload progress bars in the output.
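To double-check that the push landed, you can query the registry's catalog over the Docker Registry HTTP API v2 (this assumes curl is installed and the registry add-on is running on localhost:32000):

```bash title="Shell command: list repositories in the MicroK8s registry"
# List repositories held by the MicroK8s registry; after the push above,
# the output should include "foretify".
curl -s http://localhost:32000/v2/_catalog || echo "registry not reachable"
```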

522.3.2 Create the Foretify job

  1. Make sure the Dispatcher and Kubernetes Orchestrator are running, and then run the Dispatcher client app.

    Shell commands: run the test job
    cd <fmanager installation directory>
    cd dispatcher/client
    ./bin/client
    
    Dispatcher output
    API: Dispatcher
    URL: http://localhost:8081
    
    Main Menu
    0. Get system information
    1. Get Foretify job
    2. Stop Foretify job
    3. Query Foretify jobs
    4. Create Foretify job
    5. Rerun Foretify job
    11. Get Plugin job
    12. Stop Plugin job
    13. Query Plugin jobs
    14. Create Plugin job
    15. Rerun Plugin job
    100. Get system statistics
    101. Get active groups
    102. Stop active jobs
    103. Stop group
    q. Quit
    
    Select option:
    
  2. Select option 4 and type in the values shown below at the prompt.

    Dispatcher prompts
    Select option: 4
    JSON file path: jobs/foretify.json
    {'id': '2c502e9d-9262-40a0-9492-6f51078ae0e3'}
    
  3. Copy the job ID from the output, then retrieve data about the job by selecting option 1 (Get Foretify job) from the Main Menu and providing the ID.

  4. When the status is STATUS_COMPLETED, you can go to the results URL and see all the relevant data for the run.

522.3.3 View the log files

Dispatcher logs: The Dispatcher's logging directory is defined by the LOG_DIRECTORY setting in the Dispatcher configuration file, which was set to $MICROK8S_HOME/log.

Orchestrator logs: The logging section of the Kubernetes Orchestrator configuration file (logDirectory) defines where the Orchestrator writes its log files, which was set to $MICROK8S_HOME/log.
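Since both components write to the same directory, a quick way to review recent activity is to print the tail of every file there (the exact log file names are up to each component):

```bash title="Shell commands: show recent log entries"
# Show the most recent entries from every file in the shared log directory.
: "${MICROK8S_HOME:=$HOME/microk8s}"
for f in "$MICROK8S_HOME"/log/*; do
    [ -f "$f" ] || continue      # skip if the glob matched nothing
    echo "== $f =="
    tail -n 20 "$f"
done
```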