Kestra needs internal storage to store the data processed by tasks, including files from flow inputs and data stored as task outputs.

The default internal storage implementation is local storage, which is not suitable for production because it stores data in a local folder on the host filesystem.

This local storage can be configured as follows:

```yaml
kestra:
  storage:
    type: local
    local:
      base-path: /tmp/kestra/storage/ # your custom path
```
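
Kestra may handle the directory itself, but you may want to create the base path up front and make sure the Kestra process can write to it. For example, for the path shown above:

```bash
# create the local storage directory used in the example configuration above
mkdir -p /tmp/kestra/storage/
```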

Other internal storage types include:

### S3

First, make sure that the S3 storage plugin is installed in your environment. You can install it with the following Kestra command: `./kestra plugins install io.kestra.storage:storage-s3:LATEST`. This command downloads the plugin's jar file into the plugins directory.
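
For example, assuming you run the Kestra binary directly, installing the plugin and optionally checking that it is available (the `plugins list` subcommand is assumed to exist in your Kestra version) might look like this:

```bash
# download the S3 storage plugin jar into the plugins directory
./kestra plugins install io.kestra.storage:storage-s3:LATEST

# optionally, list installed plugins to confirm the S3 storage plugin is available
./kestra plugins list
```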

Then, enable the storage using the following configuration:

```yaml
kestra:
  storage:
    type: s3
    s3:
      accessKey: "<your-aws-access-key-id>"
      secretKey: "<your-aws-secret-access-key>"
      region: "<your-aws-region>"
      bucket: "<your-s3-bucket-name>"
```

If you are running Kestra on an AWS EC2 instance or on EKS, you can use the default credentials provider chain. In this case, you can omit the `accessKey` and `secretKey` options:

```yaml
kestra:
  storage:
    type: s3
    s3:
      region: "<your-aws-region>"
      bucket: "<your-s3-bucket-name>"
```

### Minio

If you use Minio or a similar S3-compatible storage option, you can follow the same process as shown above to install the Minio storage plugin.
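
A sketch of that installation, assuming the Minio plugin follows the same `io.kestra.storage:storage-<name>` artifact naming convention as the S3 plugin:

```bash
# assumed artifact coordinates, following the same naming convention as the other storage plugins
./kestra plugins install io.kestra.storage:storage-minio:LATEST
```

Then, make sure to include Minio's endpoint and port in the storage configuration: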

```yaml
kestra:
  storage:
    type: minio
    minio:
      endpoint: "<your-endpoint>"
      port: "<your-port>"
      secure: "<your-secure>"
      accessKey: "<your-accessKey>"
      secretKey: "<your-secretKey>"
      region: "<your-region>"
      bucket: "<your-bucket>"
```

### Azure

First, install the Azure storage plugin using the following Kestra command: `./kestra plugins install io.kestra.storage:storage-azure:LATEST`. This command downloads the plugin's jar file into the plugins directory.

Adjust the storage configuration shown below depending on your chosen authentication method:

```yaml
kestra:
  storage:
    type: azure
    azure:
      endpoint: "https://<your-storage-account>.blob.core.windows.net"
      container: storage
      connection-string: "<connection-string>"
      shared-key-account-name: "<shared-key-account-name>"
      shared-key-account-access-key: "<shared-key-account-access-key>"
      sas-token: "<sas-token>"
```
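
For example, if you authenticate with a connection string, a minimal configuration might keep only that option alongside the endpoint and container (a sketch, not an exhaustive reference):

```yaml
kestra:
  storage:
    type: azure
    azure:
      endpoint: "https://<your-storage-account>.blob.core.windows.net"
      container: storage
      connection-string: "<connection-string>"
```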

### GCS

You can install the GCS storage plugin using the following Kestra command: `./kestra plugins install io.kestra.storage:storage-gcs:LATEST`. This command downloads the plugin's jar file into the plugins directory.

Then, you can enable the storage using the following configuration:

```yaml
kestra:
  storage:
    type: gcs
    gcs:
      bucket: "<your-bucket-name>"
      service-account: "<service-account key as JSON or use default credentials>"
      project-id: "<project-id or use default projectId>"
```

If you haven't configured the `kestra.storage.gcs.service-account` option, Kestra will use the default service account, which is:

- the service account defined on the cluster (for GKE deployments)
- the service account defined on the compute instance (for GCE deployments).
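
In that case, a minimal configuration might only need the bucket, for example:

```yaml
kestra:
  storage:
    type: gcs
    gcs:
      bucket: "<your-bucket-name>"
```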

You can also set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a JSON file containing the GCP service account key.
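
For example, before starting Kestra (the path below is only a placeholder):

```bash
# hypothetical path to your GCP service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
```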

You can find more details in the GCP documentation.