Deploy Kestra on Kubernetes with Helm
Install Kestra in a Kubernetes cluster using a Helm chart.
Kestra provides an official Helm chart to simplify deployment on Kubernetes. This guide walks you through adding the chart repository, installing Kestra, accessing the UI, and scaling services for production-grade deployments.
Before you begin, ensure you have the following tools installed:
- kubectl — to interact with your cluster
- Helm — to install and manage charts
Refer to their documentation if installation is required.
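You can confirm that both tools are installed and that kubectl can reach your target cluster with standard version and cluster checks:

```bash
# Verify the client tools are installed
kubectl version --client
helm version

# Confirm kubectl is pointed at the intended cluster
kubectl cluster-info
```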
Helm chart repository
Kestra maintains three Helm charts:
- kestra — production-ready chart. No dependencies included. Best suited for production deployments with customizable database and storage.
- kestra-starter — includes PostgreSQL and MinIO for evaluation only. Great for getting started quickly and experimenting with Kestra.
- kestra-operator — installs the Enterprise Edition Kubernetes Operator.
Chart sources:
- Repository: helm.kestra.io
- Source code: kestra-io/helm-charts
All default image tags are listed in the Docker installation guide.
Chart configuration resources
To understand available configuration options and compare versions:
- Compare versions: See differences between two Helm chart versions on ArtifactHub using the values comparison modal.
- Full values reference: Review all available configuration options in the values.yaml file on GitHub, or dump the defaults locally as shown below.
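Once the chart repository is added (see Install Kestra below), you can also pull the full default values into a local file with Helm, which is a convenient starting point for your own overrides:

```bash
# Dump the chart's default values into a local file to edit and pass back with -f
helm show values kestra/kestra > values.yaml
```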
Starter chart dependencies
The kestra-starter chart installs:
- MinIO (object storage)
- PostgreSQL (database)
These are not suitable for production.
Enterprise Edition
To deploy the Enterprise Edition, authenticate before pulling images:
```bash
docker login registry.kestra.io --username $LICENSEID --password $FINGERPRINT
```
Use one of the following images:
- registry.kestra.io/docker/kestra-ee:latest
- or a pinned version such as registry.kestra.io/docker/kestra-ee:v1.0
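If your cluster needs to pull these images itself, the same credentials can be stored as a Kubernetes image pull secret. This is only a sketch: the secret name is arbitrary, and how you reference it depends on your Helm values (for example, an imagePullSecrets setting if the chart exposes one).

```bash
# Create a registry credential secret for the Enterprise Edition registry
kubectl create secret docker-registry kestra-registry \
  --docker-server=registry.kestra.io \
  --docker-username=$LICENSEID \
  --docker-password=$FINGERPRINT
```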
Review Enterprise requirements before deploying. Compare editions in Open Source vs Enterprise if you are deciding between versions.
To manage flows declaratively using CRDs, install the Kestra Kubernetes Operator (Enterprise Edition).
Install Kestra
Add the chart repository:
```bash
helm repo add kestra https://helm.kestra.io/
helm repo update
```
Install the kestra-starter chart:
```bash
helm install my-kestra kestra/kestra-starter
```
This deploys pods for Kestra, PostgreSQL (database), and MinIO (storage).
Alternatively, install the kestra production chart:
```bash
helm install my-kestra kestra/kestra
```
This deploys Kestra in standalone mode, where all core components run in a single pod.
The kestra chart does not include PostgreSQL or object storage. Configure these before production deployment.
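In practice, you will typically install into a dedicated namespace and pass your own values file containing the database and storage configuration. The namespace and file name below are placeholders:

```bash
# Install into a dedicated namespace with custom configuration
helm install my-kestra kestra/kestra \
  --namespace kestra \
  --create-namespace \
  -f values.yaml
```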
Access the Kestra UI
To list all pods, run:
```bash
kubectl get pods -n default -l app.kubernetes.io/name=kestra
```
If you installed the kestra-starter chart, you will likely see something like:
```
my-kestra-kestra-starter-xxxxxx-xxxxx   Running
my-kestra-postgresql-0                  Running
my-kestra-minio-0                       Running
```
The pod you want to port-forward is the Kestra standalone pod, usually named my-kestra-kestra-starter-xxxxx. If your release is my-kestra, the label selector below will reliably find it.
Export the pod name:
```bash
export POD_NAME=$(kubectl get pods \
  -l "app.kubernetes.io/name=kestra,app.kubernetes.io/instance=my-kestra,app.kubernetes.io/component=standalone" \
  -o jsonpath="{.items[0].metadata.name}")
```
Check it with:
```bash
echo $POD_NAME
```
Port-forward the UI:
```bash
kubectl port-forward $POD_NAME 8080:8080
```
Open http://localhost:8080 in your browser and create your user.
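Alternatively, you can port-forward the Kestra Service instead of an individual pod, which keeps working even if the pod is replaced. List the services first, since the exact name depends on your release and chart:

```bash
# Find the Kestra service created by the release
kubectl get svc -l app.kubernetes.io/name=kestra

# Forward local port 8080 to the service (replace <service-name> with the name from above)
kubectl port-forward svc/<service-name> 8080:8080
```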
Scaling Kestra on Kubernetes
For production deployments, run each Kestra component in its own pod for improved scalability and resource isolation.
Example values.yaml:
```yaml
deployments:
  webserver:
    enabled: true
  executor:
    enabled: true
  indexer:
    enabled: true
  scheduler:
    enabled: true
  worker:
    enabled: true
  standalone:
    enabled: false
```
Apply changes:
```bash
helm upgrade my-kestra kestra/kestra -f values.yaml
```
Validate pod layout:
```bash
kubectl get pods -l app.kubernetes.io/name=kestra
```
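With the values above, each enabled component typically becomes its own Deployment, so you can scale them independently once you know the generated names; the worker is a common candidate. The Deployment name below is a placeholder, and note that a manual scale can be reverted by the next helm upgrade, so prefer setting replicas in your Helm values if the chart exposes them.

```bash
# List the Deployments created by the release
kubectl get deployments -l app.kubernetes.io/name=kestra

# Scale the worker Deployment (replace <worker-deployment> with the name from above)
kubectl scale deployment <worker-deployment> --replicas=3
```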
Configuration
Kestra configuration is provided through Helm values and rendered into ConfigMaps and Secrets.
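To check which values a running release is actually configured with, standard Helm commands work:

```bash
# Values you supplied at install/upgrade time
helm get values my-kestra

# All values, including chart defaults
helm get values my-kestra --all
```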
Minimal example (H2 database for testing only)
```yaml
configurations:
  application:
    kestra:
      queue:
        type: h2
      repository:
        type: h2
      storage:
        type: local
        local:
          basePath: "/app/storage"

    datasources:
      h2:
        url: jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
        username: kestra
        password: kestra
        driverClassName: org.h2.Driver
```
Using secrets
Secrets can be mounted into Kestra through the secrets section and referenced via manifests. There are multiple ways to configure and access secrets in a Kubernetes installation; use whichever method fits your environment.
Example: enabling Kafka using a Secret
```yaml
configurations:
  application:
    kestra:
      queue:
        type: kafka

secrets:
  - name: kafka-server
    key: kafka.yml
```
Secret manifest:
```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-server
    stringData:
      kafka.yml: |
        kestra:
          kafka:
            client:
              properties:
                bootstrap.servers: "localhost:9092"
```
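If you prefer not to manage the Secret through extraManifests, you can create an equivalent Secret out of band and keep only the secrets reference in your Helm values. The local file path below is a placeholder:

```bash
# Create the same Secret from a local kafka.yml file
kubectl create secret generic kafka-server --from-file=kafka.yml=./kafka.yml
```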
Environment variables
Use extraEnv or extraEnvFrom to load values from existing Secrets or ConfigMaps.
Example:
```yaml
common:
  extraEnvFrom:
    - secretRef:
        name: basic-auth-secret
```
Secret manifest:
```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: basic-auth-secret
    stringData:
      basic-auth.yml: |
        kestra:
          server:
            basic-auth:
              enabled: true
              username: admin@localhost.com
              password: ChangeMe1234!
```
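For individual variables rather than whole files, extraEnv accepts standard Kubernetes env entries. The variable name, Secret name, and key below are placeholders; this is a sketch assuming the chart passes these entries straight through to the container spec:

```yaml
common:
  extraEnv:
    - name: MY_APP_TOKEN
      valueFrom:
        secretKeyRef:
          name: my-existing-secret
          key: token
```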
Docker in Docker (DinD) worker sidecar
By default, Docker in Docker (DinD) is installed on the worker as a rootless sidecar. This can be restricted in some environments due to security limitations.
Some solutions you may try:
- On Google Kubernetes Engine (GKE), use a node pool based on UBUNTU_CONTAINERD, which works well with Docker DinD, even rootless.
- Some Kubernetes clusters only support a root version of DinD; to make your Kestra deployment work, disable the rootless version using the following Helm chart values:

```yaml
dind:
  image:
    repository: docker
    pullPolicy: IfNotPresent
    tag: dind
  args:
    - --log-level=fatal
```
Troubleshooting DinD
If you encounter issues using Docker in Docker (for example, with script tasks using the io.kestra.plugin.scripts.runner.docker.Docker task runner), start troubleshooting by attaching a terminal: `docker run -it --privileged docker:dind sh`. Next, use `docker logs <container-id>` to get the container logs, and try `docker inspect <container-id>` for more details about the container, its environment, network settings, and so on. This information can help you identify what might be wrong.
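Because DinD runs as a sidecar of the worker (or standalone) pod, you can also inspect it with kubectl. The sidecar container name below is an assumption; list the pod's containers first to find the actual name:

```bash
# List the containers in the worker pod to locate the DinD sidecar
kubectl get pod <worker-pod> -o jsonpath='{.spec.containers[*].name}'

# Stream the sidecar's logs (replace "dind" with the container name found above)
kubectl logs <worker-pod> -c dind --tail=100
```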
Docker in Docker using Helm charts
On some Kubernetes deployments, using DinD with our default Helm charts can lead to errors like the ones below:

```
Device "ip_tables" does not exist.
ip_tables              24576  4 iptable_raw,iptable_mangle,iptable_nat,iptable_filter
modprobe: can't change directory to '/lib/modules': No such file or directory
error: attempting to run rootless dockerd but need 'kernel.unprivileged_userns_clone' (/proc/sys/kernel/unprivileged_userns_clone) set to 1
```
The example below shows the dind configuration properties and how to use the insecure mode for DinD:
```yaml
dind:
  # -- Enable Docker-in-Docker (dind) sidecar.
  # @section -- kestra dind
  enabled: true
  # -- Dind mode (rootless or insecure).
  # @section -- kestra dind
  mode: 'rootless'
  base:
    # -- Rootless dind configuration.
    # @section -- kestra dind rootless
    rootless:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 1000
        runAsGroup: 1000
      args:
        - --log-level=fatal
        - --group=1000
    # -- Insecure dind configuration (privileged).
    # @section -- kestra dind insecure
    insecure:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: true
        capabilities:
          add:
            - SYS_ADMIN
            - NET_ADMIN
            - DAC_OVERRIDE
            - SETUID
            - SETGID
      args:
        - '--log-level=fatal'
```
Disable Docker in Docker and use Kubernetes task runner
To avoid using root to spin up containers via DinD, disable DinD by setting the following Helm chart values:
```yaml
dind:
  enabled: false
```
Use the Kubernetes task runner as the default method for running script tasks:
```yaml
pluginDefaults:
  - type: io.kestra.plugin.scripts
    forced: true
    values:
      taskRunner:
        type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
        # ... your Kubernetes runner configuration
```
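With this default in place, script tasks run through the Kubernetes task runner without any per-flow configuration. A minimal flow to verify the setup, as a sketch: the flow ID, namespace, and container image are illustrative, and the task type comes from the standard script plugin.

```yaml
id: k8s_runner_check
namespace: company.team

tasks:
  # A simple script task; with the pluginDefaults above it is executed in a Kubernetes pod
  - id: hello
    type: io.kestra.plugin.scripts.python.Commands
    containerImage: python:3.12-slim
    commands:
      - python -c "print('Hello from a Kubernetes task runner pod')"
```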