Deploy Kestra on Kubernetes with Helm

Install Kestra in a Kubernetes cluster using a Helm chart.

Deploy Kestra on Kubernetes with Helm Charts

Kestra provides an official Helm chart to simplify deployment on Kubernetes. This guide walks you through adding the chart repository, installing Kestra, accessing the UI, and scaling services for production-grade deployments.

Before you begin, ensure you have the following tools installed:

  • kubectl — to interact with your cluster
  • Helm — to install and manage charts

Refer to their documentation if installation is required.
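
You can quickly confirm both tools are available from your terminal:

kubectl version --client
helm version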


Helm chart repository

Kestra maintains three Helm charts:

  1. kestra — production-ready chart with no bundled dependencies; you supply and configure your own database and storage.
  2. kestra-starter — includes PostgreSQL and MinIO for evaluation only. Great for getting started quickly and experimenting with Kestra.
  3. kestra-operator — installs the Enterprise Edition Kubernetes Operator.


Chart configuration resources

To understand available configuration options and compare versions:

  • Compare versions: See differences between two Helm chart versions on ArtifactHub using the values comparison modal.
  • Full values reference: Review all available configuration options in the values.yaml file on GitHub.
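
For a local copy of the defaults, once the chart repository has been added (see Install Kestra below), you can also dump all values for review; the output file name is just an example:

helm show values kestra/kestra > kestra-default-values.yaml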

Starter chart dependencies

The kestra-starter chart installs:

  • MinIO (object storage)
  • PostgreSQL (database)

These are not suitable for production.


Enterprise Edition

To deploy the Enterprise Edition, authenticate before pulling images:

docker login registry.kestra.io --username $LICENSEID --password $FINGERPRINT

Use:

  • registry.kestra.io/docker/kestra-ee:latest
  • or a pinned version such as registry.kestra.io/docker/kestra-ee:v1.0

Review Enterprise requirements before deploying. Compare editions in Open Source vs Enterprise if you are deciding between versions.
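
When deploying to a cluster, the nodes also need credentials to pull the Enterprise image. A minimal sketch using a standard Kubernetes image pull secret; the secret name kestra-registry is an example:

kubectl create secret docker-registry kestra-registry \
  --docker-server=registry.kestra.io \
  --docker-username=$LICENSEID \
  --docker-password=$FINGERPRINT

Reference this secret from your pod specification or chart values (for example via an imagePullSecrets setting) so the Kestra pods can pull the image.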


Install Kestra

Add the chart repository:

helm repo add kestra https://helm.kestra.io/
helm repo update
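
To confirm the repository was added correctly, list the available charts:

helm search repo kestra

This should include the kestra, kestra-starter, and kestra-operator charts described above.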

Install the kestra-starter chart:

helm install my-kestra kestra/kestra-starter

This deploys pods for Kestra, PostgreSQL (database), and MinIO (storage).

Alternatively, install the kestra production chart:

helm install my-kestra kestra/kestra

This deploys Kestra in standalone mode—all core components run in a single pod.
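
If you prefer to isolate the release in its own namespace, the standard Helm flags apply; the namespace name below is only an example:

helm install my-kestra kestra/kestra --namespace kestra --create-namespace

Remember to pass the same namespace (-n kestra) to the kubectl commands in the next section.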


Access the Kestra UI

To list the pods created by the release, run:

kubectl get pods -n default

If you installed the kestra-starter chart, you will likely see something like:

my-kestra-kestra-starter-xxxxxx-xxxxx Running
my-kestra-postgresql-0 Running
my-kestra-minio-0 Running

The pod you want to port-forward is the Kestra standalone pod, usually named:

my-kestra-kestra-starter-xxxxx

If your release is named my-kestra, the label selector used in the next command will find it reliably.

Export the pod name:

export POD_NAME=$(kubectl get pods \
-l "app.kubernetes.io/name=kestra,app.kubernetes.io/instance=my-kestra,app.kubernetes.io/component=standalone" \
-o jsonpath="{.items[0].metadata.name}")

Check it with:

echo $POD_NAME
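
If the pod is still starting, you can optionally wait for it to become ready before port-forwarding:

kubectl wait --for=condition=Ready pod/$POD_NAME --timeout=300s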

Port-forward the UI:

kubectl port-forward $POD_NAME 8080:8080

Open http://localhost:8080 in your browser and create your user.


Scaling Kestra on Kubernetes

For production deployments, run each Kestra component in its own pod for improved scalability and resource isolation.

Example values.yaml:

deployments:
  webserver:
    enabled: true
  executor:
    enabled: true
  indexer:
    enabled: true
  scheduler:
    enabled: true
  worker:
    enabled: true
  standalone:
    enabled: false
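
Before applying the change, you can optionally render the chart locally with these values to preview the generated manifests:

helm template my-kestra kestra/kestra -f values.yaml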

Apply changes:

helm upgrade my-kestra kestra/kestra -f values.yaml

Validate pod layout:

kubectl get pods -l app.kubernetes.io/name=kestra

Configuration

Kestra configuration is provided through Helm values and rendered into ConfigMaps and Secrets.

Minimal example (H2 database for testing only)

configurations:
  application:
    kestra:
      queue:
        type: h2
      repository:
        type: h2
      storage:
        type: local
        local:
          basePath: "/app/storage"
    datasources:
      h2:
        url: jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
        username: kestra
        password: kestra
        driverClassName: org.h2.Driver
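
For anything beyond testing, point Kestra at a real database instead. Below is a minimal sketch of a PostgreSQL-backed configuration following the same layout; the host, database, and credentials are placeholders:

configurations:
  application:
    kestra:
      queue:
        type: postgres
      repository:
        type: postgres
      storage:
        type: local
        local:
          basePath: "/app/storage"
    datasources:
      postgres:
        url: jdbc:postgresql://my-postgres:5432/kestra
        username: kestra
        password: kestra
        driverClassName: org.postgresql.Driver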

Using secrets

Secrets can be mounted into Kestra through the secrets section and referenced via manifests.

Example: enabling Kafka using a Secret

configurations:
  application:
    kestra:
      queue:
        type: kafka
  secrets:
    - name: kafka-server
      key: kafka.yml

The secrets entry above references a Kubernetes Secret, defined here via extraManifests:

extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-server
    stringData:
      kafka.yml: |
        kestra:
          kafka:
            client:
              properties:
                bootstrap.servers: "localhost:9092"
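
After upgrading the release, you can verify that the Secret exists and holds the expected configuration:

kubectl get secret kafka-server -o jsonpath='{.data.kafka\.yml}' | base64 -d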

Environment variables

Use extraEnv or extraEnvFrom to load values from existing Secrets or ConfigMaps.

Example:

common:
  extraEnvFrom:
    - secretRef:
        name: basic-auth-secret

Secret manifest:

extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: basic-auth-secret
    stringData:
      basic-auth.yml: |
        kestra:
          server:
            basic-auth:
              enabled: true
              username: admin@localhost.com
              password: ChangeMe1234!
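
If you would rather not manage this Secret through extraManifests, an equivalent Secret can be created directly with kubectl from a local file (the file name here is an example):

kubectl create secret generic basic-auth-secret --from-file=basic-auth.yml=./basic-auth.yml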

There are multiple ways to configure and access secrets in a Kubernetes installation. Use whichever method fits your environment.

Docker in Docker (DinD) worker sidecar

By default, Docker in Docker (DinD) is installed on the worker in its rootless version. This may be restricted in some environments due to security limitations.

Some solutions you may try:

  • On Google Kubernetes Engine (GKE), use a node pool based on UBUNTU_CONTAINERD that works well with Docker DinD, even rootless.
  • Some Kubernetes clusters only support a root version of DinD; to make your Kestra deployment work, disable the rootless version using the following Helm chart values:
dind:
  image:
    repository: docker
    pullPolicy: IfNotPresent
    tag: dind
  args:
    - --log-level=fatal

Troubleshooting Docker in Docker (DinD)

If you encounter issues using Docker in Docker (e.g., with Script tasks using the io.kestra.plugin.scripts.runner.docker.Docker task runner), start troubleshooting by running the DinD image interactively: docker run -it --privileged docker:dind sh. Next, use docker logs <container-id> to get the container logs. Also, try docker inspect <container-id> for more information about the container; the output includes details about its environment, network settings, and more, which can help you identify what might be wrong.
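
Because DinD runs as a sidecar container in the Kestra pod on Kubernetes, you can run the same checks through kubectl. Replace <worker-pod> with the name of the pod that runs the worker (or the standalone pod); the container name dind is an assumption, so list the pod's containers first to confirm it:

kubectl get pod <worker-pod> -o jsonpath='{.spec.containers[*].name}'
kubectl logs <worker-pod> -c dind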

Docker in Docker using Helm charts

On some Kubernetes deployments, using DinD with our default Helm charts can lead to errors like the following:

Device "ip_tables" does not exist.
ip_tables 24576 4 iptable_raw,iptable_mangle,iptable_nat,iptable_filter
modprobe: can't change directory to '/lib/modules': No such file or directory
error: attempting to run rootless dockerd but need 'kernel.unprivileged_userns_clone' (/proc/sys/kernel/unprivileged_userns_clone) set to 1

The example below shows dind configuration properties and how to use the insecure mode for DinD:

dind:
  # -- Enable Docker-in-Docker (dind) sidecar.
  # @section -- kestra dind
  enabled: true
  # -- Dind mode (rootless or insecure).
  # @section -- kestra dind
  mode: 'rootless'
  base:
    # -- Rootless dind configuration.
    # @section -- kestra dind rootless
    rootless:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 1000
        runAsGroup: 1000
      args:
        - --log-level=fatal
        - --group=1000
    # -- Insecure dind configuration (privileged).
    # @section -- kestra dind insecure
    insecure:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: true
        capabilities:
          add:
            - SYS_ADMIN
            - NET_ADMIN
            - DAC_OVERRIDE
            - SETUID
            - SETGID
      args:
        - '--log-level=fatal'
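
Because mode accepts either rootless or insecure, you can switch an existing release without editing the full values file, for example:

helm upgrade my-kestra kestra/kestra --reuse-values --set dind.mode=insecure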

Disable Docker in Docker and use Kubernetes task runner

To avoid using root to spin up containers via DinD, disable DinD by setting the following Helm chart values:

dind:
  enabled: false

Use the Kubernetes task runner as the default method for running script tasks:

pluginDefaults:
  - type: io.kestra.plugin.scripts
    forced: true
    values:
      taskRunner:
        type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
        # ... your Kubernetes runner configuration
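
With that default in place, script tasks need no per-task runner configuration. Below is a minimal sketch of a flow that would then run its commands through the Kubernetes task runner; the flow id and namespace are arbitrary examples:

id: hello_kubernetes_runner
namespace: company.team

tasks:
  - id: hello
    type: io.kestra.plugin.scripts.python.Commands
    commands:
      - python -c "print('running via the Kubernetes task runner')"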
