# Kubernetes Deployment with Helm Charts
Install Kestra in a Kubernetes cluster using a Helm chart.
## Prerequisites
- kubectl — to interact with your cluster
- Helm — to install and manage charts
Refer to the respective documentation if these tools are not yet installed.
## Helm chart repository
Kestra maintains three Helm charts:
- kestra: production-ready chart with no bundled dependencies. Best suited for production deployments with a customizable database and storage.
- kestra-starter: includes PostgreSQL and Versity (S3-like storage) for evaluation only. Great for getting started quickly and experimenting with Kestra.
- kestra-operator: installs the Enterprise Edition Kubernetes Operator.
Chart sources:
- Repository: helm.kestra.io
- Source code: Kestra Helm chart repository on GitHub
- ArtifactHub: kestra · kestra-starter
All default image tags are listed in the Docker installation guide.
## Chart configuration resources
To understand available configuration options and compare versions:
- Compare versions: See differences between two Helm chart versions on ArtifactHub using the values comparison modal.
- Full values reference: Review all available configuration options in the values.yaml file on GitHub.
## Starter chart dependencies
The kestra-starter chart installs:
- Versity (object storage)
- PostgreSQL (database)
These are not suitable for production.
## Enterprise Edition
To deploy the Enterprise Edition, authenticate before pulling images:
```shell
docker login registry.kestra.io --username $LICENSEID --password $FINGERPRINT
```

Use:

- registry.kestra.io/docker/kestra-ee:latest, or
- a pinned version such as registry.kestra.io/docker/kestra-ee:v1.0
Review Enterprise requirements before deploying. Compare editions in Open Source vs Enterprise if you are deciding between versions.
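Once authenticated, the chart can be pointed at the Enterprise image through Helm values. A minimal sketch, assuming the chart follows the common image.repository/image.tag convention and that you have created a docker-registry pull Secret named regcred from your license credentials (both key names and the Secret name are illustrative; verify them against the chart's values.yaml):

```yaml
image:
  repository: registry.kestra.io/docker/kestra-ee
  tag: v1.0          # pin a version rather than relying on latest
imagePullSecrets:
  - name: regcred    # assumed docker-registry Secret holding the registry.kestra.io login
```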
To manage flows declaratively using CRDs, install the Kestra Kubernetes Operator (Enterprise Edition).
## Install Kestra
Add the chart repository:
```shell
helm repo add kestra https://helm.kestra.io/
helm repo update
```

Install the kestra-starter chart:
```shell
helm install my-kestra kestra/kestra-starter
```

This deploys pods for Kestra, PostgreSQL (database), and Versity (storage).
Alternatively, install the kestra production chart:
```shell
helm install my-kestra kestra/kestra
```

This deploys Kestra in standalone mode: all core components run in a single pod.
The kestra chart does not include PostgreSQL or object storage. Configure these before production deployment.
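For example, an external PostgreSQL database and S3 object storage could be wired in through the chart's configurations block. This is a sketch only: the hostname, bucket, region, and credentials below are placeholders, and the exact keys should be checked against the Kestra configuration reference and the chart's values.yaml:

```yaml
configurations:
  application:
    kestra:
      queue:
        type: postgres
      repository:
        type: postgres
      storage:
        type: s3
        s3:
          bucket: "my-kestra-bucket"   # placeholder bucket name
          region: "us-east-1"          # placeholder region
    datasources:
      postgres:
        url: jdbc:postgresql://postgres.example.com:5432/kestra   # placeholder host
        username: kestra
        password: ChangeMe1234!        # replace with a real secret
        driverClassName: org.postgresql.Driver
```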
## Access the Kestra UI
To list all pods, run:

```shell
kubectl get pods -n default -l app.kubernetes.io/name=kestra
```

If you installed the kestra-starter chart, you will likely see something like:
```
my-kestra-kestra-starter-xxxxxx-xxxxx   Running
my-kestra-postgresql-0                  Running
my-kestra-versity-0                     Running
```

The pod to port-forward is the Kestra standalone pod, usually named my-kestra-kestra-starter-xxxxx. If your release is my-kestra, the label selector will reliably find it.
Export the pod name:
```shell
export POD_NAME=$(kubectl get pods \
  -l "app.kubernetes.io/name=kestra,app.kubernetes.io/instance=my-kestra,app.kubernetes.io/component=standalone" \
  -o jsonpath="{.items[0].metadata.name}")
```

Check it with:
```shell
echo $POD_NAME
```

Port-forward the UI:
```shell
kubectl port-forward $POD_NAME 8080:8080
```

Open http://localhost:8080 in your browser and create your user.
## Scaling Kestra on Kubernetes
For production deployments, run each Kestra component in its own pod for improved scalability and resource isolation.
Example values.yaml:
```yaml
deployments:
  webserver:
    enabled: true
  executor:
    enabled: true
  indexer:
    enabled: true
  scheduler:
    enabled: true
  worker:
    enabled: true
  standalone:
    enabled: false
```

Apply the changes:
```shell
helm upgrade my-kestra kestra/kestra -f values.yaml
```

Validate the pod layout:
```shell
kubectl get pods -l app.kubernetes.io/name=kestra
```

## Configuration
Kestra configuration is provided through Helm values and rendered into ConfigMaps and Secrets.
### Minimal example (H2 database for testing only)
```yaml
configurations:
  application:
    kestra:
      queue:
        type: h2
      repository:
        type: h2
      storage:
        type: local
        local:
          basePath: "/app/storage"
    datasources:
      h2:
        url: jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
        username: kestra
        password: kestra
        driverClassName: org.h2.Driver
```

### Using secrets
Secrets can be mounted into Kestra through the secrets section and referenced via manifests.
Example: enabling Kafka using a Secret
```yaml
configurations:
  application:
    kestra:
      queue:
        type: kafka

secrets:
  - name: kafka-server
    key: kafka.yml
```

Secret manifest:
```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-server
    stringData:
      kafka.yml: |
        kestra:
          kafka:
            client:
              properties:
                bootstrap.servers: "localhost:9092"
```

### Environment variables
Use extraEnv or extraEnvFrom to load values from existing Secrets or ConfigMaps.
Example:
```yaml
common:
  extraEnvFrom:
    - secretRef:
        name: basic-auth-secret
```

Secret manifest:
```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: basic-auth-secret
    stringData:
      basic-auth.yml: |
        kestra:
          server:
            basic-auth:
              enabled: true
              username: admin@localhost.com
              password: ChangeMe1234!
```

## Docker-in-Docker (DinD)
Kestra workers support rootless Docker-in-Docker by default. Some clusters restrict this.
On Google Kubernetes Engine (GKE), using a node pool based on UBUNTU_CONTAINERD works well with rootless Docker DinD.
### Disable rootless mode
Some clusters only support a root (privileged) version of DinD. To switch from rootless to insecure mode, adjust the dind Helm values, shown here with their defaults:
```yaml
dind:
  # -- Enable Docker-in-Docker (dind) sidecar.
  # @section -- kestra dind
  enabled: true
  # -- Dind mode (rootless or insecure).
  # @section -- kestra dind
  mode: 'rootless'
  base:
    # -- Rootless dind configuration.
    # @section -- kestra dind rootless
    rootless:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 1000
        runAsGroup: 1000
      args:
        - --log-level=fatal
        - --group=1000
    # -- Insecure dind configuration (privileged).
    # @section -- kestra dind insecure
    insecure:
      image:
        repository: docker
        pullPolicy: IfNotPresent
        tag: dind-rootless
      securityContext:
        privileged: true
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: true
        capabilities:
          add:
            - SYS_ADMIN
            - NET_ADMIN
            - DAC_OVERRIDE
            - SETUID
            - SETGID
      args:
        - '--log-level=fatal'
```

### Troubleshooting DinD
If you encounter errors like the following on some Kubernetes deployments:
```
Device "ip_tables" does not exist.
ip_tables              24576  4 iptable_raw,iptable_mangle,iptable_nat,iptable_filter
modprobe: can't change directory to '/lib/modules': No such file or directory
error: attempting to run rootless dockerd but need 'kernel.unprivileged_userns_clone' (/proc/sys/kernel/unprivileged_userns_clone) set to 1
```

Attach to the DinD container to inspect logs:
```shell
docker run -it --privileged docker:dind sh
docker logs <container-id>
docker inspect <container-id>
```

### Disable DinD and use the Kubernetes task runner
To avoid using root to spin up containers via DinD, disable DinD by setting the following Helm chart values:
```yaml
dind:
  enabled: false
```

Use the Kubernetes task runner as the default method for running script tasks:
```yaml
pluginDefaults:
  - type: io.kestra.plugin.scripts
    forced: true
    values:
      taskRunner:
        type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
        # ... your Kubernetes runner configuration
```