Kubernetes
Install Kestra in a Kubernetes cluster using a Helm chart.
Helm Chart repository
For production workloads, we recommend using Kubernetes for deployment, as it enables scaling of specific Kestra services as needed. Before getting started, make sure that you have Helm and kubectl installed. Refer to their documentation if needed.
We provide an official Helm Chart to make the deployment easier.
- The chart repository is available under helm.kestra.io.
- The source code of the charts can be found in the kestra-io/helm-charts repository on GitHub. There are three charts:
- kestra: the production chart, with no dependencies installed.
- kestra-starter: this chart comes with dependencies such as PostgreSQL and MinIO that are maintained by the user -- not recommended for production installations.
- kestra-operator: a separate chart for installing the operator.
All image tags provided by default can be found in the Docker installation guide.
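If you want to pin a specific tag instead of the chart default, you can override the chart's image values. The keys below follow the common Helm image convention and the tag is a placeholder; verify the exact keys against the chart's values.yaml:
image:
  repository: kestra/kestra
  tag: latest   # placeholder; pick a tag from the Docker installation guide
  pullPolicy: IfNotPresent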
Install the chart
helm repo add kestra https://helm.kestra.io/
helm install my-kestra kestra/kestra
You'll now see a pod has been started:
- Standalone: All components of Kestra deployed together in one pod
If using kestra-starter, there will be additional pods for its dependencies:
- PostgreSQL: Database service
- MinIO: Internal storage backend
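You can verify this with a standard kubectl check (pod names depend on your release name and namespace):
kubectl get pods --namespace default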
To access, export the pod name as an environment variable with the following command:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=kestra,app.kubernetes.io/instance=kestra,app.kubernetes.io/component=standalone" -o jsonpath="{.items[0].metadata.name}")
To then access Kestra from localhost, run the following port-forward command with kubectl:
kubectl port-forward $POD_NAME 8080:8080
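The Kestra UI is then available at localhost:8080 in your browser.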
Scale Kestra with Kubernetes
By default, the chart deploys a single standalone Kestra service with one replica, running all components within a single pod. To increase scalability and flexibility, you can instead run each Kestra component in its own dedicated pod. To do so, add the following Helm chart values to a values.yaml file in your instance's configuration:
deployments:
webserver:
enabled: true
executor:
enabled: true
indexer:
enabled: true
scheduler:
enabled: true
worker:
enabled: true
standalone:
enabled: false
The above configuration enables a dedicated pod for each Kestra component (webserver, executor, indexer, etc.) and disables the combined standalone pod. This allows more granular resource allocation per component, depending on workflow demand and how heavily each component is utilized.
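Once the components run in separate pods (after the upgrade below), you can scale each one independently. As a sketch, assuming the chart names its Deployments after the release and component (verify the real names first), the worker could be scaled like this:
# list the Deployments created by the chart to find the exact names
kubectl get deployments --namespace default
# hypothetical Deployment name; scale the worker to three replicas
kubectl scale deployment my-kestra-worker --replicas=3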
To confirm these changes and re-deploy, save the values.yaml file and upgrade your Helm chart with the same commands as before:
helm upgrade my-kestra kestra/kestra -f values.yaml
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=kestra,app.kubernetes.io/instance=kestra,app.kubernetes.io/component=standalone" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 8080:8080
Now you are able to access Kestra at localhost:8080. Since Kestra 1.0, this chart doesn't include any external dependencies (like MinIO and Postgres), so make sure to configure your instance as needed.
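For example, to point the production kestra chart at an external PostgreSQL database, you can merge a configuration like the sketch below (host and credentials are placeholders for your own environment):
configurations:
  application:
    kestra:
      queue:
        type: postgres
      repository:
        type: postgres
    datasources:
      postgres:
        url: jdbc:postgresql://postgres.example.com:5432/kestra   # placeholder host
        username: kestra
        password: ChangeMe1234!
        driverClassName: org.postgresql.Driver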
The kestra-starter chart deploys the following related services (intended as a starter installation):
- A MinIO standalone server
- A PostgreSQL database
All external services in kestra-starter (MinIO and PostgreSQL) are provided as-is and require proper configuration for production use. These services need to be fine-tuned according to your specific requirements, including resource allocation, security settings, and high-availability configurations. We recommend the kestra chart for production installations.
Note: PostgreSQL is configured to use a low amount of resources by default but it can be reconfigured as needed.
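As a sketch of such tuning, assuming the starter chart pulls in the Bitnami PostgreSQL subchart (verify the subchart and the exact keys in the chart's Chart.yaml and values.yaml):
postgresql:
  primary:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        memory: 2Gi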
Secret environment variables
You can use secrets by adding them to your configurations in the values.yaml file.
configurations:
application:
kestra:
queue:
type: h2
repository:
type: h2
storage:
type: local
local:
basePath: "/app/storage"
datasources:
h2:
url: jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
username: kestra
password: ""
driverClassName: org.h2.Driver
configmaps:
- name: kestra-others
key: others.yml
secrets:
- name: kestra-basic-auth
key: basic-auth.yml
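The kestra-basic-auth secret referenced above must already exist in the namespace. One way to create it, assuming you keep the configuration in a local basic-auth.yml file, is with kubectl:
kubectl create secret generic kestra-basic-auth \
  --from-file=basic-auth.yml=./basic-auth.yml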
If you need to add extra environment variables from existing ConfigMaps or Secrets, you can use extraEnv and extraEnvFrom under the common entry.
common:
nodeSelector: {}
tolerations: []
affinity: {}
extraVolumeMounts: []
extraVolumes: []
extraEnv: []
# more...
The extraEnvFrom property enables you to access the variables as secrets in your Kestra instance. For example, you could add the following to your configuration for basic authentication access:
common:
extraEnvFrom:
- secretRef:
name: basic-auth-secret
The secretRef above then points to a Secret defined in extraManifests:
extraManifests:
- apiVersion: v1
kind: Secret
metadata:
name: basic-auth-secret
stringData:
basic-auth.yml: |
kestra:
server:
basicAuth:
enabled: true
username: [email protected]
password: ChangeMe1234!
There are multiple ways to configure and access secrets in a Kubernetes installation. Use whichever method fits your environment.
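For instance, instead of defining the Secret through extraManifests, you could save the same manifest to a file and apply it out of band:
# apply the Secret manifest directly, then confirm it exists
kubectl apply -f basic-auth-secret.yaml
kubectl get secret basic-auth-secret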
Configuration
There are two methods to adjust the Kestra configuration:
- Using a Kubernetes ConfigMap via the configurations Helm value
- Using a Kubernetes Secret via the secrets Helm value
Both must be valid YAML that is merged as the Kestra configuration file.
Below is an example that shows how to enable Kafka as the queue implementation and configure its server property using a secret:
configurations:
application:
kestra:
queue:
type: kafka
secrets:
- name: kafka-server
key: kafka.yml
The Kafka secret key then points towards a secret in extraManifests:
extraManifests:
- apiVersion: v1
kind: Secret
metadata:
name: kafka-server
stringData:
kafka.yml: |
kestra:
kafka:
client:
properties:
bootstrap.servers: "localhost:9092"
Docker in Docker (DinD) Worker sidecar
By default, Docker in Docker (DinD) is installed on the worker in the rootless version. This can be restricted in some environments due to security limitations.
Some solutions you may try:
- On Google Kubernetes Engine (GKE), use a node pool based on UBUNTU_CONTAINERD, which works well with Docker DinD, even rootless.
- Some Kubernetes clusters only support a root version of DinD; to make your Kestra deployment work, disable the rootless version using the following Helm chart values:
dind:
image:
repository: docker
pullPolicy: IfNotPresent
tag: dind
args:
- --log-level=fatal
Troubleshooting Docker in Docker (DinD)
If you encounter issues using Docker in Docker (e.g., with Script tasks using the io.kestra.plugin.scripts.runner.docker.Docker task runner), start troubleshooting by attaching a terminal: docker run -it --privileged docker:dind sh. Next, use docker logs container_ID to get the container logs. Also, try docker inspect container_ID to get more information about your Docker container; the output displays details about the container, its environment, network settings, etc. This information can help you identify what might be wrong.
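In practice, the sequence looks like this (container_ID stands for the ID reported by docker ps):
# start a privileged DinD container with an interactive shell
docker run -it --privileged docker:dind sh
# in another terminal: find the container ID, then check logs and details
docker ps
docker logs container_ID
docker inspect container_ID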
Docker in Docker using Helm charts
On some Kubernetes deployments, using DinD with our default Helm charts can lead to errors like below:
Device "ip_tables" does not exist.
ip_tables 24576 4 iptable_raw,iptable_mangle,iptable_nat,iptable_filter
modprobe: can't change directory to '/lib/modules': No such file or directory
error: attempting to run rootless dockerd but need 'kernel.unprivileged_userns_clone' (/proc/sys/kernel/unprivileged_userns_clone) set to 1
The example below shows the dind configuration properties and how to use the insecure mode for DinD:
dind:
# -- Enable Docker-in-Docker (dind) sidecar.
# @section -- kestra dind
enabled: true
# -- Dind mode (rootless or insecure).
# @section -- kestra dind
mode: 'rootless'
base:
# -- Rootless dind configuration.
# @section -- kestra dind rootless
rootless:
image:
repository: docker
pullPolicy: IfNotPresent
tag: dind-rootless
securityContext:
privileged: true
runAsUser: 1000
runAsGroup: 1000
args:
- --log-level=fatal
- --group=1000
# -- Insecure dind configuration (privileged).
# @section -- kestra dind insecure
insecure:
image:
repository: docker
pullPolicy: IfNotPresent
        tag: dind
securityContext:
privileged: true
runAsUser: 0
runAsGroup: 0
allowPrivilegeEscalation: true
capabilities:
add:
- SYS_ADMIN
- NET_ADMIN
- DAC_OVERRIDE
- SETUID
- SETGID
args:
- '--log-level=fatal'
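To switch between the two modes, set dind.mode accordingly and roll out the change, for example:
helm upgrade my-kestra kestra/kestra -f values.yaml --set dind.mode=insecure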
Disable Docker in Docker and use Kubernetes task runner
To avoid using root to spin up containers via DinD, disable DinD by setting the following Helm chart values:
dind:
enabled: false
Use the Kubernetes task runner as the default method for running script tasks:
pluginDefaults:
- type: io.kestra.plugin.scripts
forced: true
values:
taskRunner:
type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
# ... your Kubernetes runner configuration
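Note that the task runner type shown here (io.kestra.plugin.ee.kubernetes.runner.Kubernetes) is part of the Enterprise Edition. As before, apply the updated values with a Helm upgrade:
helm upgrade my-kestra kestra/kestra -f values.yaml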