Deployment Architecture
Examples of deployment architectures, depending on your needs.
Kestra is a Java application that is provided as an executable. You have many deployment options:
- Docker (see the sketch after this list)
- Kubernetes
- Manual deployment
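For the Docker option, a minimal Docker Compose file is enough to try Kestra. The sketch below follows the usual quickstart pattern; the image tag, command, port, and volume are illustrative and should be adapted to your setup.

```yaml
# Minimal sketch: the all-in-one Kestra server in a single container.
# "server local" starts one process backed by an embedded H2 database
# and local file storage; all values here are illustrative.
services:
  kestra:
    image: kestra/kestra:latest
    command: server local
    ports:
      - "8080:8080"                # web UI and API
    volumes:
      - kestra-data:/app/storage   # persist internal storage across restarts
volumes:
  kestra-data:
```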
At its heart, Kestra has a plugin system allowing you to choose the dependency type that fits your needs.
You can find three example deployment architectures below.
Small-sized deployment
For small-sized deployments, you can use the Kestra standalone server: an all-in-one option that runs every Kestra server component in a single process. This deployment architecture has no scaling capability.
In this case, a database is the only dependency, which allows running Kestra with a minimal stack to maintain. For now, three databases are available (a configuration sketch follows the list):
- PostgreSQL
- MySQL
- H2
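A minimal configuration sketch for this architecture, assuming PostgreSQL as the backing database; the connection values are placeholders, and switching to MySQL or H2 mainly means changing the JDBC URL and the repository/queue type.

```yaml
# Sketch: standalone Kestra with a single PostgreSQL database as its only
# dependency (all connection values are placeholders).
datasources:
  postgres:
    url: jdbc:postgresql://localhost:5432/kestra
    driverClassName: org.postgresql.Driver
    username: kestra
    password: changeit
kestra:
  repository:
    type: postgres           # flows, executions, and logs live in the database
  queue:
    type: postgres           # the database also backs the internal queue
  storage:
    type: local
    local:
      basePath: /app/storage # local file storage is fine on a single host
```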
Medium-sized deployment
For medium-sized deployments where high availability is not a strict requirement, you can still use a database (PostgreSQL or MySQL) as the only dependency, keeping the stack minimal to maintain. Two databases are available for this kind of architecture, as H2 is not a good fit when running distributed components:
- PostgreSQL
- MySQL
All server components will communicate through the database.
In this deployment mode, unless all components run on the same host, you must use a distributed storage implementation like Google Cloud Storage, AWS S3, or Azure Blob Storage.
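As a sketch of this architecture, the server components can run as separate processes or containers that all share the same database configuration shown earlier, plus an object storage backend; the service layout and commands below are illustrative.

```yaml
# Sketch: database-backed deployment with the server components split into
# separate containers. Every service shares the same Kestra configuration:
# PostgreSQL for the repository and queue, plus object storage (for example
# kestra.storage.type: s3) so files written on one host are readable on another.
services:
  webserver:
    image: kestra/kestra:latest
    command: server webserver   # API and UI
    ports:
      - "8080:8080"
  scheduler:
    image: kestra/kestra:latest
    command: server scheduler   # evaluates triggers and schedules
  executor:
    image: kestra/kestra:latest
    command: server executor    # orchestrates executions
  worker:
    image: kestra/kestra:latest
    command: server worker      # runs the tasks
```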
High-availability deployment
To support higher throughput, and full horizontal and vertical scaling of the Kestra cluster, we can replace the database with Kafka and Elasticsearch. In this case, all the server components can be scaled without any single point of failure.
Kafka and Elasticsearch are available only in the Enterprise Edition.
In this deployment mode, unless all components run on the same host, you must use a distributed storage implementation like Google Cloud Storage, AWS S3, or Azure Blob Storage.
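A configuration sketch for this architecture, assuming the Enterprise Edition; the exact keys can vary by version, and the hosts, credentials, and bucket name are placeholders.

```yaml
# Sketch: high-availability setup where Kafka backs the queue and
# Elasticsearch backs the repository (placeholders throughout).
kestra:
  queue:
    type: kafka               # server components communicate through Kafka
  repository:
    type: elasticsearch       # UI/API data is indexed in Elasticsearch
  kafka:
    client:
      properties:
        bootstrap.servers: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  elasticsearch:
    client:
      http-hosts:
        - "http://elasticsearch-1:9200"
        - "http://elasticsearch-2:9200"
  storage:
    type: gcs                 # any distributed storage implementation works
    gcs:
      bucket: my-kestra-bucket
```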
Kafka
Kafka is Kestra's primary dependency in high-availability mode: every main server component in the deployment must be able to reach a running Kafka cluster. Kafka is what allows Kestra to be a highly scalable solution.
Kafka Executor
With Kafka, the Executor is a heavy Kafka Streams application. The Executor processes all events from Kafka in the right order, keeps an internal state of each execution, and merges task run results from the Workers. It also detects dead Workers and resubmits their task runs.
As the Executor is a Kafka Streams application, it can be scaled as needed (within the limit of the partition count on Kafka). Still, as no heavy computation is done in the Executor, this server component requires few resources (unless you have a very high rate of executions).
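For example, when running on Kubernetes, scaling the Executor comes down to raising the replica count of its Deployment. The manifest below is a generic sketch (names and image are illustrative), and the same approach applies to the Worker described next; replicas beyond the partition count of the underlying topics will simply sit idle.

```yaml
# Sketch: scaling the Executor on Kubernetes (names and image are illustrative).
# Extra replicas beyond the Kafka partition count will not receive work.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kestra-executor
spec:
  replicas: 3                 # bounded in practice by the topic partition count
  selector:
    matchLabels:
      app: kestra-executor
  template:
    metadata:
      labels:
        app: kestra-executor
    spec:
      containers:
        - name: executor
          image: kestra/kestra:latest
          args: ["server", "executor"]
```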
Kafka Worker
With Kafka, the Worker is a Kafka consumer that processes any task run submitted to it. Workers receive all tasks and dispatch them internally in their thread pool.
Workers can be scaled as needed (within the limit of the partition count on Kafka), with many instances across multiple servers, each with its own thread pool.
With Kafka, if a Worker dies, the Executor detects it and resubmits its in-flight task runs to another Worker.
Elasticsearch
Elasticsearch is Kestra's User Interface database in high availability mode, allowing the display, search, and aggregation of all Kestra's data (Flows, Executions, etc.). Elasticsearch is only used by the Webserver (API and UI).