Fundamentals
Start by building a simple Hello World flow.
To install Kestra, follow the Quickstart Guide or check the detailed Installation Guide.
Flows
Flows are defined in a declarative YAML syntax to keep the orchestration code portable and language-agnostic.
Each flow consists of three required components: id, namespace, and tasks.
- id is the unique identifier of the flow.
- namespace separates projects, teams, and environments.
- tasks is the list of tasks to be executed in order.
Here are those three components in a YAML file:
id: getting_started
namespace: company.team

tasks:
  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: Hello World!
The id of a flow must be unique within its namespace. For example:
- ✅ You can have a flow named getting_started in the namespace company.team1 and another flow named getting_started in company.team2.
- ❌ You cannot have two flows named getting_started in the namespace company.team at the same time.
The combination of id and namespace is the unique identifier for a flow.
Namespaces
Namespaces are used to group flows and provide structure. Keep in mind that a flow’s allocation to a namespace is immutable. Once a flow is created, you cannot change its namespace. If you need to change the namespace of a flow, create a new flow within the desired namespace and delete the old flow.
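For example, namespaces can be nested with dots to mirror your organization's structure. The sketch below uses an illustrative namespace name:

id: sales_report
namespace: company.team.analytics # illustrative nested namespace

tasks:
  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: Hello from the analytics namespace!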
Labels
To add another layer of organization, use labels to group flows with key–value pairs. In short, labels are customizable tags to simplify monitoring and filtering of flows and executions.
Descriptions
You can optionally add a description property to document your flow's purpose or other useful information. The description is a string that supports markdown syntax. This markdown description is rendered and displayed in the UI.
Flows are not the only components that support descriptions. You can also add a description property to tasks and triggers to document every part of your workflow.
Here is the same flow as before, but with labels and descriptions:
id: getting_started
namespace: company.team

description: |
  # Getting Started
  Let's `write` some **markdown** - [first flow](https://t.ly/Vemr0) 🚀

labels:
  owner: rick.astley
  project: never-gonna-give-you-up

tasks:
  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: Hello World!
    description: |
      ## About this task
      This task prints "Hello World!" to the logs.
Learn more about flows in the Flows section.
Tasks
Tasks are atomic actions in your flows. You can design your tasks to be small and granular, such as fetching data from a REST API or running a self-contained Python script. However, tasks can also represent large and complex processes, like triggering containerized processes or long-running batch jobs (e.g., using dbt, Spark, AWS Batch, Azure Batch, etc.) and waiting for their completion.
The order of task execution
Tasks are defined as a list. By default, all tasks in the list will be executed sequentially — the second task will start as soon as the first one finishes successfully.
Kestra provides additional customization to run tasks in parallel, iterate (sequentially or in parallel) over a list of items, or allow specific tasks to fail without stopping the flow. These kinds of actions are called Flowable tasks because they define the flow logic.
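For example, the Parallel Flowable task from io.kestra.plugin.core.flow runs its child tasks concurrently. The flow below is a minimal sketch; the task ids and log messages are illustrative:

id: parallel_example
namespace: company.team

tasks:
  - id: parallel
    type: io.kestra.plugin.core.flow.Parallel
    tasks:
      - id: task_a
        type: io.kestra.plugin.core.log.Log
        message: Running in parallel (A)
      - id: task_b
        type: io.kestra.plugin.core.log.Log
        message: Running in parallel (B)

  # runs only after both parallel tasks have finished
  - id: after_parallel
    type: io.kestra.plugin.core.log.Log
    message: All parallel tasks are done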
A task in Kestra must have an id and a type. Other properties depend on the task type. You can think of a task as a step in a flow that executes a specific action, such as running a Python or Node.js script in a Docker container or loading data from a database.
tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Script
    containerImage: python:slim
    script: |
      print("Hello World!")
Autocompletion
Kestra supports hundreds of tasks integrating with various external systems. Use the shortcut CTRL + SPACE on Windows/Linux or fn + control + SPACE on macOS to trigger autocompletion to list available tasks or properties of a given task.
If you want to comment out part of your code, use CTRL + K + C on Windows/Linux or ⌘ + fn + K + C on macOS. To uncomment, use CTRL + K + U on Windows/Linux or ⌘ + fn + K + U on macOS. All available keyboard shortcuts are listed in the code editor context menu.
Supported task types
Here are the supported task types.
Core
Core tasks from the io.kestra.plugin.core.flow category control flow logic. Use them to run tasks in parallel or sequentially, branch conditionally, iterate over items, pause, or allow specific tasks to fail without stopping the execution.
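As a sketch of conditional branching, the If task runs one set of tasks or another depending on a condition. The input name and messages below are illustrative:

id: conditional_example
namespace: company.team

inputs:
  - id: run_cleanup # illustrative input name
    type: BOOLEAN
    defaults: false

tasks:
  - id: check
    type: io.kestra.plugin.core.flow.If
    condition: "{{ inputs.run_cleanup }}"
    then:
      - id: cleanup
        type: io.kestra.plugin.core.log.Log
        message: Running cleanup
    else:
      - id: skip
        type: io.kestra.plugin.core.log.Log
        message: Skipping cleanup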
Scripts
Script tasks run scripts in Docker containers or local processes. You can run Python, Node.js, R, Julia, or other scripts, or execute commands in shell or PowerShell. See the Script tasks page for details.
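For instance, a shell command can run with the Commands task type. This is a minimal sketch; the container image and command are illustrative:

tasks:
  - id: shell
    type: io.kestra.plugin.scripts.shell.Commands
    containerImage: ubuntu:latest
    commands:
      - echo "Hello from a shell task"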
Internal storage
Tasks from the io.kestra.plugin.core.storage category, along with Outputs, interact with internal storage. Kestra uses internal storage to pass data between tasks. You can think of internal storage as a private S3 bucket; in fact, it can be configured to use your own S3 bucket.
This storage layer helps avoid connector sprawl. For example, the PostgreSQL plugin can extract data and load it into internal storage. Other tasks can then load that data into Snowflake, BigQuery, or Redshift—or process it with another plugin—without direct point-to-point connections.
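For example, the Download task stores the fetched file in internal storage, and a downstream task can reference that file through the task's outputs. The sketch below reuses the dummyjson API from the tutorial further down:

id: internal_storage_example
namespace: company.team

tasks:
  - id: download
    type: io.kestra.plugin.core.http.Download
    uri: https://dummyjson.com/products

  # the downloaded file lives in internal storage; reference it via outputs
  - id: log_uri
    type: io.kestra.plugin.core.log.Log
    message: "Stored in internal storage at {{ outputs.download.uri }}"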
KV store
Internal storage is mainly used to pass data within a single flow execution. To pass data between executions, use the KV Store. The Set, Get, and Delete tasks from io.kestra.plugin.core.kv persist data between executions (even across namespaces). For example, with dbt, you can persist manifest.json between runs to implement a slim CI pattern.
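For example, one task can store a value and a later task (or a later execution) can read it back. This is a minimal sketch; the key name is illustrative:

id: kv_example
namespace: company.team

tasks:
  - id: set_state
    type: io.kestra.plugin.core.kv.Set
    key: last_processed_date # illustrative key name
    value: "{{ execution.startDate }}"

  - id: get_state
    type: io.kestra.plugin.core.kv.Get
    key: last_processed_date

  - id: log_state
    type: io.kestra.plugin.core.log.Log
    message: "Last processed date was {{ outputs.get_state.value }}"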
Plugins
Apart from core tasks, the plugins library provides integrations for data ingestion, data transformation, databases, object stores, message queues, and more. You can also create your own plugins to integrate with any system or language.
Create and run your first flow
Now, let's create and run your first flow. On the left side of the screen, click on the Flows menu. Then, click on the Create button.

Paste the following code into the Flow editor:
id: getting_started
namespace: company.team

tasks:
  - id: api
    type: io.kestra.plugin.core.http.Request
    uri: https://dummyjson.com/products
Then, hit the Save button.

This flow has a single task that fetches data from the dummyjson API. Run it to see the output.

After execution, you’ll be directed to the Gantt view to see the stages of your flow’s progress. In this simple example, we see the API request successfully execute. We'll continue adding more to our flow in the coming sections.
