Naming Conventions
Common naming conventions to keep your flows and tasks well-organized and consistent in Kestra.
Namespace naming convention
We recommend following a `company.team` structure for namespaces to maintain a clean, scalable, and consistent hierarchy across your workflows.
This approach helps with:
- Centralized governance for credentials and configurations
- Easier sharing of variables, plugin defaults, and secrets across teams
- Simplified Git synchronization
Why use the `company.team` structure
By defining a root namespace named after your company, you can centralize management of plugin defaults, variables, and secrets.
These configurations can then be inherited by all namespaces under that root.
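For example, a plugin default attached to the `company` root namespace could pin the container image used by every Python script task underneath it. The snippet below is a minimal sketch: it assumes namespace-level plugin defaults (an Enterprise Edition feature) accept the same list format as a flow's `pluginDefaults` property, and the image tag is an illustrative value.

```yaml
# Hypothetical plugin default defined on the `company` root namespace:
# every Python Script task in child namespaces inherits this container image
# unless a flow explicitly overrides it.
- type: io.kestra.plugin.scripts.python.Script
  values:
    containerImage: python:3.11-slim
```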
This structure also simplifies Git synchronization.
You can maintain a single synchronization flow that updates all namespaces under your company root.
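As an illustration, a single sync flow living in the `company` root namespace could pull every flow from Git and distribute it across the child namespaces. The sketch below assumes the `io.kestra.plugin.git.SyncFlows` task; the repository URL, credentials, secret name, Git directory, and schedule are placeholders.

```yaml
id: sync_from_git
namespace: company

tasks:
  - id: sync_flows
    type: io.kestra.plugin.git.SyncFlows
    url: https://github.com/your-org/your-kestra-repo   # placeholder repository
    branch: main
    username: your-git-username                         # placeholder credentials
    password: "{{ secret('GITHUB_ACCESS_TOKEN') }}"     # assumes a secret with this name exists
    gitDirectory: flows                                  # placeholder directory in the repository
    targetNamespace: company
    includeChildNamespaces: true  # sync every namespace under the company root

triggers:
  - id: hourly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 * * * *"
```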
The next level, named after your team (e.g., `company.team`), allows for shared governance and visibility at the team level.
From there, you can further divide namespaces by project, system, or other logical hierarchies.
When synced with Git, this nested structure maps directly to nested directories in your repository.
Example namespace structure
```
mycompany
├── mycompany.marketing
│   ├── mycompany.marketing.projectA
│   └── mycompany.marketing.projectB
└── mycompany.sales
    ├── mycompany.sales.projectC
    └── mycompany.sales.projectD
```
Should you use environment-specific namespaces?
We generally recommend avoiding environment-specific namespaces (e.g., `dev`, `staging`, `prod`) because they can introduce several issues:
- Shared risk: Development workflows can unintentionally impact production.
- Configuration drift: Duplicating configurations across environments can lead to inconsistencies.
Instead, run separate Kestra instances (or tenants in Enterprise Edition) for development and production.
Summary
Using a `company.team` namespace structure creates a clear, maintainable hierarchy that mirrors your organization’s structure and simplifies Git synchronization.
To separate environments reliably, use distinct Kestra instances or tenants rather than environment-based namespaces.
ID naming convention
We recommend using a consistent naming pattern across all identifiers in Kestra, including:
- Flows
- Tasks
- Inputs
- Outputs
- Triggers
Valid characters and subscript notation
Kestra does not enforce a specific naming style, but IDs must match the regex pattern `^[a-zA-Z0-9][a-zA-Z0-9_-]*`.
This means:
- Only letters, numbers, underscores (`_`), and hyphens (`-`) are allowed.
- If you use hyphens (e.g., `kebab-case`), reference IDs using subscript notation, such as `{{ outputs.task_id["your-custom-value"].attribute }}` (see the sketch below).
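For instance, a task with a hyphenated ID cannot be referenced with plain dot notation, so the expression has to fall back to subscript notation. The following flow is a minimal sketch; the endpoint URL and the logged output attribute are illustrative.

```yaml
id: subscript-notation-example
namespace: company.team

tasks:
  - id: fetch-products
    type: io.kestra.plugin.core.http.Request
    uri: https://dummyjson.com/products

  - id: log_status
    type: io.kestra.plugin.core.log.Log
    # outputs.fetch-products would not resolve correctly because of the hyphen,
    # so the task ID is referenced with subscript notation instead:
    message: "HTTP status: {{ outputs['fetch-products'].code }}"
```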
We recommend using `snake_case` or `camelCase` instead of `kebab-case`, as they avoid the need for subscript notation and improve readability.
Snake case
Snake case is popular among Python developers, especially in data and AI workflows.
Here’s an example using `snake_case` for all IDs:
```yaml
id: api_python_sql
namespace: company.marketing.attribution

inputs:
  - id: api_endpoint
    type: URL
    defaults: https://dummyjson.com/products

tasks:
  - id: fetch_products
    type: io.kestra.plugin.core.http.Request
    uri: "{{ inputs.api_endpoint }}"

  - id: transform_in_python
    type: io.kestra.plugin.scripts.python.Script
    containerImage: python:slim
    beforeCommands:
      - pip install polars
    outputFiles:
      - "products.csv"
    script: |
      import polars as pl
      data = {{ outputs.fetch_products.body | jq('.products') | first }}
      df = pl.from_dicts(data)
      df.glimpse()
      df.select(["brand", "price"]).write_csv("products.csv")

  - id: sql_query
    type: io.kestra.plugin.jdbc.duckdb.Query
    inputFiles:
      in.csv: "{{ outputs.transform_in_python.outputFiles['products.csv'] }}"
    sql: |
      SELECT brand, round(avg(price), 2) as avg_price
      FROM read_csv_auto('{{ workingDir }}/in.csv', header=True)
      GROUP BY brand
      ORDER BY avg_price DESC;
    fetchType: STORE

outputs:
  - id: final_result
    value: "{{ outputs.sql_query.uri }}"

triggers:
  - id: daily_at_9am
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"
```
Camel case
Camel case is common in Java and JavaScript ecosystems.
Here’s the same example using `camelCase`:
```yaml
id: apiPythonSql
namespace: company.marketing.attribution

inputs:
  - id: apiEndpoint
    type: URL
    defaults: https://dummyjson.com/products

tasks:
  - id: fetchProducts
    type: io.kestra.plugin.core.http.Request
    uri: "{{ inputs.apiEndpoint }}"

  - id: transformInPython
    type: io.kestra.plugin.scripts.python.Script
    containerImage: python:slim
    beforeCommands:
      - pip install polars
    outputFiles:
      - "products.csv"
    script: |
      import polars as pl
      data = {{ outputs.fetchProducts.body | jq('.products') | first }}
      df = pl.from_dicts(data)
      df.glimpse()
      df.select(["brand", "price"]).write_csv("products.csv")

  - id: sqlQuery
    type: io.kestra.plugin.jdbc.duckdb.Query
    inputFiles:
      in.csv: "{{ outputs.transformInPython.outputFiles['products.csv'] }}"
    sql: |
      SELECT brand, round(avg(price), 2) as avgPrice
      FROM read_csv_auto('{{ workingDir }}/in.csv', header=True)
      GROUP BY brand
      ORDER BY avgPrice DESC;
    fetchType: STORE

outputs:
  - id: finalResult
    value: "{{ outputs.sqlQuery.uri }}"

triggers:
  - id: dailyAt9am
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"
```
Both conventions are valid — choose the one that best matches your team’s coding standards.