Trigger a flow in response to a state change in one or more other flows.
You can trigger a flow as soon as another flow ends. This allows you to add implicit dependencies between multiple flows, which can often be managed by different teams.
A flow trigger must have `preconditions`, which filter on other flow executions. It can also have standard trigger `conditions`.
type: "io.kestra.plugin.core.trigger.Flow"
- Trigger the `transform` flow after the `extract` flow finishes successfully. The `extract` flow generates a `date` output that is passed to the `transform` flow as an input.
```yaml
id: extract
namespace: company.team

tasks:
  - id: final_date
    type: io.kestra.plugin.core.debug.Return
    format: "{{ execution.startDate | dateAdd(-2, 'DAYS') | date('yyyy-MM-dd') }}"

outputs:
  - id: date
    type: STRING
    value: "{{ outputs.final_date.value }}"
```
The `transform` flow is triggered after the `extract` flow finishes successfully.
```yaml
id: transform
namespace: company.team

inputs:
  - id: date
    type: STRING
    defaults: "2025-01-01"

variables:
  result: |
    Ingestion done in {{ trigger.executionId }}.
    Now transforming data up to {{ inputs.date }}

tasks:
  - id: run_transform
    type: io.kestra.plugin.core.debug.Return
    format: "{{ render(vars.result) }}"

  - id: log
    type: io.kestra.plugin.core.log.Log
    message: "{{ render(vars.result) }}"

triggers:
  - id: run_after_extract
    type: io.kestra.plugin.core.trigger.Flow
    inputs:
      date: "{{ trigger.outputs.date }}"
    preconditions:
      id: flows
      flows:
        - namespace: company.team
          flowId: extract
          states: [SUCCESS]
```
- Trigger the `silver_layer` flow once the `bronze_layer` flow finishes successfully by 9 AM.
```yaml
id: bronze_layer
namespace: company.team

tasks:
  - id: raw_data
    type: io.kestra.plugin.core.log.Log
    message: Ingesting raw data
```
```yaml
id: silver_layer
namespace: company.team

tasks:
  - id: transform_data
    type: io.kestra.plugin.core.log.Log
    message: deduplication, cleaning, and minor aggregations

triggers:
  - id: flow_trigger
    type: io.kestra.plugin.core.trigger.Flow
    preconditions:
      id: bronze_layer
      timeWindow:
        type: DAILY_TIME_DEADLINE
        deadline: "09:00:00"
      flows:
        - namespace: company.team
          flowId: bronze_layer
          states: [SUCCESS]
```
- Create a System Flow to send a Slack alert on any failure or warning state within the `company` namespace. This example uses the Slack webhook secret to notify the `#general` channel about the failed flow.
```yaml
id: alert
namespace: system

tasks:
  - id: send_alert
    type: io.kestra.plugin.notifications.slack.SlackExecution
    url: "{{ secret('SLACK_WEBHOOK') }}" # format: https://hooks.slack.com/services/xzy/xyz/xyz
    channel: "#general"
    executionId: "{{ trigger.executionId }}"

triggers:
  - id: alert_on_failure
    type: io.kestra.plugin.core.trigger.Flow
    states:
      - FAILED
      - WARNING
    preconditions:
      id: company_namespace
      where:
        - id: company
          filters:
            - field: NAMESPACE
              type: STARTS_WITH
              value: company
```
Pass an upstream flow's outputs to the inputs of the current flow.
The inputs allow you to pass a data object or a file to the downstream flow, as long as those outputs are defined at the flow level in the upstream flow.

::alert{type="warning"}
Make sure that the inputs and task outputs defined in this Flow trigger match the outputs of the upstream flow. Otherwise, the downstream flow execution will not be created. If that happens, go to the Logs tab on the Flow page to understand the error.
::
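As a sketch of the matching required, the trigger below maps an upstream flow-level `date` output into a `date` input of the downstream flow (the flow id and output name are illustrative):

```yaml
triggers:
  - id: on_upstream_success
    type: io.kestra.plugin.core.trigger.Flow
    inputs:
      # must match an output defined at the flow level of the upstream flow
      date: "{{ trigger.outputs.date }}"
    preconditions:
      id: upstream
      flows:
        - namespace: company.team
          flowId: extract # illustrative upstream flow id
          states: [SUCCESS]
```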
Preconditions on upstream flow executions
Express preconditions that must be met, within a given time window, for the flow trigger to be evaluated.
List of execution states that will be evaluated by the trigger.
By default, only executions in a terminal state will be evaluated.
Any `ExecutionStatus`-type condition will be evaluated after the list of `states`. Note that a Flow trigger cannot react to the `CREATED` state because the Flow trigger reacts to state transitions. The `CREATED` state is the initial state of an execution and does not represent a state transition.
::alert{type="info"}
The trigger will be evaluated for each state change of matching executions. If a flow has two `Pause` tasks, the execution will transition from PAUSED to a RUNNING state twice, once for each Pause task. In this case, a Flow trigger listening to the `PAUSED` state will be evaluated twice.
::
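For instance, a trigger that reacts to each PAUSED transition of a monitored flow could be sketched as follows (the monitored flow id is illustrative):

```yaml
triggers:
  - id: on_pause
    type: io.kestra.plugin.core.trigger.Flow
    # evaluated on every transition into PAUSED,
    # so twice if the monitored flow pauses twice
    states: [PAUSED]
    preconditions:
      id: monitored
      flows:
        - namespace: company.team
          flowId: long_running_flow # illustrative
```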
List of execution states after which a trigger should be stopped (a.k.a. disabled).
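Assuming this is the `stopAfter` property available on Kestra triggers, a trigger that disables itself after a failed execution could be sketched as:

```yaml
triggers:
  - id: run_after_extract
    type: io.kestra.plugin.core.trigger.Flow
    stopAfter:
      - FAILED # disable this trigger once a triggered execution fails
    preconditions:
      id: flows
      flows:
        - namespace: company.team
          flowId: extract
          states: [SUCCESS]
```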
The execution ID that triggered the current flow.
The flow ID whose execution triggered the current flow.
The flow revision that triggered the current flow.
The namespace of the flow that triggered the current flow.
The execution state.
SLA daily deadline.
Use it only for the `DAILY_TIME_DEADLINE` SLA.
SLA daily end time.
Use it only for the `DAILY_TIME_WINDOW` SLA.
SLA daily start time.
Use it only for the `DAILY_TIME_WINDOW` SLA.
The duration of the window.
Use it only for the `DURATION_WINDOW` or `SLIDING_WINDOW` SLA.
See ISO 8601 durations for more information on available duration values.
The start of the window is always based on midnight, unless you set the `windowAdvance` parameter. For example, if you have a 10-minute window (`PT10M`), the first window will be 00:00 to 00:10, and a new window will start every 10 minutes.
The window advance duration.
Use it only for the `DURATION_WINDOW` SLA.
Allows you to specify the start time of the window.
For example, if you want a window of 6 hours (`window: PT6H`), by default the check will be done between 00:00 and 06:00, 06:00 and 12:00, 12:00 and 18:00, and 18:00 and 00:00.
If you want to check the window between 03:00 and 09:00, 09:00 and 15:00, 15:00 and 21:00, and 21:00 and 03:00, you have to shift the window by 3 hours by setting `windowAdvance: PT3H`.
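Putting `window` and `windowAdvance` together, the shifted 6-hour window from the example above could be declared like this (the upstream flow id is illustrative):

```yaml
preconditions:
  id: shifted_window
  timeWindow:
    type: DURATION_WINDOW
    window: PT6H        # 6-hour evaluation window
    windowAdvance: PT3H # shift window starts to 03:00, 09:00, 15:00, 21:00
  flows:
    - namespace: company.team
      flowId: upstream_flow # illustrative
      states: [SUCCESS]
```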
The field that will be filtered.
The single value to filter the `field` on.
Must be set according to its `type`.
The list of values to filter the `field` on.
Must be set for the following types: `IN`, `NOT_IN`.
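As a sketch, a `where` precondition using the list-valued `values` form with an `IN` filter might look like this (the namespace values are illustrative):

```yaml
preconditions:
  id: selected_namespaces
  where:
    - id: prod_flows
      filters:
        - field: NAMESPACE
          type: IN
          values: # list form, required for IN / NOT_IN
            - company.team
            - company.analytics
```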
The namespace of the flow.
The flow id.
A key/value map of labels.
The execution states.
A unique id for the preconditions.
Whether to reset the evaluation results of preconditions after a first successful evaluation within the given time window.
By default, after a successful evaluation of the set of preconditions, the evaluation result is reset. This means the same set of conditions needs to be successfully evaluated again within the same time window to trigger a new execution.
In this setup, to create multiple executions, the same set of conditions must be evaluated to `true` multiple times within the defined window.
You can disable this by setting this property to `false`, so that within the same window, each time one of the conditions is satisfied again after a successful evaluation, it will trigger a new execution.
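Assuming this property is named `resetOnSuccess` on the preconditions block (the name is not confirmed by this page), disabling the reset could be sketched as:

```yaml
preconditions:
  id: flows
  resetOnSuccess: false # assumed property name; each re-satisfied
                        # condition triggers a new execution in the window
  flows:
    - namespace: company.team
      flowId: extract
      states: [SUCCESS]
```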
Define the time window for evaluating preconditions.
You can set the `type` of `timeWindow` to one of the following values:
- `DURATION_WINDOW`: this is the default `type`. It uses a start time (`windowAdvance`) and end time (`window`) that advance to the next interval whenever the evaluation time reaches the end time, based on the defined duration `window`. For example, with a 1-day window (the default option: `window: PT1D`), the preconditions are evaluated during a 24-hour period starting at midnight (i.e., at "00:00:00+00:00") each day. If you set `windowAdvance: PT6H`, the window will start at 6 AM each day. If you set `windowAdvance: PT6H` and also override the `window` property to `PT6H`, the window will start at 6 AM and last for 6 hours. In this configuration, the preconditions will be evaluated during the following intervals: 06:00 to 12:00, 12:00 to 18:00, 18:00 to 00:00, and 00:00 to 06:00.
- `SLIDING_WINDOW`: this option evaluates preconditions over a fixed time `window`, but always goes backward from the current time. For example, a sliding window of 1 hour (`window: PT1H`) evaluates executions within the past hour (from one hour ago up to now). It uses a default window of 1 day.
- `DAILY_TIME_DEADLINE`: this option declares that preconditions should be met "before a specific time in a day." Using the string property `deadline`, you can configure a daily cutoff for evaluating preconditions. For example, `deadline: "09:00:00"` specifies that preconditions must be met from midnight until 9 AM UTC each day; otherwise, the flow will not be triggered.
- `DAILY_TIME_WINDOW`: this option declares that preconditions should be met "within a specific time range in a day." For example, a window from `startTime: "06:00:00"` to `endTime: "09:00:00"` evaluates executions within that interval each day. This option is particularly useful for defining freshness conditions declaratively when building data pipelines that span multiple teams and namespaces. Normally, a failure in any task in your flow will block the entire pipeline, but with this decoupled flow trigger alternative, you can proceed as soon as the data is successfully refreshed within the specified time window.