# React Instantly to Kafka, SQS, and MQTT Events
Trigger workflows instantly as events occur, with millisecond latency.
Triggers in Kestra can listen to external events and start a workflow execution when the event occurs. Most triggers in Kestra poll external systems at regular intervals (e.g., every second) to detect new events. This is effective for batch-style data processing. However, business-critical workflows often demand immediate reactions — within milliseconds. Realtime Triggers address this need by listening directly for events and starting workflows as soon as they occur.
## What are Realtime Triggers
Realtime Triggers continuously listen for events and launch a new workflow execution the moment an event occurs, such as:
- a message is published to a Kafka topic
- a message is published to a Pulsar topic
- a message is published to an AMQP queue
- a message is published to an MQTT queue
- a message is published to an AWS SQS queue
- a message is published to Google Pub/Sub
- a message is published to Azure Event Hubs
- a message is published to a NATS subject
- an item is added to a Redis list
- a row is added, modified, or deleted in Postgres, MySQL, or SQL Server.
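As an illustration of the first case, a flow reacting to Kafka messages could look like the following sketch. The trigger type `io.kestra.plugin.kafka.RealtimeTrigger` is part of the Kafka plugin; the property names (`topic`, `groupId`, `properties`) and the broker address are illustrative, so check the plugin documentation for the exact schema in your version:

```yaml
id: kafka_realtime
namespace: company.team

tasks:
  - id: log
    type: io.kestra.plugin.core.log.Log
    message: "{{ trigger }}"

triggers:
  - id: realtime_trigger
    type: io.kestra.plugin.kafka.RealtimeTrigger
    topic: orders                  # illustrative topic name
    groupId: kestra-consumer       # illustrative consumer group
    properties:
      bootstrap.servers: localhost:9092
```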
## How Realtime Triggers work
Once a Realtime Trigger is added to a workflow, Kestra spins up a dedicated listener thread that remains active. As soon as a new event arrives, the listener immediately starts a workflow execution to process it.
## Use cases
Realtime Triggers are ideal for orchestrating business-critical operations and event-driven microservices. Typical scenarios include:
- Fraud or anomaly detection
- Order and payment processing
- Real-time predictions or recommendations
- Stock price or market event reactions
- Shipping and delivery updates
- Any workflow requiring instant reaction to external events
In addition, Realtime Triggers can be used for data orchestration, especially for Change Data Capture use cases. The Debezium Postgres RealtimeTrigger plugin can listen to changes in a database table and start a workflow execution as soon as a new row is inserted, updated, or deleted.
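A Change Data Capture flow built on the Debezium Postgres plugin might look like the sketch below. The trigger type `io.kestra.plugin.debezium.postgres.RealtimeTrigger` exists in the plugin, but the connection properties shown here are illustrative placeholders; consult the plugin documentation for the full list of required options:

```yaml
id: postgres_cdc
namespace: company.team

tasks:
  - id: log
    type: io.kestra.plugin.core.log.Log
    message: "{{ trigger }}"

triggers:
  - id: realtime_trigger
    type: io.kestra.plugin.debezium.postgres.RealtimeTrigger
    hostname: localhost            # illustrative connection details
    port: "5432"
    username: postgres
    password: "{{ secret('POSTGRES_PASSWORD') }}"
    database: mydb
```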
## When to use Triggers vs. Realtime Triggers
The table below compares Triggers with Realtime Triggers to help you choose the right trigger type for your use case:
| Criteria | Trigger | Realtime Trigger |
|---|---|---|
| Implementation | Micro-batch | Realtime |
| Event Processing | Batch-process all events received until the poll interval has elapsed | Process each event immediately as it happens |
| Latency | Second(s) or minute(s) | Millisecond(s) |
| Execution Model | Each execution processes one or many events | Each execution processes exactly one event |
| Data Handling | Store all received events in a file | Store each event in a raw format |
| Output format | URI of a file in internal storage | Raw data of the event payload and related metadata |
| Application | Data applications processing data in batch | Business-critical operations reacting to events in real time |
| Use cases | Data orchestration for analytics and building data products | Process and microservice orchestration (real time updates, anomaly detection, order processing) |
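The difference in data handling shows up in how a flow reads the trigger output. The fragments below are a sketch based on the table above: a polling trigger exposes the URI of a file in internal storage, while a Realtime Trigger exposes the raw event. The exact output attribute names (such as `uri`) vary by plugin, so treat them as illustrative:

```yaml
# Fragment from a flow using a polling (micro-batch) trigger:
# all events polled in the interval land in one file in internal storage.
tasks:
  - id: process_batch
    type: io.kestra.plugin.core.log.Log
    message: "File with polled events: {{ trigger.uri }}"
```

```yaml
# Fragment from a flow using a Realtime Trigger:
# each execution receives exactly one raw event with its metadata.
tasks:
  - id: process_event
    type: io.kestra.plugin.core.log.Log
    message: "Single event: {{ trigger }}"
```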
## How to use Realtime Triggers
To use Realtime Triggers, choose the `RealtimeTrigger` as the trigger type for your desired service. The following flow uses the `RealtimeTrigger` to listen to new messages in an AWS SQS queue:

```yaml
id: sqs
namespace: company.team

tasks:
  - id: log
    type: io.kestra.plugin.core.log.Log
    message: "{{ trigger }}"

triggers:
  - id: realtime_trigger
    type: io.kestra.plugin.aws.sqs.RealtimeTrigger
    region: eu-north-1
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_ACCESS_KEY') }}"
    queueUrl: https://sqs.eu-north-1.amazonaws.com/123456789/MyQueue
```

## Worker failover for Realtime Triggers
Each Realtime Trigger runs as a dedicated listener thread on one specific worker. If that worker stops, the listener stops with it. Kestra’s liveness mechanism detects this and re-emits the trigger so another available worker can pick it up.
The time before failover depends on how the worker stopped:
- Graceful shutdown (e.g. `docker stop`, rolling deploy): the Executor waits for `kestra.server.terminationGracePeriod` (default `PT5M`) before reassigning the trigger. This prevents duplicate processing when the worker is expected to come back shortly, such as during a rolling deployment.
- Abrupt failure (no heartbeat received): the Executor detects the missing heartbeat within `kestra.server.liveness.timeout` and reassigns the trigger without waiting for the grace period.
To reduce the failover time after a graceful shutdown, lower the `terminationGracePeriod`:

```yaml
kestra:
  server:
    terminationGracePeriod: PT1M # default is PT5M
```

Events are not lost during the failover window. They remain in the source system (Kafka topic, SQS queue, etc.) and are consumed once the trigger listener is restarted on another worker.
## Comparison with real-time data processing engines
Kestra’s Realtime Triggers are not a replacement for real-time data processing engines such as Apache Flink, Apache Beam, or Google Dataflow.
Those data processing engines excel at stateful streaming applications and complex SQL transformations over real-time data streams.
Unlike streaming engines, Kestra’s Realtime Triggers are stateless — each event creates its own independent workflow execution. They are designed for orchestrating business workflows and microservices in response to events, not for continuous stateful stream processing.
To continue with Realtime Triggers, check out the How-to Guide.