Getting Started with Declarative Orchestration
See how Kestra can simplify your workflows—and scale beyond real-time data flow routing.
Apache NiFi excels at real-time data routing and transformation with a visual canvas. Kestra orchestrates workflows across data pipelines, infrastructure, and AI workloads in declarative YAML. One moves data between systems. The other coordinates everything your business runs on.
**Kestra:** Declarative YAML workflows versioned in Git, executed in isolated containers, deployed through CI/CD. Orchestrate data pipelines, infrastructure operations, AI workloads, and business processes across any language and any cloud. Event-driven at core.

**Apache NiFi:** Flow-based data routing and transformation platform built around a visual canvas. Connect processors to route, transform, and distribute data in real time. Purpose-built for data in motion: streaming ingestion, edge collection, and data provenance tracking.
NiFi's single Docker command gets you to a running instance, but production deployments require ZooKeeper or embedded clustering, SSL certificate configuration, and authorization policies. Kestra's Docker Compose bundles the database and UI in a single file — spin it up locally and run your first workflow in minutes.
```shell
curl -o docker-compose.yml \
  https://raw.githubusercontent.com/kestra-io/kestra/develop/docker-compose.yml
docker compose up
# Open localhost:8080
# Pick a Blueprint, run it. Done.
```

Download the Docker Compose file, spin it up, and you're ready. Database and config included. Open the UI, pick a Blueprint, run it. Your first workflow uses the same YAML format it will have in production.
```shell
# Development: single node
docker run --name nifi \
  -p 8443:8443 \
  -e SINGLE_USER_CREDENTIALS_USERNAME=admin \
  -e SINGLE_USER_CREDENTIALS_PASSWORD=password \
  apache/nifi:latest

# Production requires:
# - ZooKeeper cluster (or embedded clustering)
# - SSL certificates and keystore/truststore setup
# - Authorization policies (file-based or Apache Ranger)
```

NiFi runs in Docker for development, but production requires configuring clustering (ZooKeeper or embedded), SSL certificates, and user authentication. The visual canvas is ready immediately; the surrounding infrastructure takes significantly more time.
YAML is readable on day 1. Our docs are embedded in the UI for easy reference, the AI Copilot writes workflows for you, or start with our library of Blueprints. Every workflow lives in a file you can commit, review, and deploy through CI/CD.
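As a sketch of what that file looks like, here is a minimal flow (the `id`, `namespace`, and task are placeholders; task type names follow recent Kestra releases, so check the embedded docs for your version):

```yaml
# A minimal Kestra flow: one task that logs a message.
id: hello_world
namespace: company.team

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from a declarative workflow!
```

This whole file is the workflow: commit it, diff it in a pull request, and deploy it like any other source file.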
Flows are built by dragging processors onto a canvas and connecting them. NiFi Registry supports version control by exporting flows as JSON, but the source of truth is the visual canvas, not a file you can diff in a pull request.
Orchestrate data pipelines, infrastructure operations, AI workloads, and business processes in one unified platform. Event-driven at its core, with native triggers for S3, webhooks, Kafka, database changes, and API events. 1200+ open-source plugins.
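To illustrate the event-driven model, a hedged sketch of a flow started by an incoming webhook (the trigger type and `key` property reflect recent Kestra releases and should be verified against your version's plugin docs):

```yaml
# A flow that runs whenever its webhook URL is called.
id: react_to_event
namespace: company.team

tasks:
  - id: handle
    type: io.kestra.plugin.core.log.Log
    message: "Triggered by webhook payload: {{ trigger.body }}"

triggers:
  - id: on_webhook
    type: io.kestra.plugin.core.trigger.Webhook
    key: my-secret-key   # placeholder; part of the webhook URL
```

Swapping the trigger for an S3, Kafka, or schedule trigger changes only the `triggers` block; the tasks stay the same.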
Ingest data from any source, route FlowFiles through a directed graph of processors, transform content in motion, and mediate between disparate systems in real time. NiFi's visual canvas connects processors for continuous data flow with built-in back-pressure and provenance tracking.
| Capability | Kestra | Apache NiFi |
|---|---|---|
| Primary use case | Universal workflow orchestration | Real-time data flow routing and transformation |
| Workflow definition | Declarative YAML (code-first) | Visual canvas (JSON export via NiFi Registry) |
| Version control | Native Git and CI/CD | NiFi Registry (JSON export, not file-based Git) |
| Architecture | Event-driven orchestrator | Flow-based data routing engine |
| Languages supported | Any (Python, SQL, Bash, Go, R, Node.js) | Java processors (scripting via ExecuteScript processor) |
| Infrastructure automation | Native support | Not designed for this |
| Business process automation | Native support | Not designed for this |
| Self-service for non-engineers | Kestra Apps | Web UI for flow monitoring and management |
| Multi-tenancy | Namespace isolation + RBAC out-of-box | Multi-tenant requires separate NiFi clusters |
| Air-gapped deployment | Supported | Supported |
| Streaming data | Via Kafka and Pulsar triggers and plugins | Purpose-built for real-time data streams |
| Data provenance | Execution logs and topology view per workflow | Built-in lineage for every data packet |
Kestra workflows are YAML files that live in your Git repository from day one. Commit them, review them in pull requests, and deploy through CI/CD — the same process as application code. Every change is a readable diff and every deployment is traceable.
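One way to wire this up, as a hypothetical GitHub Actions sketch: push each flow file to Kestra's Flows API on merge. The API path, content type, and the `flows/` directory layout are assumptions here; check the API reference for your Kestra version and add authentication as required by your deployment.

```yaml
# Hypothetical CI job: deploy all flow files on merge to main.
name: deploy-flows
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push each flow to the Kestra API
        env:
          KESTRA_URL: ${{ secrets.KESTRA_URL }}
        run: |
          for f in flows/*.yml; do
            curl -sf -X POST "$KESTRA_URL/api/v1/flows" \
              -H "Content-Type: application/x-yaml" \
              --data-binary "@$f"
          done
```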
Kestra runs Python, Bash, SQL, Go, R, and Node.js in isolated Docker containers. Your existing scripts work without modification — no wrappers, no rewrites, no language constraints. Each task gets its own container so dependencies never conflict.
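For example, a Python task pinned to its own container image might look like the following (the task type and `containerImage` property match recent Kestra script-plugin releases; verify against your version):

```yaml
# A Python script running in an isolated container with its own image.
id: python_in_container
namespace: company.team

tasks:
  - id: transform
    type: io.kestra.plugin.scripts.python.Script
    containerImage: python:3.12-slim   # dependencies isolated per task
    script: |
      import json
      rows = [{"n": i, "sq": i * i} for i in range(3)]
      print(json.dumps(rows))
```

Because each task declares its own image, two tasks in the same flow can use conflicting dependency sets without interfering.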
One YAML workflow can ingest data, run a dbt model, provision a cloud resource, and notify a team — with retry logic and audit logs across every step. Kestra coordinates the full lifecycle: data pipelines, infrastructure updates, model training, approvals, and downstream notifications in a single unified definition.
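A hedged sketch of such a multi-step flow, with retries on the ingest step and a notification at the end (the shell and Slack plugin coordinates and the retry schema follow recent Kestra releases; the commands are placeholders):

```yaml
# One flow: ingest with retries, run dbt, notify the team.
id: daily_pipeline
namespace: company.team

tasks:
  - id: extract
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - echo "pulling source data"   # placeholder for a real extract
    retry:
      type: constant
      interval: PT1M
      maxAttempt: 3

  - id: dbt_run
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - echo "dbt run --select my_model"   # placeholder for a real dbt invocation

  - id: notify
    type: io.kestra.plugin.notifications.slack.SlackExecution
    url: "{{ secret('SLACK_WEBHOOK') }}"
```

Every step's state, retries, and logs show up in the same execution view, so the audit trail spans the whole pipeline rather than one tool at a time.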