AI Copilot in Kestra – Generate and Edit Flows
Build and modify flows directly from natural language prompts.
Create and edit flows with AI Copilot
The AI Copilot can generate and iteratively edit declarative flow code with AI-assisted suggestions.
The AI Copilot is designed to help build and modify flows directly from natural language prompts. Describe what you are trying to build, and Copilot will generate the YAML flow code for you to accept or adjust. Once your initial flow is created, you can iteratively refine it with Copilot’s help, adding new tasks or adjusting triggers without touching unrelated parts of the flow. Everything stays as code and in Kestra’s usual declarative syntax.
Copilot is available anywhere you build in Kestra — Flows, Apps, Unit tests, and Dashboards — so you can keep iterating with the same AI assistant across the product surface.
You can type prompts or click the microphone button in the Copilot panel to dictate them with speech-to-text directly from the UI.
Copilot grounds its suggestions in your Namespace metadata. It automatically reads Plugin Defaults, Variables, Secrets, and Key-Value pairs configured in the current Namespace, so prompts like “Create a task that integrates with MongoDB” can reuse your existing pluginDefaults, stored credentials, or variables without extra hints.
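For example, a Namespace might define a plugin default like the following (the plugin type and secret name here are illustrative, not taken from this page); a prompt mentioning MongoDB could then reuse it instead of asking you for connection details:

```yaml
# Hypothetical Namespace-level plugin default; Copilot can reuse it
# when generating a MongoDB task instead of inventing credentials.
pluginDefaults:
  - type: io.kestra.plugin.mongodb
    values:
      connection:
        uri: "{{ secret('MONGODB_URI') }}"
```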
Configuration
To add Copilot to your flow editor, add the following to your Kestra configuration. The `providers` array lets you register multiple LLMs and pick a default (`isDefault: true`):

```yaml
kestra:
  ai:
    providers:
      - id: gemini
        display-name: Gemini - Private
        type: gemini
        configuration:
          model-name: gemini-2.5-flash
          api-key: YOUR_GEMINI_API_KEY
      - id: gpt
        display-name: Open AI
        type: openai
        isDefault: true
        configuration:
          model-name: gpt-4
          api-key: YOUR_OPENAI_API_KEY
```

Legacy single-provider configs (`kestra.ai.type` plus a provider block) still work.
When multiple providers are configured, users can switch models from a dropdown in the Copilot UI instead of relying only on the default.
Replace `api-key` with your provider credentials. Copilot appears in the top right corner of the flow editor. Optionally, you can add the following properties inside each provider `configuration` block (availability varies by provider):
- `temperature`: Controls randomness in responses. Lower values make outputs more focused and deterministic, while higher values increase creativity and variability.
- `topP` (nucleus sampling): Ranges from 0.0–1.0; lower values (0.1–0.3) produce safer, more focused responses for technical tasks, while higher values (0.7–0.9) encourage more creative and varied outputs.
- `topK`: Typically ranges from 1–200+ depending on the API; lower values restrict choices to a few predictable tokens, while higher values allow more options and greater variety in responses.
- `maxOutputTokens`: Sets the maximum number of tokens the model can generate, capping the response length.
- `logRequests`: Creates logs in Kestra for LLM requests.
- `logResponses`: Creates logs in Kestra for LLM responses.
- `baseURL`: Specifies the endpoint address where the LLM API is hosted.
- `clientPem`: (Required for mTLS) PEM bundle with client cert + private key (e.g., `cat client.crt.pem client.key.pem > client-bundle.pem`). Used for mutual TLS.
- `caPem`: CA PEM file to add a custom CA without `trustAll`. Usually not needed since hosts already trust the CA.
- `customHeaders`: Specify custom HTTP headers for authentication and routing through internal AI gateways. Custom headers should be passed as a map inside the property.
- `timeout`: Specifies the maximum duration to wait for an AI model API request to complete before timing out. ISO 8601 duration format (Java `Duration`): `PT30S` = 30 seconds. You can set it per provider to enforce strict SLAs.
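As an illustration, a Gemini provider entry with a few of these tuning properties might look like the following sketch (assuming the same kebab-case key convention as the required properties; exact key names may vary by Kestra version):

```yaml
kestra:
  ai:
    providers:
      - id: gemini
        display-name: Gemini - Tuned
        type: gemini
        configuration:
          model-name: gemini-2.5-flash
          api-key: YOUR_GEMINI_API_KEY
          temperature: 0.2        # focused, deterministic output for YAML generation
          top-p: 0.3              # narrow nucleus sampling for technical tasks
          max-output-tokens: 4096 # cap the response length
          log-requests: true      # log LLM requests in Kestra
          log-responses: true     # log LLM responses in Kestra
          timeout: PT30S          # give up on the request after 30 seconds
```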
Enterprise Edition includes an RBAC permission that lets administrators allow or disallow Copilot usage per role at tenant or namespace scope.

The open-source version supports only Google Gemini models. Enterprise Edition users can configure any LLM provider, including Amazon Bedrock, Anthropic, Azure OpenAI, DeepSeek, Google Gemini, Google Vertex AI, Mistral, and all open-source models supported by Ollama. See the Enterprise Edition Copilot configurations section below for a snippet for your provider. If you use a different provider, please reach out to us and we'll add it.
Build flows with Copilot
In the above demo, we want to create a flow that uses a Python script to fetch New York City weather data. To get started, open the Copilot and write a prompt. For example:
```
Create a flow with a Python script that fetches weather data for New York City
```

Once prompted, Copilot generates YAML directly in the flow editor, which you can accept or reject in the bottom right corner.
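For this prompt, Copilot's suggestion might look roughly like the flow below. This is a hedged sketch: the exact YAML varies by model, and the flow id, namespace, weather API endpoint, and script body here are illustrative, not Copilot's literal output.

```yaml
id: nyc_weather
namespace: company.team

tasks:
  - id: fetch_weather
    type: io.kestra.plugin.scripts.python.Script
    beforeCommands:
      - pip install requests
    script: |
      import requests

      # Open-Meteo is a keyless weather API; the coordinates are New York City
      response = requests.get(
          "https://api.open-meteo.com/v1/forecast",
          params={"latitude": 40.71, "longitude": -74.01, "current_weather": True},
      )
      print(response.json()["current_weather"])
```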

If accepted, the flow is created and can be saved for execution, edited manually, or iterated on further with Copilot. For example, suppose you want to add a trigger that runs the flow on a schedule. Reopen Copilot and describe the desired trigger, such as:
```
Add a trigger to run the flow every day at 9 AM
```

Copilot again suggests an addition to the flow, but only in the targeted section, in this case a `triggers` block. The same applies if you want Copilot to consider only a specific task, input, plugin default, and so on.
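If accepted, such a suggestion adds only a `triggers` block to the existing flow. A typical result might look like the snippet below (the trigger id is chosen here for illustration):

```yaml
triggers:
  - id: daily_9am
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"  # every day at 9 AM
```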

You can continuously collaborate with Copilot until the flow is exactly as you imagined. If accepted, suggestions are always declaratively written and manageable as code. You can keep track of the revision history using the built-in Revisions tab or with the help of Git Sync.
Fix with AI
With Copilot configured, you can also consult it to resolve execution errors from the Logs and Gantt views. For a failed task, open the task, click the three dots, and select “Fix with AI”. This reopens the flow editor with Copilot automatically prompted with the error context to help resolve any issues with the task.

Starter prompts
To get started with Copilot, here are some example prompts to test, iterate on, and use as a starting point for collaboratively building flows with AI in Kestra:
Example prompts to get started
- Create a flow that runs a dbt build command on DuckDB
- Create a flow cloning https://github.com/kestra-io/dbt-example Git repository from a main branch, then add a dbt CLI task using DuckDB backend that will run dbt build command for that cloned repository using my_dbt_project profile and dev target. The dbt project is located in the root directory so no dbt project needs to be configured.
- Create a flow that sends a POST request to https://dummyjson.com/products/add
- Send a POST request to https://dummyjson.com/products/add
- Write a Python script that sends a POST request to https://dummyjson.com/products/add
- Write a Node.js script that sends a POST request to https://dummyjson.com/products/add
- Create a flow with a Python script that fetches weather data for New York City
- Make a REST API call to https://kestra.io/api/mock and allow failure
- Create a flow that logs "Hello from AI" to the console
- Create a flow that returns Hello as output
- Create a flow that outputs Hello as value
- Run a flow every 10 minutes
- Run a flow every day at 9 AM
- Run a shell command echo 'Hello Docker' in a Docker container
- Run a command python main.py in a Docker container
- Run a script main.py stored as namespace file
- Build a Docker image from an inline Dockerfile and push it to a GitHub Container Registry
- Build a Docker image from an inline Dockerfile and push it to a DockerHub Container Registry
- Create a flow that adds a string KV pair called MYKEY with value myvalue to namespace company
- Fetch value for KV pair called MYKEY from namespace company
- Create a flow that downloads a file mydata.csv from S3 bucket named mybucket
- Create a flow that downloads all files from the folder kestra/plugins/ from S3 bucket mybucket in us-east-1
- Send a Slack notification that approval is needed and Pause the flow for manual approval
- Send a Slack alert whenever any execution from namespace company fails
- Fetch value for string kv pair called mykey from Redis
- Fetch value for mykey from Redis
- Set value for mykey in Redis to myvalue
- Sync all flows and scripts for selected namespaces from Git to Kestra
- Create a flow that clones a Git repository and runs a Python script
- Export a Postgres table called mytable to a CSV file
- Query a Postgres table called mytable
- Find documents in a MongoDB collection called mycollection
- Load documents into a MongoDB mycollection using a file from input mydata
- Trigger an Airbyte connection sync and retry it up to 3 times
- Run an Airflow DAG called mydag
- Orchestrate an Ansible playbook stored in Namespace Files
- Run a DuckDB query that reads a CSV file
- Fetch AWS ECR authorization token to push Docker images to Amazon ECR
- Run a flow whenever 5 records are available in Kafka topic mytopic
- Submit a run for a Databricks job

Enterprise Edition Copilot configurations
Enterprise Edition users can configure any LLM provider, including Amazon Bedrock, Anthropic, Azure OpenAI, DeepSeek, Google Gemini, Google Vertex AI, Mistral, OpenAI, OpenRouter, and all open-source models supported by Ollama. Add one or more of the snippets below as entries inside kestra.ai.providers (set isDefault: true on the default provider). Each configuration has slight differences, so make sure to adjust for your provider.
Only non-thinking modes are supported. If the selected LLM is a pure thinking model (one whose thinking ability cannot be disabled), the generated flow will be incorrect and contain thinking elements.
Amazon Bedrock
```yaml
kestra:
  ai:
    providers:
      - id: bedrock
        display-name: Amazon Bedrock
        type: bedrock
        configuration:
          model-name: amazon.nova-lite-v1:0
          access-key-id: BEDROCK_ACCESS_KEY_ID
          secret-access-key: BEDROCK_SECRET_ACCESS_KEY
```

Anthropic
```yaml
kestra:
  ai:
    providers:
      - id: anthropic
        display-name: Anthropic
        type: anthropic
        configuration:
          model-name: claude-opus-4-1-20250805
          api-key: CLAUDE_API_KEY
```

Azure OpenAI
```yaml
kestra:
  ai:
    providers:
      - id: azure-openai
        display-name: Azure OpenAI
        type: azure-openai
        configuration:
          model-name: gpt-4o-2024-11-20
          api-key: AZURE_OPENAI_API_KEY
          tenant-id: AZURE_TENANT_ID
          client-id: AZURE_CLIENT_ID
          client-secret: AZURE_CLIENT_SECRET
          endpoint: "https://your-resource.openai.azure.com/"
```

DeepSeek
```yaml
kestra:
  ai:
    providers:
      - id: deepseek
        display-name: DeepSeek
        type: deepseek
        configuration:
          model-name: deepseek-chat
          api-key: DEEPSEEK_API_KEY
          base-url: "https://api.deepseek.com/v1"
```

Google Gemini
```yaml
kestra:
  ai:
    providers:
      - id: gemini
        display-name: Google Gemini
        type: gemini
        configuration:
          model-name: gemini-2.5-flash
          api-key: YOUR_GEMINI_API_KEY
```

Google Vertex AI
```yaml
kestra:
  ai:
    providers:
      - id: vertex
        display-name: Google Vertex AI
        type: googlevertexai
        configuration:
          model-name: gemini-2.5-flash
          project: GOOGLE_PROJECT_ID
          location: GOOGLE_CLOUD_REGION
          endpoint: VERTEX-AI-ENDPOINT
```

Mistral
```yaml
kestra:
  ai:
    providers:
      - id: mistral
        display-name: Mistral
        type: mistralai
        configuration:
          model-name: mistral:7b
          api-key: MISTRALAI_API_KEY
          base-url: "https://api.mistral.ai/v1"
```

Ollama
```yaml
kestra:
  ai:
    providers:
      - id: ollama
        display-name: Ollama
        type: ollama
        configuration:
          model-name: llama3
          base-url: http://localhost:11434
```

If Ollama is running locally on your host machine while Kestra is running inside a container, connection errors may occur when using localhost. In this case, use the Docker internal network URL instead; for example, set the base URL to `http://host.docker.internal:11434`.
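On Linux hosts, `host.docker.internal` is not defined by default. If you run Kestra via Docker Compose, one common workaround (shown as a sketch; the service name is illustrative) is to map it to the host gateway:

```yaml
services:
  kestra:
    image: kestra/kestra:latest
    extra_hosts:
      # Resolve host.docker.internal to the host machine from inside the container
      - "host.docker.internal:host-gateway"
```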
Some Ollama model names can be confusing. For example, at the time of writing, the model qwen3:30b-a3b points to SHA ad815644918f, which is actually the qwen3:30b-a3b-thinking-2507-q4_K_M model behind the scenes. This is a thinking model that does not support disabling thinking.
Please double-check that the chosen model has a non-thinking version or that a toggle is available.
OpenAI
```yaml
kestra:
  ai:
    providers:
      - id: openai
        display-name: OpenAI
        type: openai
        configuration:
          model-name: gpt-5-nano
          api-key: OPENAI_API_KEY
          base-url: https://api.openai.com/v1
```

OpenRouter
```yaml
kestra:
  ai:
    providers:
      - id: openrouter
        display-name: OpenRouter
        type: openrouter
        configuration:
          api-key: OPENROUTER_API_KEY
          base-url: "https://openrouter.ai/api/v1"
          model-name: "anthropic/claude-sonnet-4"
```