
ChatCompletion
Create a Retrieval Augmented Generation (RAG) pipeline
type: "io.kestra.plugin.ai.rag.ChatCompletion"Examples
Chat with your data using Retrieval Augmented Generation (RAG). This flow will index documents and use the RAG Chat task to interact with your data using natural language prompts. The flow contrasts prompts to LLM with and without RAG. The Chat with RAG retrieves embeddings stored in the KV Store and provides a response grounded in data rather than hallucinating. WARNING: the Kestra KV embedding store is for quick prototyping only, as it stores the embedding vectors in Kestra's KV store and loads them all into memory.
```yaml
id: rag
namespace: company.ai

tasks:
  - id: ingest
    type: io.kestra.plugin.ai.rag.IngestDocument
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    drop: true
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-24.md

  - id: parallel
    type: io.kestra.plugin.core.flow.Parallel
    tasks:
      - id: chat_without_rag
        type: io.kestra.plugin.ai.completion.ChatCompletion
        provider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
        messages:
          - type: USER
            content: Which features were released in Kestra 0.24?

      - id: chat_with_rag
        type: io.kestra.plugin.ai.rag.ChatCompletion
        chatProvider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
        embeddingProvider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
          modelName: gemini-embedding-exp-03-07
        embeddings:
          type: io.kestra.plugin.ai.embeddings.KestraKVStore
        systemMessage: You are a helpful assistant that can answer questions about Kestra.
        prompt: Which features were released in Kestra 0.24?

pluginDefaults:
  - type: io.kestra.plugin.ai.provider.GoogleGemini
    values:
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash
```

RAG chat with a web search content retriever (answers grounded in search results):
```yaml
id: rag_with_websearch_content_retriever
namespace: company.ai

tasks:
  - id: chat_with_rag_and_websearch_content_retriever
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
    systemMessage: You are a helpful assistant that can answer questions about Kestra.
    prompt: What is the latest release of Kestra?
```

Store chat memory as a Kestra KV pair:
```yaml
id: chat_with_memory
namespace: company.ai

inputs:
  - id: first
    type: STRING
    defaults: Hello, my name is John and I'm from Paris

  - id: second
    type: STRING
    defaults: What's my name and where do I live?

tasks:
  - id: first
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    memory:
      type: io.kestra.plugin.ai.memory.KestraKVStore
      ttl: PT1M
    systemMessage: You are a helpful assistant, answer concisely
    prompt: "{{ inputs.first }}"

  - id: second
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    memory:
      type: io.kestra.plugin.ai.memory.KestraKVStore
    systemMessage: You are a helpful assistant, answer concisely
    prompt: "{{ inputs.second }}"

pluginDefaults:
  - type: io.kestra.plugin.ai.provider.GoogleGemini
    values:
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash
```

Classify recent Kestra releases into MINOR or PATCH using a JSON schema. Note: not all LLMs support structured outputs, or they may not support them when combined with tools like web search. This example uses Mistral, which supports structured output with content retrievers.
```yaml
id: chat_with_structured_output
namespace: company.ai

tasks:
  - id: categorize_releases
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.MistralAI
      apiKey: "{{ kv('MISTRAL_API_KEY') }}"
      modelName: open-mistral-7b
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
        maxResults: 8
    chatConfiguration:
      responseFormat:
        type: JSON
        jsonSchema:
          type: object
          required: ["releases"]
          properties:
            releases:
              type: array
              minItems: 1
              items:
                type: object
                additionalProperties: false
                required: ["version", "date", "semver"]
                properties:
                  version:
                    type: string
                    description: "Release tag, e.g., 0.24.0"
                  date:
                    type: string
                    description: "Release date"
                  semver:
                    type: string
                    enum: ["MINOR", "PATCH"]
                  summary:
                    type: string
                    description: "Short plain-text summary (optional)"
    systemMessage: |
      You are a release analyst. Use the Tavily web retriever to find recent Kestra releases.
      Determine each release's SemVer category:
      - MINOR: new features, no major breaking changes (y in x.Y.z)
      - PATCH: bug fixes/patches only (z in x.y.Z)
      Return ONLY valid JSON matching the schema. No prose, no extra keys.
    prompt: |
      Find the most recent Kestra releases (within the last ~6 months).
      Output their version, release date, semver category, and a one-line summary.
```

Properties
chatProvider (required, non-dynamic)
Chat model provider. One of:
- Amazon Bedrock Model Provider (embedding model type: COHERE, the default, or TITAN)
- Anthropic AI Model Provider
- Azure OpenAI Model Provider
- DashScope (Qwen) Model Provider from Alibaba Cloud (default base URL: https://dashscope-intl.aliyuncs.com/api/v1)
- Deepseek Model Provider (default base URL: https://api.deepseek.com/v1)
- GitHub Models AI Model Provider
- Google Gemini Model Provider
- Google VertexAI Model Provider
- HuggingFace Model Provider (default base URL: https://router.huggingface.co/v1)
- LocalAI Model Provider
- Mistral AI Model Provider
- OciGenAI Model Provider
- Ollama Model Provider
- OpenAI Model Provider (default base URL: https://api.openai.com/v1)
- OpenRouter Model Provider
- WorkersAI Model Provider
- ZhiPu AI Model Provider (default base URL: https://open.bigmodel.cn/)

prompt (required, string)
User prompt
The user input for this run. May be templated from flow inputs.
chatConfiguration (non-dynamic)
Chat configuration (io.kestra.plugin.ai.domain.ChatConfiguration). Default: {}. Its responseFormat (io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat) type is TEXT (default) or JSON.

contentRetrieverConfiguration (non-dynamic)
Content retriever configuration (io.kestra.plugin.ai.rag.ChatCompletion-ContentRetrieverConfiguration). Default:
{
  "maxResults": 3,
  "minScore": 0
}
- maxResults: maximum results to return from the embedding store. Default: 3.
- minScore: minimum similarity score (0-1 inclusive); only results with score ≥ minScore are returned. Default: 0.
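As a quick illustration, both knobs can be tuned inline on the task. A minimal sketch, assuming the KV embedding store from the first example has already been populated by an IngestDocument task; the values 5 and 0.6 are illustrative, not recommendations:

```yaml
- id: chat_with_tuned_retrieval
  type: io.kestra.plugin.ai.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    modelName: gemini-2.5-flash
    apiKey: "{{ kv('GEMINI_API_KEY') }}"
  embeddingProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    modelName: gemini-embedding-exp-03-07
  embeddings:
    type: io.kestra.plugin.ai.embeddings.KestraKVStore
  contentRetrieverConfiguration:
    maxResults: 5   # return up to 5 matching segments instead of the default 3
    minScore: 0.6   # skip weakly similar segments (default 0 keeps everything)
  prompt: Which features were released in Kestra 0.24?
```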
contentRetrievers
Additional content retrievers. Some content retrievers, like WebSearch, can also be used as tools, but configuring them as content retrievers ensures they are always called, whereas tools are invoked only when the LLM decides to. Available retrievers:
- Web search content retriever for Google Custom Search (maxResults default: 3)
- SQL Database content retriever, using the LangChain4j experimental SqlDatabaseContentRetriever. ⚠ IMPORTANT: the database user should have READ-ONLY permissions. It requires a language model provider (same choices as chatProvider) plus a language model configuration (io.kestra.plugin.ai.domain.ChatConfiguration, default {}, with responseFormat TEXT (default) or JSON), and uses a pool of at most 2 database connections by default.
- WebSearch content retriever for Tavily Search (maxResults default: 3)

embeddingProvider (non-dynamic)
Embedding model provider. Optional: if not set, the embedding model is created from chatProvider; ensure the chosen chat provider supports embeddings. Accepts the same model providers as chatProvider (see above).

embeddings (non-dynamic)
Embedding store. Optional when at least one entry is provided in contentRetrievers. Supported stores:
Chroma Embedding Store
- The database base URL

Elasticsearch Embedding Store
- Connection (io.kestra.plugin.ai.embeddings.Elasticsearch-ElasticsearchConnection):
  - List of HTTP Elasticsearch servers (at least 1); each must be a URI like https://example.com:9200 with scheme and port
  - Basic authorization configuration (io.kestra.plugin.ai.embeddings.Elasticsearch-ElasticsearchConnection-BasicAuth): basic authorization username and password
  - List of HTTP headers to be sent with every request; each item is a key: value string, e.g., Authorization: Token XYZ
  - Path prefix for all HTTP requests: if set to /my/path, each client request becomes /my/path/ + endpoint. Useful when Elasticsearch is behind a proxy providing a base path; do not use otherwise.
  - Treat responses with deprecation warnings as failures
  - Trust all SSL CA certificates: use this if the server uses a self-signed SSL certificate
- The name of the index to store embeddings

In-memory embedding store that stores data as Kestra KV pairs
- The name of the KV pair to use. Default: {{flow.id}}-embedding-store

MariaDB Embedding Store
- Whether to create the table if it doesn't exist
- Database URL of the MariaDB database (e.g., jdbc:mariadb://host:port/dbname)
- Name of the column used as the unique ID in the database
- Name of the table where embeddings will be stored
- Metadata column definitions: list of SQL column definitions for metadata fields (e.g., 'text TEXT', 'source TEXT'). Required only when using COLUMN_PER_KEY storage mode.
- Metadata index definitions: list of SQL index definitions for metadata columns (e.g., 'INDEX idx_text (text)'). Used only with COLUMN_PER_KEY storage mode.
- Metadata storage mode: determines how metadata is stored. COLUMN_PER_KEY uses individual columns for each metadata field (requires column definitions and indexes); COMBINED_JSON (default) stores metadata as a JSON object in a single column. If column definitions and indexes are provided, COLUMN_PER_KEY must be used.

Milvus Embedding Store
- Token: Milvus auth token. Required if authentication is enabled; omit for local deployments without auth.
- Auto flush on delete: if true, flush after delete operations
- Auto flush on insert: if true, flush after insert operations; setting it to false can improve throughput
- Collection name: target collection, created automatically if it does not exist. Default: "default"
- Consistency level: read/write consistency level; common values include STRONG, BOUNDED, or EVENTUALLY (depends on client/version)
- Database name: logical database to use; if not provided, the default database is used
- Host: Milvus host name (used when uri is not set). Default: "localhost"
- ID field name: field name for document IDs; default depends on collection schema
- Index type: vector index type (e.g., IVF_FLAT, IVF_SQ8, HNSW); depends on the Milvus deployment and dataset
- Metadata field name: field name for metadata; default depends on collection schema
- Metric type: similarity metric (e.g., L2, IP, COSINE); should match the embedding provider's expected metric
- Password
- Port: Milvus port (used when uri is not set). Typical: 19530 (gRPC) or 9091 (HTTP). Default: 19530
- Retrieve embeddings on search: if true, return stored embeddings along with matches. Default: false
- Text field name: field name for original text; default depends on collection schema
- URI: connection URI; use either uri OR host/port (not both). Examples: gRPC (typical): "milvus://host:19530"; HTTP: "http://host:9091"
- Username: required when authentication/TLS is enabled. See https://milvus.io/docs/authenticate.md
- Vector field name: field name for the embedding vector; must match the index definition and embedding dimensionality

MongoDB Atlas Embedding Store
- The host
- The scheme (e.g., mongodb+srv)
- Create the index
- The database
- The metadata field names
- The connection string options
- The password
- The username

PGVector Embedding Store
- The database name
- The database password
- The table to store embeddings in
- The database user
- Whether to use an IVFFlat index. Default: false. An IVFFlat index divides vectors into lists, then searches a subset of those lists closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance (in terms of the speed-recall tradeoff).

Pinecone Embedding Store
- The cloud provider
- The index
- The cloud provider region
- The namespace (the default is used if not provided)

Qdrant Embedding Store
- The API key
- The collection name

Redis Embedding Store
- The database server host
- The database server port
- The index name. Default: embedding-index

Tablestore Embedding Store
- Access key ID: the access key ID used for authentication with the database
- Access key secret: the access key secret used for authentication with the database
- Endpoint: the base URL for the Tablestore database endpoint
- Instance name: the name of the Tablestore database instance
- Metadata schema list: optional list of metadata field schemas for the collection (com.alicloud.openservices.tablestore.model.search.FieldSchema). Field types include LONG, DOUBLE, BOOLEAN, KEYWORD, TEXT, NESTED, GEO_POINT, DATE, VECTOR, FUZZY_KEYWORD, IP, JSON, and UNKNOWN; analyzer types include SingleWord, MaxWord, MinWord, Split, and Fuzzy; index options include DOCS, FREQS, POSITIONS, and OFFSETS; vector metrics (com.alicloud.openservices.tablestore.model.search.vector.VectorOptions) include EUCLIDEAN, COSINE, and DOT_PRODUCT.

Weaviate Embedding Store
- API key: Weaviate API key. Omit for local deployments without auth.
- Host: cluster host name without protocol, e.g., "abc123.weaviate.network"
- Avoid duplicates: if true (default), a hash-based ID is derived from each text segment to prevent duplicates; if false, a random ID is used
- Consistency level: write consistency: ONE, QUORUM (default), or ALL
- gRPC port: port for gRPC if enabled (e.g., 50051)
- Metadata field name: field used to store metadata. Defaults to "_metadata" if not set.
- Metadata keys: the list of metadata keys to store; defaults to an empty list if not provided
- Object class: Weaviate class to store objects in (must start with an uppercase letter). Defaults to "Default" if not set.
- Port: optional port (e.g., 443 for https, 80 for http); leave unset to use provider defaults
- Scheme: cluster scheme, "https" (recommended) or "http"
- Secure gRPC: whether the gRPC connection is secured (TLS)
- Use gRPC for batch inserts: if true, use gRPC for batch inserts; HTTP remains required for search operations
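Whichever store is used, the RAG task's embeddings block must match the one the ingestion task wrote to. A minimal sketch reusing the in-memory KV store from the first example (prototyping only, per the warning above); any production store from the list above can be swapped in with its own connection properties:

```yaml
tasks:
  - id: ingest
    type: io.kestra.plugin.ai.rag.IngestDocument
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore   # same store config...
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-24.md

  - id: answer
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore   # ...as in the ingest task
    prompt: Which features were released in Kestra 0.24?
```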
memory (non-dynamic)
Chat memory: stores conversation history and injects it into context on subsequent runs. Supported backends:

In-memory Chat Memory that stores its data as Kestra KV pairs
- Drop: NEVER (default), BEFORE_TASKRUN, or AFTER_TASKRUN
- Memory ID. Default: {{ labels.system.correlationId }}
- Maximum number of messages to keep. Default: 10
- Memory duration (duration). Default: PT1H

Chat Memory backed by PostgreSQL
- Database name: the name of the PostgreSQL database
- PostgreSQL host: the hostname of your PostgreSQL server
- The password to connect to PostgreSQL
- Database user: the username to connect to PostgreSQL
- Drop: NEVER (default), BEFORE_TASKRUN, or AFTER_TASKRUN
- Memory ID. Default: {{ labels.system.correlationId }}
- Maximum number of messages to keep. Default: 10
- PostgreSQL port: the port of your PostgreSQL server. Default: 5432
- Table name: the name of the table used to store chat memory. Default: chat_memory
- Memory duration (duration). Default: PT1H

Chat Memory backed by Redis
- Redis host: the hostname of your Redis server (e.g., localhost or redis-server)
- Drop memory never, before, or after the agent's task run: NEVER (default), BEFORE_TASKRUN, or AFTER_TASKRUN. By default, the memory ID is the value of the system.correlationId label, meaning that the same memory is used by all tasks of the flow and its subflows. To remove the memory eagerly (before expiration), set drop: AFTER_TASKRUN to erase the memory after the task run, or drop: BEFORE_TASKRUN to drop it before the task run.
- Memory ID: defaults to the value of the system.correlationId label, so a memory is valid for the entire flow execution, including its subflows. Default: {{ labels.system.correlationId }}
- Maximum number of messages to keep in memory: if memory is full, the oldest messages are removed in a FIFO manner; the last system message is always kept. Default: 10
- Redis port: the port of your Redis server. Default: 6379
- Memory duration (duration). Default: PT1H
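As a sketch of how these knobs combine, the snippet below keeps a conversation for at most ten minutes and erases it as soon as the task run finishes; PT10M, AFTER_TASKRUN, and the inputs.question input are illustrative assumptions, not defaults:

```yaml
- id: ephemeral_chat
  type: io.kestra.plugin.ai.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    modelName: gemini-2.5-flash
    apiKey: "{{ kv('GEMINI_API_KEY') }}"
  contentRetrievers:
    - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
      apiKey: "{{ kv('TAVILY_API_KEY') }}"
  memory:
    type: io.kestra.plugin.ai.memory.KestraKVStore
    ttl: PT10M            # expire the stored conversation after 10 minutes
    drop: AFTER_TASKRUN   # erase it eagerly once this task run finishes
  systemMessage: You are a helpful assistant, answer concisely
  prompt: "{{ inputs.question }}"   # hypothetical flow input
```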
systemMessage (string)
Instruction that sets the assistant's role, tone, and constraints for this task.
tools (non-dynamic)
Optional tools the LLM may call to augment its response. Available tools:

Call a remote AI agent via the A2A protocol
- Server URL: the URL of the remote agent's A2A server

Call an AI Agent as a tool
- Agent description: the description will be used to instruct the LLM what the tool is doing
- Provider: the language model provider for the agent (same choices as chatProvider)
- Configuration: language model configuration (io.kestra.plugin.ai.domain.ChatConfiguration). Default: {}. Its responseFormat (io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat) supports:
  - JSON Schema (used when type = JSON): provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):

```yaml
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
```

    Note: provider support for strict schema enforcement varies. If unsupported, guide the model about the expected output structure via the prompt and validate downstream.
  - Schema description (optional): natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."
  - Response format type: specifies how the LLM should return output. Allowed values: TEXT (default, free-form natural language) or JSON (structured output validated against a JSON Schema).
- Content retrievers: some content retrievers, like WebSearch, can also be used as tools. However, when configured as content retrievers, they will always be used, whereas tools are only invoked when the LLM decides to use them. Available retrievers:
  - Web search content retriever for Google Custom Search (maximum number of results, default: 3)
  - SQL Database content retriever using the LangChain4j experimental SqlDatabaseContentRetriever. ⚠ IMPORTANT: the database user should have READ-ONLY permissions. Properties:
    - Database username and database password
    - JDBC connection URL to the target database, plus an optional JDBC driver class name (automatically resolved if not provided)
    - Maximum number of database connections in the pool. Default: 2
    - Language model provider: same choices as chatProvider. Provider-specific properties include:
      - Amazon Bedrock: AWS Access Key ID, AWS Secret Access Key, embedding model type (COHERE, the default, or TITAN)
      - Anthropic AI: maximum tokens, i.e., the maximum number of tokens the model is allowed to generate in its response
      - Azure OpenAI: API endpoint in the format https://{resource}.openai.azure.com/, client ID, client secret, tenant ID
      - DashScope (Qwen): base URL (default: https://dashscope-intl.aliyuncs.com/api/v1 for the Singapore region; for a model in the China (Beijing) region, use https://dashscope.aliyuncs.com/api/v1 instead; the default value is computed based on the system timezone); whether the model uses Internet search results for reference when generating text; repetition penalty for continuous sequences during generation (increasing repetition_penalty reduces repetition; 1.0 means no penalty; value range (0, +inf))
      - Deepseek: base URL (default: https://api.deepseek.com/v1)
      - GitHub Models: GitHub token, a Personal Access Token (PAT) used to access GitHub Models
      - Google VertexAI: endpoint URL, project location, project ID
      - HuggingFace: base URL (default: https://router.huggingface.co/v1)
      - OciGenAI: OCID of the OCI compartment with the model, OCI region to connect the client to, OCI SDK authentication provider
      - Ollama: model endpoint
      - OpenAI: base URL (default: https://api.openai.com/v1)
      - WorkersAI: account identifier, the unique identifier assigned to an account
      - ZhiPu AI: model name; API base URL (default: https://open.bigmodel.cn/); the maximum retry times per request; the maximum number of tokens returned by the request; and a stop parameter, with which the model automatically stops generating text when the output is about to contain the specified string or token_id
      - Common options: custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways); CA PEM certificate content, a CA certificate as text used to verify SSL/TLS connections when using custom endpoints; client PEM certificate content, a PEM client certificate as text used to authenticate the connection to enterprise AI endpoints
    - Language model configuration (io.kestra.plugin.ai.domain.ChatConfiguration). Default: {}. Fields:
      - Log LLM requests: if true, prompts and configuration sent to the LLM are logged at INFO level
      - Log LLM responses: if true, raw responses from the LLM are logged at INFO level
      - Maximum number of tokens the model can generate in the completion (response); limits the length of the output
      - Response format: defines the expected output format (default: plain text). Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output is returned under the key jsonOutput.
      - Return thinking: controls whether to return the model's internal reasoning ('thinking') text, if available. When enabled, the reasoning content is extracted from the response and made available in the AiMessage object. It does not trigger the thinking process itself; it only affects whether the output is parsed and returned.
      - Seed: optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234); using the same seed with identical settings produces repeatable outputs.
      - Temperature: controls randomness in generation. Typical range is 0.0-1.0; lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7-1.0) increase creativity and variability.
      - Thinking token budget: the maximum number of tokens allocated as a budget for internal reasoning processes, such as generating intermediate thoughts or chain-of-thought sequences, allowing the model to perform multi-step reasoning before producing the final output
      - Enable thinking: enables internal reasoning ('thinking') in supported language models, allowing the model to perform intermediate reasoning steps before producing a final output. Useful for complex tasks like multi-step problem solving or decision making, but may increase token usage and response time; only applicable to compatible models.
      - Top-K: limits sampling to the top K most likely tokens at each step. Typical values are between 20 and 100; smaller values reduce randomness, larger values allow more diverse outputs.
      - Top-P (nucleus sampling): selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8-0.95; lower values make the output more focused, higher values increase diversity.
  - WebSearch content retriever for Tavily Search: API key, maximum number of results to return (default: 3)
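A hedged sketch of such a configuration applied to the RAG task itself, assuming the field names mirror the titles above (temperature, topP, seed, logRequests); verify them against the ChatConfiguration reference before relying on this:

```yaml
- id: deterministic_chat
  type: io.kestra.plugin.ai.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    modelName: gemini-2.5-flash
    apiKey: "{{ kv('GEMINI_API_KEY') }}"
  contentRetrievers:
    - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
      apiKey: "{{ kv('TAVILY_API_KEY') }}"
  chatConfiguration:
    temperature: 0.2   # low randomness for focused, repeatable answers
    topP: 0.9          # nucleus-sampling cutoff
    seed: 42           # same seed + same settings -> repeatable outputs
    logRequests: true  # log prompts sent to the LLM at INFO level
  prompt: What is the latest release of Kestra?
```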
- Maximum sequential tool invocations
- Agent name: must be set to a different value than the default if you want multiple agents used as tools in the same task
- System message: the system message for the agent's language model
- Tools that the LLM may use to augment its response (the agent can itself be given any of the tools below)
Code execution tool using Judge0
- RapidAPI key for Judge0: you can obtain it from the RapidAPI website

Model Context Protocol (MCP) Docker client tool
- Container image
- API version
- Volume binds
- Docker certificate path
- Docker configuration
- Docker context
- Docker host
- Whether Docker should verify TLS certificates
- Whether to log events. Default: false
- Container registry email, password, URL, and username

Google Custom Search web tool
- API key
- Custom search engine ID (cx)

Call a Kestra flow as a tool
- Description of the flow, if not already provided inside the flow itself. Use it only if you define the flow in the tool definition: the LLM needs a tool description to identify whether to call it. If the flow has a description, the tool will use it; otherwise, the description property must be explicitly defined.
- Flow ID of the flow that should be called
- Whether the flow should inherit labels from the execution that triggered it. Default: false. By default, labels are not inherited; if you set this option to true, the flow execution will inherit all labels from the agent's execution. Any labels passed by the LLM will override those defined here.
- Input values that should be passed to the flow's execution. Any inputs passed by the LLM will override those defined here.
- Labels that should be added to the flow's execution. Any labels passed by the LLM will override those defined here.
- Namespace of the flow that should be called
- Revision of the flow that should be called
- Schedule the flow execution at a later date (date-time). If the LLM sets a scheduleDate, it will override the one defined here.

Call a Kestra runnable task as a tool
- List of Kestra runnable tasks

Model Context Protocol (MCP) SSE client tool
- SSE URL of the MCP server
- Custom headers: could be useful, for example, to add authentication tokens via the Authorization header
- Log requests. Default: false
- Log responses. Default: false
- Connection timeout duration

Model Context Protocol (MCP) Stdio client tool
- MCP client command, as a list of command parts
- Environment variables
- Log events. Default: false

Model Context Protocol (MCP) SSE client tool
- URL of the MCP server
- Custom headers: useful, for example, for adding authentication tokens via the Authorization header
- Log requests. Default: false
- Log responses. Default: false
- Connection timeout duration

WebSearch tool for Tavily Search
- Tavily API key: you can obtain one from the Tavily website
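For contrast with the content retriever example earlier, the sketch below wires Tavily in as a tool instead, so the model searches only when it judges a lookup necessary. The type path io.kestra.plugin.ai.tool.TavilyWebSearch is an assumption mirroring the retriever's package naming; check the plugin catalog for the exact identifier:

```yaml
- id: chat_with_optional_search
  type: io.kestra.plugin.ai.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    modelName: gemini-2.5-flash
    apiKey: "{{ kv('GEMINI_API_KEY') }}"
  embeddings:
    type: io.kestra.plugin.ai.embeddings.KestraKVStore
  tools:
    # Unlike a content retriever, a tool is invoked only when the LLM decides to.
    - type: io.kestra.plugin.ai.tool.TavilyWebSearch   # assumed type path
      apiKey: "{{ kv('TAVILY_API_KEY') }}"
  systemMessage: You are a helpful assistant that can answer questions about Kestra.
  prompt: Which features were released in Kestra 0.24?
```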
Outputs

finishReason (string)
Possible values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER

intermediateResponses (array)
Intermediate responses (io.kestra.plugin.ai.domain.AIOutput-AIResponse). Each contains:
- Generated text completion: the result of the text completion
- Finish reason: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, or OTHER
- Response identifier
- Token usage (io.kestra.plugin.ai.domain.TokenUsage)
- Tool execution requests (io.kestra.plugin.ai.domain.AIOutput-AIResponse-ToolExecutionRequest): tool request arguments, tool execution request identifier, tool name

jsonOutput (object)
LLM output for JSON response format: the result of the LLM completion when the response format is of type JSON, null otherwise.

outputFiles (object)
URIs of the generated files in Kestra's internal storage

requestDuration (integer)
Request duration in milliseconds

sources (array)
Content sources used during RAG retrieval (io.kestra.plugin.ai.domain.AIOutput-ContentSource):
- Extracted text segment: a snippet of text relevant to the user's query, typically a sentence, paragraph, or other discrete unit of text
- Source metadata: key-value pairs providing context about the origin of the content, such as URLs, document titles, or other relevant attributes

textOutput (string)
LLM output for TEXT response format: the result of the LLM completion when the response format is of type TEXT (the default), null otherwise.

thinking (string)
Model's thinking output: contains the model's internal reasoning ('thinking') text, if the model supports it and returnThinking is enabled. This may include intermediate reasoning steps, such as chain-of-thought explanations. Null if thinking is not supported, not enabled, or not returned by the model.

tokenUsage
Token usage (io.kestra.plugin.ai.domain.TokenUsage)

toolExecutions (array)
Tool executions (io.kestra.plugin.ai.domain.AIOutput-ToolExecution)
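Downstream tasks can reference these outputs with ordinary expressions. A small sketch, assuming a preceding RAG task with the id chat_with_rag:

```yaml
- id: show_answer
  type: io.kestra.plugin.core.log.Log
  message: |
    Answer: {{ outputs.chat_with_rag.textOutput }}
    Finish reason: {{ outputs.chat_with_rag.finishReason }}
    Request took {{ outputs.chat_with_rag.requestDuration }} ms
```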
Metrics

input.token.count (counter, unit: token)
Large Language Model (LLM) input token count

output.token.count (counter, unit: token)
Large Language Model (LLM) output token count

total.token.count (counter, unit: token)
Large Language Model (LLM) total token count