
# ChatCompletion

Chat completion with AI models.
Handles chat interactions using AI models (OpenAI, Ollama, Gemini, Anthropic, MistralAI, Deepseek).
type: "io.kestra.plugin.ai.completion.ChatCompletion"Examples
Chat completion with Google Gemini:

```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: SYSTEM
        content: You are a helpful assistant. Answer concisely and avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
Chat completion with Google Gemini and a WebSearch tool:

```yaml
id: chat_completion_with_tools
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion_with_tools
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: SYSTEM
        content: You are a helpful assistant. Answer concisely and avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
    tools:
      - type: io.kestra.plugin.ai.tool.GoogleCustomWebSearch
        apiKey: "{{ kv('GOOGLE_SEARCH_API_KEY') }}"
        csi: "{{ kv('GOOGLE_SEARCH_CSI') }}"
```
Extract structured output with a JSON schema. Not all model providers support JSON schemas; in those cases, specify the schema in the prompt instead:

```yaml
id: structured-output
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: |
      Hello, my name is John. I was born on January 1, 2000.

tasks:
  - id: ai-agent
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    configuration:
      responseFormat:
        type: JSON
        jsonSchema:
          type: object
          properties:
            name:
              type: string
            birth:
              type: string
    messages:
      - type: USER
        content: "{{ inputs.prompt }}"
```
## Properties
messages (required, array of io.kestra.plugin.ai.domain.ChatMessage)
Chat messages
The list of chat messages for the current conversation. There can be only one system message, and the last message must be a user message. Each message type is one of SYSTEM, AI, or USER.

provider (required, non-dynamic)
Language model provider. Supported providers:

- Amazon Bedrock (embedding model type: COHERE (default) or TITAN)
- Anthropic AI
- Azure OpenAI
- DashScope (Qwen) from Alibaba Cloud (default base URL: https://dashscope-intl.aliyuncs.com/api/v1)
- Deepseek (default base URL: https://api.deepseek.com/v1)
- GitHub Models
- Google Gemini
- Google VertexAI
- HuggingFace (default base URL: https://router.huggingface.co/v1)
- LocalAI
- Mistral AI
- OciGenAI
- Ollama
- OpenAI (default base URL: https://api.openai.com/v1)
- OpenRouter
- Watsonx AI
- WorkersAI
- ZhiPu AI (default base URL: https://open.bigmodel.cn/)

configuration (non-dynamic, default: {})
Chat configuration
Type: io.kestra.plugin.ai.domain.ChatConfiguration. Its response format (io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat) is TEXT (default) or JSON; details are given below.

tools (non-dynamic)
Tools that the LLM may use to augment its response. Available tools:
Call a remote AI agent via the A2A protocol:
- Server URL: the URL of the remote agent's A2A server.

Call an AI Agent as a tool:
- Agent name (default: "tool").
- Agent description: used to instruct the LLM what the tool does.
- Provider: language model provider (same options as the task-level provider listed above).
- Configuration (default: {}): io.kestra.plugin.ai.domain.ChatConfiguration.
Response format (io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat):

JSON Schema (used when type = JSON)
Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):

```yaml
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
```

Note: provider support for strict schema enforcement varies. If unsupported, guide the model toward the expected output structure via the prompt and validate downstream.

Schema description (optional)
Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."

Response format type (TEXT (default) or JSON)
Specifies how the LLM should return output:
- TEXT (default): free-form natural language.
- JSON: structured output validated against a JSON Schema.
Content retrievers
Some content retrievers, like WebSearch, can also be used as tools. However, when configured as content retrievers they are always invoked, whereas tools are invoked only when the LLM decides to use them; the sketch below illustrates the retriever side.
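A retriever attached this way runs on every execution and its results are injected into the model's context. A minimal sketch, assuming the property name contentRetrievers and the retriever type io.kestra.plugin.ai.retriever.TavilyWebSearch (both inferred from this page's naming conventions; verify against the plugin schema):

```yaml
tasks:
  - id: chat_with_web_context
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: USER
        content: "{{ inputs.prompt }}"
    # Assumed property and type names; always invoked, unlike a tool
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
        maxResults: 3
```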
Embedding store content retriever for RAG (Retrieval Augmented Generation):

Embedding model provider
Provider used to generate embeddings for the query. Must support embedding generation; same provider options as the task-level provider listed above.

Embedding store
The embedding store to retrieve relevant content from. Supported stores:
Chroma Embedding Store:
- The database base URL

Elasticsearch Embedding Store:
- The name of the index to store embeddings

In-memory embedding store that stores data as Kestra KV pairs:
- The name of the KV pair to use (default: "{{ flow.id }}-embedding-store")
MariaDB Embedding Store (see the sketch after this entry):
- Whether to create the table if it doesn't exist
- Database URL of the MariaDB database (e.g., jdbc:mariadb://host:port/dbname)
- Name of the column used as the unique ID in the database
- Name of the table where embeddings will be stored
- Metadata column definitions: list of SQL column definitions for metadata fields (e.g., 'text TEXT', 'source TEXT'). Required only when using COLUMN_PER_KEY storage mode.
- Metadata index definitions: list of SQL index definitions for metadata columns (e.g., 'INDEX idx_text (text)'). Used only with COLUMN_PER_KEY storage mode.
- Metadata storage mode: COLUMN_PER_KEY uses an individual column for each metadata field (requires columnDefinitions and indexes); COMBINED_JSON (default) stores metadata as a JSON object in a single column. If columnDefinitions and indexes are provided, COLUMN_PER_KEY must be used.
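A minimal sketch of a COLUMN_PER_KEY configuration; the type path and every property key below are assumptions inferred from the descriptions above, not taken from the plugin schema:

```yaml
# Hypothetical MariaDB store configuration for an embedding-store retriever
embeddingStore:
  type: io.kestra.plugin.ai.embeddings.MariaDB   # assumed type path
  url: jdbc:mariadb://mariadb.example.com:3306/vectors
  table: document_embeddings
  createTable: true                              # create the table if missing
  metadataStorageMode: COLUMN_PER_KEY            # one column per metadata field
  columnDefinitions:                             # required for COLUMN_PER_KEY
    - "text TEXT"
    - "source TEXT"
  indexes:
    - "INDEX idx_source (source)"
```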
Milvus Embedding Store:
- Token: Milvus auth token. Required if authentication is enabled; omit for local deployments without auth.
- Auto flush on delete: if true, flush after delete operations.
- Auto flush on insert: if true, flush after insert operations. Setting it to false can improve throughput.
- Collection name: target collection, created automatically if it does not exist. Default: "default".
- Consistency level: read/write consistency level. Common values include STRONG, BOUNDED, or EVENTUALLY (depends on client/version).
- Database name: logical database to use. If not provided, the default database is used.
- Host: Milvus host name (used when uri is not set). Default: "localhost".
- ID field name: field name for document IDs. Default depends on collection schema.
- Index type: vector index type (e.g., IVF_FLAT, IVF_SQ8, HNSW). Depends on Milvus deployment and dataset.
- Metadata field name: field name for metadata. Default depends on collection schema.
- Metric type: similarity metric (e.g., L2, IP, COSINE). Should match the embedding provider's expected metric.
- Password
- Port: Milvus port (used when uri is not set). Typical: 19530 (gRPC) or 9091 (HTTP). Default: 19530.
- Retrieve embeddings on search: if true, return stored embeddings along with matches. Default: false.
- Text field name: field name for original text. Default depends on collection schema.
- URI: connection URI. Use either uri or host/port (not both). Examples: "milvus://host:19530" (gRPC, typical) or "http://host:9091" (HTTP).
- Username: required when authentication/TLS is enabled. See https://milvus.io/docs/authenticate.md
- Vector field name: field name for the embedding vector. Must match the index definition and embedding dimensionality.
MongoDB Atlas Embedding Store:
- The host
- The scheme (e.g., mongodb+srv)
- Create the index
- The database
- The metadata field names
- The connection string options
- The password
- The username
PGVector Embedding Store (see the sketch after this entry):
- The database name
- The database password
- The table to store embeddings in
- The database user
- Whether to use an IVFFlat index (default: false). An IVFFlat index divides vectors into lists, then searches a subset of those lists closest to the query vector. It has faster build times and uses less memory than HNSW, but lower query performance (in terms of the speed-recall tradeoff).
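A minimal sketch of a PGVector store; the type path and property keys are assumptions for illustration only:

```yaml
# Hypothetical PGVector store configuration
embeddingStore:
  type: io.kestra.plugin.ai.embeddings.PGVector  # assumed type path
  database: vectors
  user: kestra
  password: "{{ kv('PG_PASSWORD') }}"
  table: document_embeddings
  useIndex: true   # IVFFlat: faster builds and less memory than HNSW,
                   # at some cost in the speed-recall tradeoff
```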
Pinecone Embedding Store:
- The cloud provider
- The index
- The cloud provider region
- The namespace (the default namespace is used if not provided)

Qdrant Embedding Store:
- The API key
- The collection name

Redis Embedding Store:
- The database server host
- The database server port
- The index name (default: "embedding-index")
Tablestore Embedding Store:
- Access key ID: used for authentication with the database
- Access key secret: used for authentication with the database
- The base URL for the Tablestore database endpoint
- Instance name: the name of the Tablestore database instance
- Metadata schema list: optional list of metadata field schemas for the collection
Weaviate Embedding Store:
- API key: Weaviate API key. Omit for local deployments without auth.
- Host: cluster host name without protocol, e.g., "abc123.weaviate.network".
- Avoid duplicates: if true (default), a hash-based ID is derived from each text segment to prevent duplicates; if false, a random ID is used.
- Consistency level: write consistency, one of ONE, QUORUM (default), or ALL.
- gRPC port: port for gRPC if enabled (e.g., 50051).
- Metadata field name: field used to store metadata. Defaults to "_metadata" if not set.
- Metadata keys: the list of metadata keys to store; defaults to an empty list if not provided.
- Object class: Weaviate class to store objects in (must start with an uppercase letter). Defaults to "Default" if not set.
- Port: optional port (e.g., 443 for https, 80 for http). Leave unset to use provider defaults.
- Scheme: cluster scheme, "https" (recommended) or "http".
- Secure gRPC: whether the gRPC connection is secured (TLS).
- Use gRPC for batch inserts: if true, use gRPC for batch inserts. HTTP remains required for search operations.
Maximum number of results to return from the embedding store (default: 3)

Minimum similarity score (default: 0.0)
Only results with a similarity score ≥ minScore are returned. Range: 0.0 to 1.0 inclusive. A complete retriever configuration is sketched below.
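Putting the pieces together, a minimal RAG sketch using the Kestra KV embedding store; the retriever and store type paths and the keys embeddingProvider, embeddingStore, maxResults, and minScore are assumptions to verify against the plugin schema:

```yaml
contentRetrievers:
  - type: io.kestra.plugin.ai.retriever.EmbeddingStoreContentRetriever  # assumed
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-embedding-001    # a model that supports embeddings
    embeddingStore:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore  # assumed
      kvName: "{{ flow.id }}-embedding-store"
    maxResults: 3    # default 3
    minScore: 0.5    # drop weakly similar segments (default 0.0)
```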
Web search content retriever for Google Custom Search:
- Maximum number of results (default: 3)
SQL database content retriever, using the experimental LangChain4j SqlDatabaseContentRetriever. ⚠ IMPORTANT: the database user should have READ-ONLY permissions. A configuration sketch follows this property listing.
- Database password
- Language model provider: same provider options as above, with provider-specific properties:

  Amazon Bedrock:
  - AWS Access Key ID
  - AWS Secret Access Key
  - Embedding model type: COHERE (default) or TITAN

  Anthropic AI:
  - Maximum tokens: the maximum number of tokens the model is allowed to generate in its response

  Azure OpenAI:
  - API endpoint: the Azure OpenAI endpoint in the format https://{resource}.openai.azure.com/
  - Client ID
  - Client secret
  - Tenant ID

  DashScope (Qwen) from Alibaba Cloud:
  - Base URL (default: https://dashscope-intl.aliyuncs.com/api/v1, the Singapore region). If you use a model in the China (Beijing) region, replace it with https://dashscope.aliyuncs.com/api/v1.
  - Timezone: the default value is computed from the system timezone
  - Whether the model uses Internet search results for reference when generating text
  - Repetition penalty: penalizes repetition in a continuous sequence during generation. Increasing it reduces repetition; 1.0 means no penalty. Value range: (0, +inf).

  Deepseek:
  - Base URL (default: https://api.deepseek.com/v1)

  GitHub Models:
  - GitHub token: Personal Access Token (PAT) used to access GitHub Models

  Google Gemini

  Google VertexAI:
  - Endpoint URL
  - Project location
  - Project ID

  HuggingFace:
  - Base URL (default: https://router.huggingface.co/v1)

  LocalAI

  Mistral AI

  OciGenAI:
  - OCID of the OCI compartment containing the model
  - OCI region to connect the client to
  - OCI SDK authentication provider

  Ollama:
  - Model endpoint

  OpenAI:
  - Base URL (default: https://api.openai.com/v1)

  OpenRouter

  Watsonx AI:
  - Project ID

  WorkersAI:
  - Account identifier: unique identifier assigned to an account
  - Base URL: custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways)

  ZhiPu AI:
  - Model name
  - API base URL (default: https://open.bigmodel.cn/)
  - CA PEM certificate content: CA certificate as text, used to verify SSL/TLS connections when using custom endpoints
  - Client PEM certificate content: PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints
  - Maximum retry count per request
  - Maximum number of tokens returned by this request
  - Stop: the model automatically stops generating text when the output is about to contain the specified string or token_id
- Database username

Language model configuration (default: {})
Type: io.kestra.plugin.ai.domain.ChatConfiguration, with the following fields:
Log LLM requests
If true, prompts and configuration sent to the LLM are logged at INFO level.

Log LLM responses
If true, raw responses from the LLM are logged at INFO level.

Maximum output tokens
Maximum number of tokens the model can generate in the completion (response). This limits the length of the output.

Response format
Defines the expected output format. Default is plain text. Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output is returned under the key jsonOutput.

Return thinking
Controls whether to return the model's internal reasoning or 'thinking' text, if available. When enabled, the reasoning content is extracted from the response and made available in the AiMessage object. It does not trigger the thinking process itself; it only affects whether the reasoning output is parsed and returned.

Seed
Optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234). Using the same seed with identical settings produces repeatable outputs.

Temperature
Controls randomness in generation. Typical range is 0.0-1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7-1.0) increase creativity and variability.

Thinking token budget
Maximum number of tokens allocated for internal reasoning, such as generating intermediate thoughts or chain-of-thought sequences, allowing the model to perform multi-step reasoning before producing the final output.

Enable thinking
Enables internal reasoning ('thinking') in supported language models, allowing the model to perform intermediate reasoning steps before producing a final output. This is useful for complex tasks like multi-step problem solving or decision making, but may increase token usage and response time, and only applies to compatible models.

Top-K
Limits sampling to the K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.

Top-P (nucleus sampling)
Selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8-0.95. Lower values make the output more focused; higher values increase diversity. A combined configuration sketch follows.
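Several of these fields combined, as a sketch; the YAML keys (temperature, topK, topP, seed, logRequests, logResponses) are assumed from the field names above rather than taken from the schema:

```yaml
configuration:
  temperature: 0.2    # focused, mostly deterministic output
  topK: 40            # sample from the 40 most likely tokens per step
  topP: 0.9           # nucleus sampling cutoff
  seed: 42            # repeatable outputs with identical settings
  logRequests: true   # log prompts sent to the LLM at INFO level
  logResponses: false
```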
- JDBC driver class name: optional, automatically resolved if not provided
- JDBC connection URL to the target database
- Maximum number of database connections in the pool (default: 2)
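A minimal sketch of the SQL database content retriever; the type path and property keys (url, username, password) are assumptions, and the target database is hypothetical:

```yaml
contentRetrievers:
  - type: io.kestra.plugin.ai.retriever.SqlDatabaseContentRetriever  # assumed
    url: jdbc:postgresql://db.example.com:5432/sales
    username: readonly_user            # READ-ONLY permissions only
    password: "{{ kv('DB_PASSWORD') }}"
    provider:                          # LLM that turns questions into SQL
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
```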
Web search content retriever for Tavily Search:
- API key
- Maximum number of results to return (default: 3)
Maximum sequential tool invocations

Agent name (default: "tool")
Must be set to a value other than the default if you want multiple agents used as tools in the same task.

System message
The system message for the language model.

Tools that the LLM may use to augment its response:
Code execution tool using Judge0:
- RapidAPI key for Judge0; you can obtain it from the RapidAPI website

Model Context Protocol (MCP) Docker client tool:
- Container image
- API version
- Volume binds
- Docker certificate path
- Docker configuration
- Docker context
- Docker host
- Whether Docker should verify TLS certificates
- Whether to log events (default: false)
- Container registry email
- Container registry password
- Container registry URL
- Container registry username

Google Custom Search web tool:
- API key
- Custom search engine ID (cx)
Call a Kestra flow as a tool (see the sketch after this list):
- Description of the flow, if not already provided inside the flow itself. Use it only if you define the flow in the tool definition. The LLM needs a tool description to decide whether to call the tool: if the flow has a description, the tool uses it; otherwise, this property must be set explicitly.
- Flow ID of the flow to call.
- Whether the flow should inherit labels from the execution that triggered it (default: false). By default, labels are not inherited; if set to true, the flow execution inherits all labels from the agent's execution. Any labels passed by the LLM override those defined here.
- Input values to pass to the flow's execution. Any inputs passed by the LLM override those defined here.
- Labels to add to the flow's execution. Any labels passed by the LLM override those defined here.
- Namespace of the flow to call.
- Revision of the flow to call.
- Schedule the flow execution at a later date (date-time). If the LLM sets a scheduleDate, it overrides the one defined here.
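A sketch of exposing a flow as a callable tool; the type path and property keys are assumptions, and the referenced flow is hypothetical:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow   # assumed type path
    namespace: company.ai
    flowId: send_notification                   # hypothetical flow
    description: Send a notification e-mail to the on-call engineer
    inputs:
      channel: email     # overridden if the LLM passes its own inputs
    inheritLabels: false
```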
Call a Kestra runnable task as a tool:
- List of Kestra runnable tasks
Model Context Protocol (MCP) SSE client tool:
- SSE URL of the MCP server
- Custom headers; useful, for example, to add authentication tokens via the Authorization header
- Log requests (default: false)
- Log responses (default: false)
- Connection timeout duration

Model Context Protocol (MCP) Stdio client tool (see the sketch below):
- MCP client command, as a list of command parts
- Environment variables
- Log events (default: false)

Model Context Protocol (MCP) SSE client tool:
- URL of the MCP server
- Custom headers; useful, for example, for adding authentication tokens via the Authorization header
- Log requests (default: false)
- Log responses (default: false)
- Connection timeout duration
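A sketch of attaching an MCP server over stdio; the type path and keys (command, env, logEvents) are assumptions to check against the plugin schema, and the server command is hypothetical:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient        # assumed type path
    command: ["docker", "run", "-i", "--rm", "mcp/fetch"] # hypothetical MCP server
    env:
      LOG_LEVEL: info
    logEvents: false
```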
WebSearch tool for Tavily Search:
- Tavily API key; you can obtain one from the Tavily website
## Outputs
finishReason (string)
One of STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER.

intermediateResponses (array of io.kestra.plugin.ai.domain.AIOutput-AIResponse)
Intermediate responses. Each response contains:
- Generated text completion: the result of the text completion
- Finish reason: one of STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER
- Response identifier
- Token usage (io.kestra.plugin.ai.domain.TokenUsage)
- Tool execution requests (io.kestra.plugin.ai.domain.AIOutput-AIResponse-ToolExecutionRequest): tool request arguments, tool execution request identifier, and tool name
jsonOutput (object)
LLM output for the JSON response format: the result of the LLM completion when the response format type is JSON, null otherwise.
outputFiles (object)
URIs of the generated files in Kestra's internal storage.

requestDuration (integer)
Request duration in milliseconds.
sources (array of io.kestra.plugin.ai.domain.AIOutput-ContentSource)
Content sources used during RAG retrieval. Each source contains:
- Extracted text segment: a snippet of text relevant to the user's query, typically a sentence, paragraph, or other discrete unit of text
- Source metadata: key-value pairs providing context about the origin of the content, such as URLs, document titles, or other relevant attributes
textOutput (string)
LLM output for the TEXT response format: the result of the LLM completion when the response format type is TEXT (the default), null otherwise.

thinking (string)
The model's internal reasoning or 'thinking' text, if the model supports it and returnThinking is enabled. This may include intermediate reasoning steps, such as chain-of-thought explanations. Null if thinking is not supported, not enabled, or not returned by the model.
tokenUsage (io.kestra.plugin.ai.domain.TokenUsage)
Token usage.

toolExecutions (array of io.kestra.plugin.ai.domain.AIOutput-ToolExecution)
Tool executions.
## Metrics

input.token.count (counter, unit: token)
Large Language Model (LLM) input token count.

output.token.count (counter, unit: token)
LLM output token count.

total.token.count (counter, unit: token)
LLM total token count.
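Downstream tasks can reference the outputs above with standard Kestra expressions. An illustrative sketch, assuming the chat_completion task ID from the first example:

```yaml
  - id: print_answer
    type: io.kestra.plugin.core.log.Log
    message: |
      Answer: {{ outputs.chat_completion.textOutput }}
      Finish reason: {{ outputs.chat_completion.finishReason }}
```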