ChatCompletion (Certified)

Create a Retrieval Augmented Generation (RAG) pipeline

yaml
type: "io.kestra.plugin.ai.rag.ChatCompletion"

Chat with your data using Retrieval Augmented Generation (RAG). This flow indexes documents and then uses the RAG ChatCompletion task to interact with your data through natural-language prompts. The flow contrasts prompts sent to the LLM with and without RAG: the RAG chat retrieves embeddings stored in the KV Store and returns a response grounded in your data rather than hallucinating. WARNING: the Kestra KV embedding store is for quick prototyping only, as it stores the embedding vectors in Kestra's KV store and loads them all into memory.

yaml
id: rag
namespace: company.ai

tasks:
  - id: ingest
    type: io.kestra.plugin.ai.rag.IngestDocument
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    drop: true
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-24.md

  - id: parallel
    type: io.kestra.plugin.core.flow.Parallel
    tasks:
      - id: chat_without_rag
        type: io.kestra.plugin.ai.completion.ChatCompletion
        provider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
        messages:
          - type: USER
            content: Which features were released in Kestra 0.24?

      - id: chat_with_rag
        type: io.kestra.plugin.ai.rag.ChatCompletion
        chatProvider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
        embeddingProvider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
          modelName: gemini-embedding-exp-03-07
        embeddings:
          type: io.kestra.plugin.ai.embeddings.KestraKVStore
        systemMessage: You are a helpful assistant that can answer questions about Kestra.
        prompt: Which features were released in Kestra 0.24?

pluginDefaults:
  - type: io.kestra.plugin.ai.provider.GoogleGemini
    values:
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash

RAG chat with a web search content retriever (answers grounded in search results)

yaml
id: rag_with_websearch_content_retriever
namespace: company.ai

tasks:
  - id: chat_with_rag_and_websearch_content_retriever
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
    systemMessage: You are a helpful assistant that can answer questions about Kestra.
    prompt: What is the latest release of Kestra?

Store chat memory as a Kestra KV pair

yaml
id: chat_with_memory
namespace: company.ai

inputs:
  - id: first
    type: STRING
    defaults: Hello, my name is John and I'm from Paris

  - id: second
    type: STRING
    defaults: What's my name and where do I live?

tasks:
  - id: first
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    memory:
      type: io.kestra.plugin.ai.memory.KestraKVStore
      ttl: PT1M
    systemMessage: You are a helpful assistant, answer concisely
    prompt: "{{inputs.first}}"

  - id: second
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
    embeddingProvider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
    embeddings:
      type: io.kestra.plugin.ai.embeddings.KestraKVStore
    memory:
      type: io.kestra.plugin.ai.memory.KestraKVStore
    systemMessage: You are a helpful assistant, answer concisely
    prompt: "{{inputs.second}}"

pluginDefaults:
  - type: io.kestra.plugin.ai.provider.GoogleGemini
    values:
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash

Classify recent Kestra releases into MINOR or PATCH using a JSON schema. Note: not all LLMs support structured outputs, or they may not support them when combined with tools like web search. This example uses Mistral, which supports structured output with content retrievers.

yaml
id: chat_with_structured_output
namespace: company.ai

tasks:
  - id: categorize_releases
    type: io.kestra.plugin.ai.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.ai.provider.MistralAI
      apiKey: "{{ kv('MISTRAL_API_KEY') }}"
      modelName: open-mistral-7b

    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
        maxResults: 8

    chatConfiguration:
      responseFormat:
        type: JSON
        jsonSchema:
          type: object
          required: ["releases"]
          properties:
            releases:
              type: array
              minItems: 1
              items:
                type: object
                additionalProperties: false
                required: ["version", "date", "semver"]
                properties:
                  version:
                    type: string
                    description: "Release tag, e.g., 0.24.0"
                  date:
                    type: string
                    description: "Release date"
                  semver:
                    type: string
                    enum: ["MINOR", "PATCH"]
                  summary:
                    type: string
                    description: "Short plain-text summary (optional)"

    systemMessage: |
      You are a release analyst. Use the Tavily web retriever to find recent Kestra releases.
      Determine each release's SemVer category:
        - MINOR: new features, no major breaking changes (y in x.Y.z)
        - PATCH: bug fixes/patches only (z in x.y.Z)
      Return ONLY valid JSON matching the schema. No prose, no extra keys.

    prompt: |
      Find most recent Kestra releases (within the last ~6 months).
      Output their version, release date, semver category, and a one-line summary.
Properties

Chat model provider

Definitions (one block per supported provider type)

accessKeyId (string, required)
modelName (string, required)
secretAccessKey (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)
modelType (string), default COHERE; possible values: COHERE, TITAN

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)
maxTokens (integer or string)

endpoint (string, required)
modelName (string, required)
type (object, required)
apiKey (string)
baseUrl (string)
caPem (string)
clientId (string)
clientPem (string)
clientSecret (string)
serviceVersion (string)
tenantId (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string), default https://dashscope-intl.aliyuncs.com/api/v1
caPem (string)
clientPem (string)
enableSearch (boolean or string)
maxTokens (integer or string)
repetitionPenalty (number or string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string), default https://api.deepseek.com/v1
caPem (string)
clientPem (string)

gitHubToken (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

endpoint (string, required)
location (string, required)
modelName (string, required)
project (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string), default https://router.huggingface.co/v1
caPem (string)
clientPem (string)

baseUrl (string, required)
modelName (string, required)
type (object, required)
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

compartmentId (string, required)
modelName (string, required)
region (string, required)
type (object, required)
authProvider (string)
baseUrl (string)
caPem (string)
clientPem (string)

endpoint (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string), default https://api.openai.com/v1
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

accountId (string, required)
apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string)
caPem (string)
clientPem (string)

apiKey (string, required)
modelName (string, required)
type (object, required)
baseUrl (string), default https://open.bigmodel.cn/
caPem (string)
clientPem (string)
maxRetries (integer or string)
maxToken (integer or string)
stops (array of string)

User prompt

The user input for this run. May be templated from flow inputs.

Default: {}

Chat configuration

Definitions

logRequests (boolean or string)
logResponses (boolean or string)
maxToken (integer or string)
responseFormat:
  jsonSchema (object)
  jsonSchemaDescription (string)
  type (string), default TEXT; possible values: TEXT, JSON
returnThinking (boolean or string)
seed (integer or string)
temperature (number or string)
thinkingBudgetTokens (integer or string)
thinkingEnabled (boolean or string)
topK (integer or string)
topP (number or string)
Content retriever configuration

Default: { "maxResults": 3, "minScore": 0 }

Definitions

maxResults (integer), default 3
Maximum results to return from the embedding store.

minScore (number), default 0
Minimum similarity score (0-1 inclusive). Only results with a score ≥ minScore are returned.
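As a sketch, these retrieval knobs could be tuned on the RAG task like this (the contentRetrieverConfiguration property name is inferred from this section's title; verify it against the plugin schema before use):

```yaml
- id: chat_with_tuned_retrieval
  type: io.kestra.plugin.ai.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.ai.provider.GoogleGemini
    apiKey: "{{ kv('GEMINI_API_KEY') }}"
    modelName: gemini-2.5-flash
  embeddings:
    type: io.kestra.plugin.ai.embeddings.KestraKVStore
  contentRetrieverConfiguration:   # assumed property name
    maxResults: 5    # return up to 5 matching segments instead of the default 3
    minScore: 0.6    # drop weakly similar segments
  prompt: Which features were released in Kestra 0.24?
```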

Additional content retrievers

Some content retrievers, such as WebSearch, can also be used as tools. When configured as content retrievers they are always called, whereas tools are only invoked when the LLM decides to use them.

Definitions

apiKey (string, required)
csi (string, required)
type (object, required)
maxResults (integer or string), default 3

databaseType (object, required)
password (string, required)
provider (required)
The retriever's language model provider; the provider definitions are identical to those listed under "Chat model provider" above.
type (object, required)
username (string, required)
configuration, default {}
Same fields as the chat configuration above: logRequests, logResponses, maxToken, responseFormat (jsonSchema, jsonSchemaDescription, type with default TEXT and possible values TEXT, JSON), returnThinking, seed, temperature, thinkingBudgetTokens, thinkingEnabled, topK, topP.
driver (string)
jdbcUrl (string)
maxPoolSize (integer or string), default 2

apiKey (string, required)
type (object, required)
maxResults (integer or string), default 3

Embedding model provider

Optional. If not set, the embedding model is created from chatProvider. Ensure the chosen chat provider supports embeddings.
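For example, the flows above pair a Gemini chat model with a dedicated Gemini embedding model:

```yaml
chatProvider:
  type: io.kestra.plugin.ai.provider.GoogleGemini
  apiKey: "{{ kv('GEMINI_API_KEY') }}"
  modelName: gemini-2.5-flash
embeddingProvider:
  type: io.kestra.plugin.ai.provider.GoogleGemini
  apiKey: "{{ kv('GEMINI_API_KEY') }}"
  modelName: gemini-embedding-exp-03-07   # embedding-specific model
```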

Definitions

The embedding provider accepts the same set of provider types and properties as listed under "Chat model provider" above.

Embedding store

Optional when at least one entry is provided in contentRetrievers.

Definitions

baseUrl (string, required)
The database base URL

collectionName (string, required)
type (object, required)

connection (required)
hosts (array of string, required, min items 1)
List of HTTP Elasticsearch servers. Must be a URI like https://example.com:9200 with scheme and port.
basicAuth
Basic authorization configuration:
  password (string): Basic authorization password
  username (string): Basic authorization username
headers (array of string)
List of HTTP headers to be sent with every request. Each item is a key: value string, e.g., Authorization: Token XYZ
pathPrefix (string)
Path prefix for all HTTP requests. If set to /my/path, each client request becomes /my/path/ + endpoint. Useful when Elasticsearch is behind a proxy providing a base path; do not use otherwise.
strictDeprecationMode (boolean or string)
Treat responses with deprecation warnings as failures.
trustAllSsl (boolean or string)
Trust all SSL CA certificates. Use this if the server uses a self-signed SSL certificate.

indexName (string, required)
The name of the index to store embeddings.

type (object, required)

type (object, required)
kvName (string), default {{flow.id}}-embedding-store
The name of the KV pair to use.

createTable (boolean or string, required)
Whether to create the table if it doesn't exist.

databaseUrl (string, required)
Database URL of the MariaDB database (e.g., jdbc:mariadb://host:port/dbname)

fieldName (string, required)
Name of the column used as the unique ID in the database.

password (string, required)
tableName (string, required)
Name of the table where embeddings will be stored.

type (object, required)
username (string, required)

columnDefinitions (array of string)
Metadata Column Definitions: list of SQL column definitions for metadata fields (e.g., 'text TEXT', 'source TEXT'). Required only when using COLUMN_PER_KEY storage mode.

indexes (array of string)
Metadata Index Definitions: list of SQL index definitions for metadata columns (e.g., 'INDEX idx_text (text)'). Used only with COLUMN_PER_KEY storage mode.

metadataStorageMode (string)
Metadata Storage Mode. Determines how metadata is stored:
  • COLUMN_PER_KEY: use individual columns for each metadata field (requires columnDefinitions and indexes).
  • COMBINED_JSON (default): store metadata as a JSON object in a single column. If columnDefinitions and indexes are provided, COLUMN_PER_KEY must be used.
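A sketch of a MariaDB store using COLUMN_PER_KEY mode; the type class name io.kestra.plugin.ai.embeddings.MariaDB is an assumption, so verify it against the plugin catalog:

```yaml
embeddings:
  type: io.kestra.plugin.ai.embeddings.MariaDB   # assumed type name
  databaseUrl: jdbc:mariadb://mariadb:3306/embeddings
  username: kestra
  password: "{{ kv('MARIADB_PASSWORD') }}"
  tableName: doc_embeddings
  fieldName: id
  createTable: true
  metadataStorageMode: COLUMN_PER_KEY   # one column per metadata key
  columnDefinitions:
    - "text TEXT"
    - "source TEXT"
  indexes:
    - "INDEX idx_source (source)"
```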

token (string, required)
Token: Milvus auth token. Required if authentication is enabled; omit for local deployments without auth.

type (object, required)

autoFlushOnDelete (boolean or string)
Auto flush on delete: if true, flush after delete operations.

autoFlushOnInsert (boolean or string)
Auto flush on insert: if true, flush after insert operations. Setting it to false can improve throughput.

collectionName (string)
Collection name: target collection. Created automatically if it does not exist. Default: "default".

consistencyLevel (string)
Read/write consistency level. Common values include STRONG, BOUNDED, or EVENTUALLY (depends on client/version).

databaseName (string)
Logical database to use. If not provided, the default database is used.

host (string)
Milvus host name (used when uri is not set). Default: "localhost".

idFieldName (string)
ID field name: field name for document IDs. Default depends on collection schema.

indexType (string)
Index type: vector index type (e.g., IVF_FLAT, IVF_SQ8, HNSW). Depends on Milvus deployment and dataset.

metadataFieldName (string)
Field name for metadata. Default depends on collection schema.

metricType (string)
Metric type: similarity metric (e.g., L2, IP, COSINE). Should match the embedding provider's expected metric.

password (string)

port (integer or string)
Milvus port (used when uri is not set). Typical: 19530 (gRPC) or 9091 (HTTP). Default: 19530.

retrieveEmbeddingsOnSearch (boolean or string)
If true, return stored embeddings along with matches. Default: false.

textFieldName (string)
Field name for original text. Default depends on collection schema.

uri (string)
Connection URI. Use either uri OR host/port (not both). Examples:

  • gRPC (typical): "milvus://host:19530"
  • HTTP: "http://host:9091"

username (string)
Required when authentication/TLS is enabled. See https://milvus.io/docs/authenticate.md

vectorFieldName (string)
Field name for the embedding vector. Must match the index definition and embedding dimensionality.

collectionName (string, required)
host (string, required)
The host

indexName (string, required)
scheme (string, required)
The scheme (e.g., mongodb+srv)

type (object, required)
createIndex (boolean or string)
Create the index

database (string)
The database

metadataFieldNames (array of string)
The metadata field names

options (object)
The connection string options

password (string)
The password

username (string)
The username

database (string, required)
The database name

host (string, required)
password (string, required)
The database password

port (integer or string, required)
table (string, required)
The table to store embeddings in

type (object, required)
user (string, required)
The database user

useIndex (boolean or string), default false
Whether to use an IVFFlat index.

An IVFFlat index divides vectors into lists, and then searches a subset of those lists closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance (in terms of the speed-recall tradeoff).
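As a sketch, a pgvector-backed store could be configured like this; the type class name io.kestra.plugin.ai.embeddings.PGVector is an assumption, so check the plugin catalog for the exact name:

```yaml
embeddings:
  type: io.kestra.plugin.ai.embeddings.PGVector   # assumed type name
  host: postgres
  port: 5432
  database: embeddings
  user: kestra
  password: "{{ kv('POSTGRES_PASSWORD') }}"
  table: doc_embeddings
  useIndex: true   # build an IVFFlat index for faster, approximate search
```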

apiKey (string, required)
cloud (string, required)
The cloud provider

index (string, required)
The index

region (string, required)
The cloud provider region

type (object, required)
namespace (string)
The namespace (default will be used if not provided)

apiKey (string, required)
The API key

collectionName (string, required)
The collection name

host (string, required)
port (integer or string, required)
type (object, required)
host (string, required)
The database server host

port (integer or string, required)
The database server port

type (object, required)
indexName (string), default embedding-index
The index name

accessKeyId (string, required)
Access Key ID: the access key ID used for authentication with the database.

accessKeySecret (string, required)
Access Key Secret: the access key secret used for authentication with the database.

endpoint (string, required)
The base URL for the Tablestore database endpoint.

instanceName (string, required)
Instance Name: the name of the Tablestore database instance.

type (object, required)

metadataSchemaList (array)
Metadata Schema List: optional list of metadata field schemas for the collection. Each item supports:
analyzer (string); possible values: SingleWord, MaxWord, MinWord, Split, Fuzzy
analyzerParameter
dateFormats (array of string)
enableHighlighting (boolean)
enableSortAndAgg (boolean)
fieldName (string)
fieldType (string); possible values: LONG, DOUBLE, BOOLEAN, KEYWORD, TEXT, NESTED, GEO_POINT, DATE, VECTOR, FUZZY_KEYWORD, IP, JSON, UNKNOWN
index (boolean)
indexOptions (string); possible values: DOCS, FREQS, POSITIONS, OFFSETS
isArray (boolean)
jsonType (string); possible values: FLATTEN, NESTED
sourceFieldNames (array of string)
store (boolean)
subFieldSchemas (array): nested field schemas with the same shape as above
vectorOptions:
  dataType (string)
  dimension (integer)
  metricType (string); possible values: EUCLIDEAN, COSINE, DOT_PRODUCT
apiKey (string, required)
Weaviate API key. Omit for local deployments without auth.

host (string, required)
Host: cluster host name without protocol, e.g., "abc123.weaviate.network".

type (object, required)

avoidDups (boolean or string)
Avoid duplicates: if true (default), a hash-based ID is derived from each text segment to prevent duplicates. If false, a random ID is used.

consistencyLevel (string); possible values: ONE, QUORUM, ALL
Write consistency: ONE, QUORUM (default), or ALL.

grpcPort (integer or string)
gRPC port: port for gRPC if enabled (e.g., 50051).

metadataFieldName (string)
Field used to store metadata. Defaults to "_metadata" if not set.

metadataKeys (array of string)
The list of metadata keys to store; if not provided, defaults to an empty list.

objectClass (string)
Weaviate class to store objects in (must start with an uppercase letter). Defaults to "Default" if not set.

port (integer or string)
Optional port (e.g., 443 for https, 80 for http). Leave unset to use provider defaults.

scheme (string)
Cluster scheme: "https" (recommended) or "http".

securedGrpc (boolean or string)
Whether the gRPC connection is secured (TLS).

useGrpcForInserts (boolean or string)
If true, use gRPC for batch inserts. HTTP remains required for search operations.

Chat memory

Stores conversation history and injects it into context on subsequent runs.

Definitions

type (object, required)
drop (string), default NEVER; possible values: NEVER, BEFORE_TASKRUN, AFTER_TASKRUN
memoryId (string), default {{ labels.system.correlationId }}
messages (integer or string), default 10
ttl (string), default PT1H; format: duration

database (string, required)
Database name: the name of the PostgreSQL database.

host (string, required)
PostgreSQL host: the hostname of your PostgreSQL server.

password (string, required)
The password to connect to PostgreSQL.

type (object, required)
user (string, required)
Database user: the username to connect to PostgreSQL.

drop (string), default NEVER; possible values: NEVER, BEFORE_TASKRUN, AFTER_TASKRUN
memoryId (string), default {{ labels.system.correlationId }}
messages (integer or string), default 10
port (integer or string), default 5432
PostgreSQL port: the port of your PostgreSQL server.

tableName (string), default chat_memory
Table name: the name of the table used to store chat memory.

ttl (string), default PT1H; format: duration

host (string, required)
Redis host: the hostname of your Redis server (e.g., localhost or redis-server).

type (object, required)

drop (string), default NEVER; possible values: NEVER, BEFORE_TASKRUN, AFTER_TASKRUN
Drop memory: never, before, or after the agent's task run. By default, the memory ID is the value of the system.correlationId label, meaning that the same memory is used by all tasks of the flow and its subflows. To remove the memory eagerly (before expiration), set drop: AFTER_TASKRUN to erase the memory after the task run, or drop: BEFORE_TASKRUN to drop it before the task run.

memoryId (string), default {{ labels.system.correlationId }}
Memory ID: defaults to the value of the system.correlationId label, so a memory is valid for the entire flow execution, including its subflows.

messages (integer or string), default 10
Maximum number of messages to keep in memory. When memory is full, the oldest messages are removed in FIFO order. The last system message is always kept.

port (integer or string), default 6379
Redis port: the port of your Redis server.

ttl (string), default PT1H; format: duration
Memory duration; defaults to 1 hour.
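Building on the chat-memory example earlier, these settings combine like this (the memoryId value is illustrative; inputs.userId is a hypothetical flow input):

```yaml
memory:
  type: io.kestra.plugin.ai.memory.KestraKVStore
  memoryId: "support-chat-{{ inputs.userId }}"   # override the correlation-id default
  messages: 20          # keep the 20 most recent messages (FIFO)
  ttl: PT30M            # ISO-8601 duration; expire after 30 minutes
  drop: AFTER_TASKRUN   # erase the memory eagerly once the task run completes
```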

System message

Instruction that sets the assistant's role, tone, and constraints for this task.

Optional tools the LLM may call to augment its response

Definitions

description (string, required)
serverUrl (string, required)
Server URL: the URL of the remote agent A2A server.

type (object, required)
name (string), default tool

description (string, required)
Agent description: the description will be used to instruct the LLM what the tool is doing.
provider (required)
The agent's language model provider; the provider definitions are identical to those listed under "Chat model provider" above.
type (object, required)
configuration, default {}
logRequests (boolean or string)
logResponses (boolean or string)
maxToken (integer or string)
responseFormat
jsonSchemaobject

JSON Schema (used when type = JSON)

Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):

yaml
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]

Note: Provider support for strict schema enforcement varies. If unsupported, guide the model about the expected output structure via the prompt and validate downstream.
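For instance, downstream validation can be as simple as parsing the model's output and checking it against the expected shape. This minimal Python sketch (names are illustrative) mirrors the example schema above:

```python
import json

REQUIRED = {"category", "priority"}
ALLOWED = {
    "category": {"ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"},
    "priority": {"LOW", "MEDIUM", "HIGH"},
}

def validate_response(raw: str) -> dict:
    """Parse the LLM's JSON output and verify required keys and enum values."""
    data = json.loads(raw)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"missing required keys: {sorted(missing)}")
    for key, allowed in ALLOWED.items():
        if data[key] not in allowed:
            raise ValueError(f"unexpected value for {key}: {data[key]!r}")
    return data

# A response that matches the schema passes through unchanged
print(validate_response('{"category": "BILLING", "priority": "HIGH"}'))
```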

jsonSchemaDescriptionstring

Schema description (optional)

Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."

type (string)
Default: TEXT
Possible values: TEXT, JSON

Response format type

Specifies how the LLM should return output. Allowed values:

  • TEXT (default): free-form natural language.
  • JSON: structured output validated against a JSON Schema.
returnThinking (boolean or string)
seed (integer or string)
temperature (number or string)
thinkingBudgetTokens (integer or string)
thinkingEnabled (boolean or string)
topK (integer or string)
topP (number or string)
contentRetrievers

Content retrievers

Some content retrievers, like WebSearch, can also be used as tools. However, when configured as content retrievers, they will always be used, whereas tools are only invoked when the LLM decides to use them.

apiKey*Requiredstring
csi*Requiredstring
type*Requiredobject
maxResultsintegerstring
Default3

Maximum number of results

databaseType*Requiredobject
password*Requiredstring

Database password

provider (required)

Language model provider

accessKeyId: string (required)

AWS Access Key ID

modelName: string (required)
secretAccessKey: string (required)

AWS Secret Access Key

type: object (required)
baseUrl: string
caPem: string
clientPem: string
modelType: string (default: COHERE; possible values: COHERE, TITAN)

Amazon Bedrock Embedding Model Type

apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
maxTokens: integer or string

Maximum Tokens

Specifies the maximum number of tokens that the model is allowed to generate in its response.

endpoint: string (required)

API endpoint

The Azure OpenAI endpoint in the format: https://{resource}.openai.azure.com/

modelName: string (required)
type: object (required)
apiKey: string
baseUrl: string
caPem: string
clientId: string

Client ID

clientPem: string
clientSecret: string

Client secret

serviceVersion: string
tenantId: string

Tenant ID

apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string (default: https://dashscope-intl.aliyuncs.com/api/v1)

If you use a model in the China (Beijing) region, replace the URL with https://dashscope.aliyuncs.com/api/v1; otherwise use the Singapore region URL https://dashscope-intl.aliyuncs.com/api/v1. The default value is computed based on the system timezone.

caPem: string
clientPem: string
enableSearch: boolean or string

Whether the model uses Internet search results for reference when generating text.

maxTokens: integer or string
repetitionPenalty: number or string

Penalty for repetition in a continuous sequence during model generation

Increasing repetitionPenalty reduces repetition in model generation; 1.0 means no penalty. Value range: (0, +inf).

apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string (default: https://api.deepseek.com/v1)
caPem: string
clientPem: string
gitHubToken: string (required)

GitHub Token

Personal Access Token (PAT) used to access GitHub Models.

modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
endpoint: string (required)

Endpoint URL

location: string (required)

Project location

modelName: string (required)
project: string (required)

Project ID

type: object (required)
baseUrl: string
caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string (default: https://router.huggingface.co/v1)
caPem: string
clientPem: string
baseUrl: string (required)
modelName: string (required)
type: object (required)
caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
compartmentId: string (required)

OCID of OCI Compartment with the model

modelName: string (required)
region: string (required)

OCI Region to connect the client to

type: object (required)
authProvider: string

OCI SDK Authentication provider

baseUrl: string
caPem: string
clientPem: string
endpoint: string (required)

Model endpoint

modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string (default: https://api.openai.com/v1)
caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string
caPem: string
clientPem: string
accountId: string (required)

Account Identifier

Unique identifier assigned to an account

apiKey: string (required)
modelName: string (required)
type: object (required)
baseUrl: string

Base URL

Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
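For instance, a minimal sketch of overriding the default endpoint; the gateway URL below is a placeholder:

```yaml
# Sketch: routing an OpenAI-compatible provider through a local gateway.
# The URL is a placeholder; any OpenAI-compatible proxy or mock works.
provider:
  type: io.kestra.plugin.ai.provider.OpenAI
  modelName: gpt-4o-mini
  apiKey: "{{ kv('OPENAI_API_KEY') }}"
  baseUrl: http://localhost:8080/v1
```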

caPem: string
clientPem: string
apiKey: string (required)
modelName: string (required)

Model name

type: object (required)
baseUrl: string (default: https://open.bigmodel.cn/)

API base URL

The base URL for ZhiPu API (defaults to https://open.bigmodel.cn/)

caPem: string

CA PEM certificate content

CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.

clientPem: string

Client PEM certificate content

PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.

maxRetries: integer or string

The maximum retry times to request

maxToken: integer or string

The maximum number of tokens returned by this request

stops: array of string

With the stop parameter, the model automatically stops generating text when the output is about to contain the specified string or token ID.

type: object (required)
username: string (required)

Database username

configuration (default: {})

Language model configuration

logRequests: boolean or string

Log LLM requests

If true, prompts and configuration sent to the LLM will be logged at INFO level.

logResponses: boolean or string

Log LLM responses

If true, raw responses from the LLM will be logged at INFO level.

maxToken: integer or string

Maximum number of tokens the model can generate in the completion (response). This limits the length of the output.

responseFormat

Response format

Defines the expected output format. Default is plain text. Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output will be returned under the key jsonOutput.

returnThinking: boolean or string

Return Thinking

Controls whether to return the model's internal reasoning or 'thinking' text, if available. When enabled, the reasoning content is extracted from the response and made available in the AiMessage object. It does not trigger the thinking process itself; it only affects whether the output is parsed and returned.

seed: integer or string

Seed

Optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234). Using the same seed with identical settings produces repeatable outputs.

temperature: number or string

Temperature

Controls randomness in generation. Typical range is 0.0–1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7–1.0) increase creativity and variability.

thinkingBudgetTokens: integer or string

Thinking Token Budget

Specifies the maximum number of tokens allocated as a budget for internal reasoning processes, such as generating intermediate thoughts or chain-of-thought sequences, allowing the model to perform multi-step reasoning before producing the final output.

thinkingEnabled: boolean or string

Enable Thinking

Enables internal reasoning ('thinking') in supported language models, allowing the model to perform intermediate reasoning steps before producing a final output; this is useful for complex tasks like multi-step problem solving or decision making, but may increase token usage and response time, and is only applicable to compatible models.

topK: integer or string

Top-K

Limits sampling to the top K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.

topP: number or string

Top-P (nucleus sampling)

Selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8–0.95. Lower values make the output more focused, higher values increase diversity.
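Taken together, the sampling knobs above can be combined in a single provider configuration; a minimal sketch (values are illustrative, not recommendations):

```yaml
# Sketch: illustrative sampling configuration for a chat provider.
configuration:
  temperature: 0.2   # low temperature: focused, near-deterministic output
  topP: 0.9          # nucleus sampling cutoff
  topK: 40           # sample only from the 40 most likely tokens
  seed: 42           # same seed + same settings => repeatable outputs
  maxToken: 1024     # cap the completion length
```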

driver: string

Optional JDBC driver class name – automatically resolved if not provided.

jdbcUrl: string

JDBC connection URL to the target database

maxPoolSize: integer or string (default: 2)

Maximum number of database connections in the pool
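The JDBC properties above can be sketched as an embedding-store block; the store type name here is hypothetical and the connection details are placeholders:

```yaml
# Hypothetical sketch: a JDBC-backed embedding store (type name assumed).
embeddings:
  type: io.kestra.plugin.ai.embeddings.PGVector   # assumed type name
  jdbcUrl: jdbc:postgresql://db.example.com:5432/vectors
  username: "{{ kv('DB_USER') }}"
  password: "{{ kv('DB_PASSWORD') }}"
  maxPoolSize: 2   # default pool size per this reference
```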

apiKey: string (required)

API Key

type: object (required)
maxResults: integer or string (default: 3)

Maximum number of results to return

maxSequentialToolsInvocations: integer or string

Maximum sequential tools invocations

name: string (default: tool)

Agent name

Set it to a value other than the default when multiple agents are used as tools in the same task.

systemMessage: string

System message

The system message for the language model

tools

Tools that the LLM may use to augment its response

apiKey: string (required)

RapidAPI key for Judge0

You can obtain it from the RapidAPI website.
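As a hedged sketch, a Judge0-backed code-execution tool might be attached like this; the tool type name is assumed, not confirmed by this reference:

```yaml
# Hypothetical sketch: a Judge0 code-execution tool (type name assumed).
tools:
  - type: io.kestra.plugin.ai.tool.CodeExecution   # assumed type name
    apiKey: "{{ kv('RAPIDAPI_KEY') }}"   # RapidAPI key for Judge0
```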

type: object (required)
image: string (required)

Container image

type: object (required)
apiVersion: string

API version

binds: array of string

Volume binds

command: array of string
dockerCertPath: string

Docker certificate path

dockerConfig: string

Docker configuration

dockerContext: string

Docker context

dockerHost: string

Docker host

dockerTlsVerify: boolean or string

Whether Docker should verify TLS certificates

env: object of string
logEvents: boolean or string (default: false)

Whether to log events

registryEmail: string

Container registry email

registryPassword: string

Container registry password

registryUrl: string

Container registry URL

registryUsername: string

Container registry username

apiKey: string (required)

API key

csi: string (required)

Custom search engine ID (cx)

type: object (required)
type: object (required)
description: string

Description of the flow if not already provided inside the flow itself

Use it only if you define the flow in the tool definition. The LLM needs a tool description to identify whether to call it. If the flow has a description, the tool will use it. Otherwise, the description property must be explicitly defined.
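A minimal sketch of exposing another flow as a callable tool; the tool type name is assumed, and the namespace, flow ID, and inputs are placeholders:

```yaml
# Hypothetical sketch: letting the LLM trigger a subflow as a tool.
tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow   # assumed type name
    namespace: company.ai
    flowId: send_report
    description: Send a summary report by email  # needed if the flow has no description
    inputs:
      recipient: team@example.com   # overridden by LLM-provided inputs, if any
    inheritLabels: true
```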

flowId: string

Flow ID of the flow that should be called

inheritLabels: boolean or string (default: false)

Whether the flow should inherit labels from this execution that triggered it

By default, labels are not inherited. If you set this option to true, the flow execution will inherit all labels from the agent's execution. Any labels passed by the LLM will override those defined here.

inputs: object

Input values that should be passed to flow's execution

Any inputs passed by the LLM will override those defined here.

labels: array of object

Labels that should be added to the flow's execution

Any labels passed by the LLM will override those defined here.

namespace: string

Namespace of the flow that should be called

revision: integer or string

Revision of the flow that should be called

scheduleDate: string (format: date-time)

Schedule the flow execution at a later date

If the LLM sets a scheduleDate, it will override the one defined here.

tasks: array (required)

List of Kestra runnable tasks

type: object (required)
sseUrl: string (required)

SSE URL of the MCP server

type: object (required)
headers: object of string

Useful, for example, for adding authentication tokens via the Authorization header.

logRequests: boolean or string (default: false)
logResponses: boolean or string (default: false)
timeout: string (format: duration)
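An SSE MCP connection using the properties above might look like this sketch; the client type name is assumed, and the URL and token are placeholders:

```yaml
# Hypothetical sketch: an MCP client over SSE with an auth header.
tools:
  - type: io.kestra.plugin.ai.tool.SseMcpClient   # assumed type name
    sseUrl: https://mcp.example.com/sse
    headers:
      Authorization: "Bearer {{ kv('MCP_TOKEN') }}"
    timeout: PT30S
    logRequests: true
```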
command: array of string (required)

MCP client command, as a list of command parts

type: object (required)
env: object of string

Environment variables

logEvents: boolean or string (default: false)

Log events
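A stdio-based MCP client built from the command/env properties above might be sketched as follows; the client type name is assumed, and the server package is only an example:

```yaml
# Hypothetical sketch: launching a local MCP server over stdio.
tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient   # assumed type name
    command: ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    env:
      LOG_LEVEL: info
    logEvents: true
```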

type: object (required)
url: string (required)

URL of the MCP server

headers: object of string

Custom headers

Useful, for example, for adding authentication tokens via the Authorization header.

logRequests: boolean or string (default: false)

Log requests

logResponses: boolean or string (default: false)

Log responses

timeout: string (format: duration)

Connection timeout duration

apiKey: string (required)

Tavily API Key - you can obtain one from the Tavily website

type: object (required)
Possible values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER

Intermediate responses

Definitions
completion: string

Generated text completion

The result of the text completion

finishReason: string
Possible values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER

Finish reason

id: string

Response identifier

requestDuration: integer
tokenUsage
inputTokenCount: integer
outputTokenCount: integer
totalTokenCount: integer
toolExecutionRequests: array

Tool execution requests

arguments: object

Tool request arguments

id: string

Tool execution request identifier

name: string

Tool name

LLM output for JSON response format

The result of the LLM completion for response format of type JSON, null otherwise.

SubType: string

URIs of the generated files in Kestra's internal storage

Request duration in milliseconds

Content sources used during RAG retrieval

Definitions
content: string

Extracted text segment

A snippet of text relevant to the user's query, typically a sentence, paragraph, or other discrete unit of text.

metadata: object

Source metadata

Key-value pairs providing context about the origin of the content, such as URLs, document titles, or other relevant attributes.

LLM output for TEXT response format

The result of the LLM completion for response format of type TEXT (default), null otherwise.

Model's Thinking Output

Contains the model's internal reasoning or 'thinking' text, if the model supports it and 'returnThinking' is enabled. This may include intermediate reasoning steps, such as chain-of-thought explanations. Null if thinking is not supported, not enabled, or not returned by the model.

Token usage

Definitions
inputTokenCount: integer
outputTokenCount: integer
totalTokenCount: integer

Tool executions

Definitions
requestArguments: object
requestId: string
requestName: string
result: string
Unit: token

Large Language Model (LLM) input token count

Unit: token

Large Language Model (LLM) output token count

Unit: token

Large Language Model (LLM) total token count