AIAgent
Certified

This tool lets an LLM call an AI Agent. Make sure to specify a name and a description so the LLM can understand what the tool does and decide whether to call it.

Call an AI Agent as a tool


```yaml
type: "io.kestra.plugin.ai.tool.AIAgent"
```

Call an AI agent as a tool

```yaml
id: ai-agent-with-agent-tools
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: |
      Each flow can produce outputs that can be consumed by other flows. This is a list property, so that your flow can produce as many outputs as you need.
      Each output needs to have an ID (the name of the output), a type (the same types you know from inputs, e.g., STRING, URI, or JSON), and a value, which is the actual output value that will be stored in internal storage and passed to other flows when needed.

tasks:
  - id: ai-agent
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    systemMessage: Summarize the user message, then translate it into French using the provided tool.
    prompt: "{{ inputs.prompt }}"
    tools:
      - type: io.kestra.plugin.ai.tool.AIAgent
        description: Translation expert
        systemMessage: You are an expert in translating text between multiple languages
        provider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
          modelName: gemini-2.5-flash-lite
          apiKey: "{{ kv('GEMINI_API_KEY') }}"
```
Properties
Definitions

accessKeyId (string, required)
modelName (string, required)
secretAccessKey (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
modelType (string)
  Default: COHERE
  Possible values: COHERE, TITAN
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
maxTokens (integer or string)
type (object)

endpoint (string, required)
modelName (string, required)
apiKey (string)
baseUrl (string)
caPem (string)
clientId (string)
clientPem (string)
clientSecret (string)
serviceVersion (string)
tenantId (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://dashscope-intl.aliyuncs.com/api/v1
caPem (string)
clientPem (string)
enableSearch (boolean or string)
maxTokens (integer or string)
repetitionPenalty (number or string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://api.deepseek.com/v1
caPem (string)
clientPem (string)
type (object)

gitHubToken (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

endpoint (string, required)
location (string, required)
modelName (string, required)
project (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://router.huggingface.co/v1
caPem (string)
clientPem (string)
type (object)

baseUrl (string, required)
modelName (string, required)
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

compartmentId (string, required)
modelName (string, required)
region (string, required)
authProvider (string)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

endpoint (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://api.openai.com/v1
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

accountId (string, required)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)

apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://open.bigmodel.cn/
caPem (string)
clientPem (string)
maxRetries (integer or string)
maxToken (integer or string)
stops (array)
  SubType: string
type (object)

configuration
  Default: {}
Definitions

logRequests (boolean or string)
logResponses (boolean or string)
maxToken (integer or string)
responseFormat
  jsonSchema (object)
  jsonSchemaDescription (string)
  type (string)
    Default: TEXT
    Possible values: TEXT, JSON
returnThinking (boolean or string)
seed (integer or string)
temperature (number or string)
thinkingBudgetTokens (integer or string)
thinkingEnabled (boolean or string)
topK (integer or string)
topP (number or string)
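Putting the configuration options above together, here is a hedged sketch of a `configuration` block on the agent task (the values are illustrative placeholders, not recommendations):

```yaml
tasks:
  - id: ai-agent
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    prompt: "{{ inputs.prompt }}"
    configuration:
      temperature: 0.2    # low randomness for focused, repeatable output
      seed: 42            # same seed + identical settings -> repeatable outputs
      topP: 0.9
      maxToken: 1024      # cap the length of the completion
      logRequests: true   # log prompts and configuration at INFO level
      logResponses: true  # log raw LLM responses at INFO level
```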

Content retrievers

Some content retrievers, like WebSearch, can also be used as tools. However, when configured as content retrievers, they will always be used, whereas tools are only invoked when the LLM decides to use them.
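To illustrate the difference, a hypothetical sketch of wiring the same web search both ways (the `contentRetrievers` property and the retriever/tool type names here are assumptions, not confirmed by this page):

```yaml
tasks:
  - id: agent-with-retriever
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    prompt: "What is the latest Kestra release?"
    # As a content retriever: always executed before the model answers
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch  # type name is an assumption
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
    # As a tool instead: only invoked when the LLM decides to call it
    # tools:
    #   - type: io.kestra.plugin.ai.tool.TavilyWebSearch     # type name is an assumption
    #     apiKey: "{{ kv('TAVILY_API_KEY') }}"
```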

Definitions

apiKey (string, required)
csi (string, required)
maxResults (integer or string)
  Default: 3

Maximum number of results

type (object)
databaseType (object, required)
password (string, required)

Database password

provider (required)

Language model provider

accessKeyId (string, required)

AWS Access Key ID

modelName (string, required)
secretAccessKey (string, required)

AWS Secret Access Key

baseUrl (string)
caPem (string)
clientPem (string)
modelType (string)
  Default: COHERE
  Possible values: COHERE, TITAN

Amazon Bedrock Embedding Model Type

type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
maxTokens (integer or string)

Maximum Tokens

Specifies the maximum number of tokens that the model is allowed to generate in its response.

type (object)
endpoint (string, required)

API endpoint

The Azure OpenAI endpoint in the format: https://{resource}.openai.azure.com/

modelName (string, required)
apiKey (string)
baseUrl (string)
caPem (string)
clientId (string)

Client ID

clientPem (string)
clientSecret (string)

Client secret

serviceVersion (string)
tenantId (string)

Tenant ID

type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://dashscope-intl.aliyuncs.com/api/v1

If you use a model in the China (Beijing) region, replace the URL with https://dashscope.aliyuncs.com/api/v1; otherwise use the Singapore region URL, https://dashscope-intl.aliyuncs.com/api/v1. The default value is computed based on the system timezone.

caPem (string)
clientPem (string)
enableSearch (boolean or string)

Whether the model uses Internet search results for reference when generating text

maxTokens (integer or string)
repetitionPenalty (number or string)

Penalizes repetition in a continuous sequence during model generation

Increasing repetition_penalty reduces repetition in model generation; 1.0 means no penalty. Value range: (0, +inf).

type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://api.deepseek.com/v1
caPem (string)
clientPem (string)
type (object)
gitHubToken (string, required)

GitHub Token

Personal Access Token (PAT) used to access GitHub Models.

modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)
endpoint (string, required)

Endpoint URL

location (string, required)

Project location

modelName (string, required)
project (string, required)

Project ID

baseUrl (string)
caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://router.huggingface.co/v1
caPem (string)
clientPem (string)
type (object)
baseUrl (string, required)
modelName (string, required)
caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)
compartmentId (string, required)

OCID of the OCI compartment containing the model

modelName (string, required)
region (string, required)

OCI region to connect the client to

authProvider (string)

OCI SDK authentication provider

baseUrl (string)
caPem (string)
clientPem (string)
type (object)
endpoint (string, required)

Model endpoint

modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
  Default: https://api.openai.com/v1
caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)
baseUrl (string)
caPem (string)
clientPem (string)
type (object)
accountId (string, required)

Account Identifier

Unique identifier assigned to an account

apiKey (string, required)
modelName (string, required)
baseUrl (string)

Base URL

Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).

caPem (string)
clientPem (string)
type (object)
apiKey (string, required)
modelName (string, required)

Model name

baseUrl (string)
  Default: https://open.bigmodel.cn/

API base URL

The base URL for the ZhiPu API (defaults to https://open.bigmodel.cn/).

caPem (string)

CA PEM certificate content

CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.

clientPem (string)

Client PEM certificate content

PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.

maxRetries (integer or string)

Maximum number of request retries

maxToken (integer or string)

Maximum number of tokens returned by this request

stops (array)
  SubType: string

With the stop parameter, the model automatically stops generating text when the output is about to contain the specified string or token_id.

type (object)
username (string, required)

Database username

configuration
  Default: {}

Language model configuration
logRequests (boolean or string)

Log LLM requests

If true, prompts and configuration sent to the LLM will be logged at INFO level.

logResponses (boolean or string)

Log LLM responses

If true, raw responses from the LLM will be logged at INFO level.

maxToken (integer or string)

Maximum number of tokens the model can generate in the completion (response). This limits the length of the output.

responseFormat

Response format

Defines the expected output format. Default is plain text. Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output will be returned under the key jsonOutput.

jsonSchema (object)

JSON Schema (used when type = JSON)

Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):

```yaml
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
```

Note: Provider support for strict schema enforcement varies. If unsupported, guide the model about the expected output structure via the prompt and validate downstream.

jsonSchemaDescription (string)

Schema description (optional)

Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."

type (string)
  Default: TEXT
  Possible values: TEXT, JSON

Response format type

Specifies how the LLM should return output. Allowed values:

  • TEXT (default): free-form natural language.
  • JSON: structured output validated against a JSON Schema.

returnThinking (boolean or string)

Return Thinking

Controls whether to return the model's internal reasoning or 'thinking' text, if available. When enabled, the reasoning content is extracted from the response and made available in the AiMessage object. It does not trigger the thinking process itself; it only affects whether the reasoning output is parsed and returned.

seed (integer or string)

Seed

Optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234). Using the same seed with identical settings produces repeatable outputs.

temperature (number or string)

Temperature

Controls randomness in generation. Typical range is 0.0–1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7–1.0) increase creativity and variability.

thinkingBudgetTokens (integer or string)

Thinking Token Budget

Specifies the maximum number of tokens allocated as a budget for internal reasoning processes, such as generating intermediate thoughts or chain-of-thought sequences, allowing the model to perform multi-step reasoning before producing the final output.

thinkingEnabled (boolean or string)

Enable Thinking

Enables internal reasoning ('thinking') in supported language models, allowing the model to perform intermediate reasoning steps before producing a final output. This is useful for complex tasks like multi-step problem solving or decision making, but may increase token usage and response time, and applies only to compatible models.

topK (integer or string)

Top-K

Limits sampling to the top K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.

topP (number or string)

Top-P (nucleus sampling)

Selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8–0.95. Lower values make the output more focused; higher values increase diversity.

driver (string)

Optional JDBC driver class name; automatically resolved if not provided.

jdbcUrl (string)

JDBC connection URL of the target database

maxPoolSize (integer or string)
  Default: 2

Maximum number of database connections in the pool

type (object)
apiKey (string, required)

API Key

maxResults (integer or string)
  Default: 3

Maximum number of results to return

type (object)

Maximum sequential tool invocations

Default: tool

System message

The system message for the language model

Tools that the LLM may use to augment its response

Definitions
description (string, required)

Agent description

The description will be used to instruct the LLM what the tool does.

serverUrl (string, required)

Server URL

The URL of the remote agent's A2A server

name (string)
  Default: tool

Agent name

It must be set to a different value than the default if you want to use multiple agents as tools in the same task.

type (object)
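Since `name` defaults to `tool`, give each agent a distinct name when registering several as tools in one task. A hypothetical sketch (the tool type name and server URLs are assumptions):

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.A2AAgent   # type name is an assumption
    name: billing-agent                       # must differ from the default "tool"
    description: Answers billing questions
    serverUrl: https://agents.example.com/billing
  - type: io.kestra.plugin.ai.tool.A2AAgent
    name: support-agent
    description: Answers product support questions
    serverUrl: https://agents.example.com/support
```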
apiKey (string, required)

RapidAPI key for Judge0

You can obtain it from the RapidAPI website.

type (object)
image (string, required)

Container image

apiVersion (string)

API version

binds (array)
  SubType: string

Volume binds

command (array)
  SubType: string
dockerCertPath (string)

Docker certificate path

dockerConfig (string)

Docker configuration

dockerContext (string)

Docker context

dockerHost (string)

Docker host

dockerTlsVerify (boolean or string)

Whether Docker should verify TLS certificates

env (object)
  SubType: string
logEvents (boolean or string)
  Default: false

Whether to log events

registryEmail (string)

Container registry email

registryPassword (string)

Container registry password

registryUrl (string)

Container registry URL

registryUsername (string)

Container registry username

type (object)
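A hedged sketch of the Docker-based code execution tool defined above (the tool type name is an assumption; `image`, `dockerHost`, and `env` are taken from the definition, with placeholder values):

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.DockerCodeExecution   # type name is an assumption
    image: python:3.12-slim                # container image that runs the generated code
    dockerHost: unix:///var/run/docker.sock
    env:
      PYTHONUNBUFFERED: "1"
    logEvents: true                        # defaults to false
```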
apiKey (string, required)

API key

csi (string, required)

Custom search engine ID (cx)

type (object)
description (string)

Description of the flow, if not already provided inside the flow itself

Use it only if you define the flow in the tool definition. The LLM needs a tool description to decide whether to call it. If the flow has a description, the tool will use it; otherwise, the description property must be explicitly defined.

flowId (string)

Flow ID of the flow that should be called

inheritLabels (boolean or string)
  Default: false

Whether the flow should inherit labels from the execution that triggered it

By default, labels are not inherited. If you set this option to true, the flow execution will inherit all labels from the agent's execution. Any labels passed by the LLM will override those defined here.

inputs (object)

Input values that should be passed to the flow's execution

Any inputs passed by the LLM will override those defined here.

labels (array of object)

Labels that should be added to the flow's execution

Any labels passed by the LLM will override those defined here.

namespace (string)

Namespace of the flow that should be called

revision (integer or string)

Revision of the flow that should be called

scheduleDate (string)
  Format: date-time

Schedule the flow execution at a later date

If the LLM sets a scheduleDate, it will override the one defined here.

type (object)
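A hedged sketch of exposing a flow as a tool using the properties above (the tool type name, namespace, flow ID, and input names are assumptions):

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow   # type name is an assumption
    description: Sends an alert notification    # needed here only if the flow itself has no description
    namespace: company.alerts
    flowId: send-alert
    inputs:
      channel: "#ops"       # overridden by any inputs the LLM passes
    inheritLabels: true     # flow execution inherits labels from the agent's execution
```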
tasks (array, required)

List of Kestra runnable tasks

type (object)
sseUrl (string, required)

SSE URL of the MCP server

headers (object)
  SubType: string

Useful, for example, for adding authentication tokens via the Authorization header.

logRequests (boolean or string)
  Default: false
logResponses (boolean or string)
  Default: false
timeout (string)
  Format: duration
type (object)
command (array, required)
  SubType: string

MCP client command, as a list of command parts

env (object)
  SubType: string

Environment variables

logEvents (boolean or string)
  Default: false

Log events

type (object)
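A hypothetical sketch of the stdio-based MCP client tool defined above (the tool type name and the MCP server command are assumptions; `command`, `env`, and `logEvents` come from the definition):

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient   # type name is an assumption
    command: ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    env:
      NODE_ENV: production
    logEvents: true   # defaults to false
```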
url (string, required)

URL of the MCP server

headers (object)
  SubType: string

Custom headers

Useful, for example, for adding authentication tokens via the Authorization header.

logRequests (boolean or string)
  Default: false

Log requests

logResponses (boolean or string)
  Default: false

Log responses

timeout (string)
  Format: duration

Connection timeout duration

type (object)
apiKey (string, required)

Tavily API key; you can obtain one from the Tavily website

type (object)