JSONStructuredExtraction
Extract JSON fields from text
```yaml
type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
```

Examples
Extract a person's details from free text:

```yaml
id: json_structured_extraction
namespace: company.ai

tasks:
  - id: extract_person
    type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
    schemaName: Person
    jsonFields:
      - name
      - city
      - country
      - email
    prompt: |
      From the text below, extract the person's name, city, country, and email.
      If a field is missing, leave it blank.
      Text:
      "Hi! I'm John Smith from Paris, France. You can reach me at john.smith@example.com."
    systemMessage: You extract structured data in JSON format.
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash
```
Extract order details and enforce input/output guardrails:

```yaml
id: json_structured_extraction_order
namespace: company.ai

tasks:
  - id: extract_order
    type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
    schemaName: Order
    jsonFields:
      - order_id
      - customer_name
      - city
      - total_amount
    prompt: |
      Extract the order_id, customer_name, city, and total_amount from the message.
      For the total amount, keep only the number without the currency symbol.
      Return only JSON with the requested keys.
      Message:
      "Order #A-1043 for Jane Doe, shipped to Berlin. Total: 249.99 EUR."
    systemMessage: You are a precise JSON data extraction assistant.
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ secret('OPENAI_API_KEY') }}"
      modelName: gpt-5-mini
    guardrails:
      input:
        - expression: "{{ message.length < 10000 }}"
          message: "Message too long"
      output:
        - expression: "{{ not (response contains 'CONFIDENTIAL') }}"
          message: "Response contains confidential information"
```
Properties

jsonFields (Required, array of string)
provider (Required, non-dynamic)
Definitions
Use Amazon Bedrock models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.AmazonBedrock
      accessKeyId: "{{ secret('AWS_ACCESS_KEY') }}"
      secretAccessKey: "{{ secret('AWS_SECRET_KEY') }}"
      modelName: anthropic.claude-3-sonnet-20240229-v1:0
      thinkingBudgetTokens: 1024
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
accessKeyId (Required, string)
modelName (Required, string)
secretAccessKey (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.AmazonBedrock, io.kestra.plugin.langchain4j.provider.AmazonBedrock
baseUrl (string)
caPem (string)
clientPem (string)
modelType (string)
  Default: COHERE
  Possible Values: COHERE, TITAN

Use Anthropic Claude models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.Anthropic
      apiKey: "{{ secret('ANTHROPIC_API_KEY') }}"
      modelName: claude-3-haiku-20240307
      thinkingEnabled: true
      thinkingBudgetTokens: 1024
      returnThinking: false
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.Anthropic, io.kestra.plugin.langchain4j.provider.Anthropic
baseUrl (string)
caPem (string)
clientPem (string)
maxTokens (integer or string)

Use Azure OpenAI deployments
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.AzureOpenAI
      apiKey: "{{ secret('AZURE_API_KEY') }}"
      endpoint: https://your-resource.openai.azure.com/
      modelName: gpt-4o-mini # the name of your Azure OpenAI deployment
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
endpoint (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.AzureOpenAI, io.kestra.plugin.langchain4j.provider.AzureOpenAI
apiKey (string)
baseUrl (string)
caPem (string)
clientId (string)
clientPem (string)
clientSecret (string)
serviceVersion (string)
tenantId (string)

Use DashScope (Qwen) models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.DashScope
      apiKey: "{{ secret('DASHSCOPE_API_KEY') }}"
      modelName: qwen-plus
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
baseUrl (string)
  Default: https://dashscope-intl.aliyuncs.com/api/v1
caPem (string)
clientPem (string)
enableSearch (boolean or string)
maxTokens (integer or string)
repetitionPenalty (number or string)

Use DeepSeek models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.DeepSeek
      apiKey: "{{ secret('DEEPSEEK_API_KEY') }}"
      modelName: deepseek-chat
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.DeepSeek, io.kestra.plugin.langchain4j.provider.DeepSeek
baseUrl (string)
  Default: https://api.deepseek.com/v1
caPem (string)
clientPem (string)

Use GitHub Models via Azure AI Inference
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GitHubModels
      gitHubToken: "{{ secret('GITHUB_TOKEN') }}"
      modelName: gpt-4o-mini
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely.
      - type: USER
        content: "{{ inputs.prompt }}"
```
gitHubToken (Required, string)
modelName (Required, string)
type (Required, object)
baseUrl (string)
caPem (string)
clientPem (string)

Use Google Gemini models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ secret('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
      thinkingEnabled: true
      thinkingBudgetTokens: 1024
      returnThinking: true
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
Gemini through a private endpoint with mutual TLS:

```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ secret('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
      clientPem: "{{ secret('CLIENT_PEM') }}"
      caPem: "{{ secret('CA_PEM') }}"
      baseUrl: "https://internal.gemini.company.com/endpoint"
      thinkingEnabled: true
      thinkingBudgetTokens: 1024
      returnThinking: true
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.GoogleGemini, io.kestra.plugin.langchain4j.provider.GoogleGemini
baseUrl (string)
caPem (string)
clientPem (string)
embeddingModelConfiguration (io.kestra.plugin.ai.provider.GoogleGemini-EmbeddingModelConfiguration)
  maxRetries (integer or string)
  outputDimensionality (integer or string)
  taskType (string)
    Possible Values: RETRIEVAL_QUERY, RETRIEVAL_DOCUMENT, SEMANTIC_SIMILARITY, CLASSIFICATION, CLUSTERING, QUESTION_ANSWERING, FACT_VERIFICATION
  timeout (string)
  titleMetadataKey (string)

Use Google Vertex AI models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleVertexAI
      endpoint: your-vertex-ai-endpoint
      location: your-google-cloud-region
      project: your-google-cloud-project-id
      modelName: gemini-2.5-flash # modelName is required; replace with your model
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
endpoint (Required, string)
location (Required, string)
modelName (Required, string)
project (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.GoogleVertexAI, io.kestra.plugin.langchain4j.provider.GoogleVertexAI
baseUrl (string)
caPem (string)
clientPem (string)

Use Hugging Face Inference endpoints
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.HuggingFace
      apiKey: "{{ secret('HUGGING_FACE_API_KEY') }}"
      modelName: HuggingFaceTB/SmolLM3-3B:hf-inference
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
baseUrl (string)
  Default: https://router.huggingface.co/v1
caPem (string)
clientPem (string)

Use LocalAI OpenAI-compatible server
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.LocalAI
      modelName: gemma-3-1b-it
      baseUrl: http://localhost:8080/v1
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
baseUrl (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.LocalAI, io.kestra.plugin.langchain4j.provider.LocalAI
caPem (string)
clientPem (string)

Use Mistral models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.MistralAI
      apiKey: "{{ secret('MISTRAL_API_KEY') }}"
      modelName: open-mistral-7b
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.MistralAI, io.kestra.plugin.langchain4j.provider.MistralAI
baseUrl (string)
caPem (string)
clientPem (string)

Use OCI Generative AI models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.OciGenAI
      region: "{{ secret('OCI_GENAI_MODEL_REGION_PROPERTY') }}"
      compartmentId: "{{ secret('OCI_GENAI_COMPARTMENT_ID_PROPERTY') }}"
      authProvider: "{{ secret('OCI_GENAI_CONFIG_PROFILE_PROPERTY') }}"
      modelName: oracle.chat.gpt-3.5
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
compartmentId (Required, string)
modelName (Required, string)
region (Required, string)
type (Required, object)
authProvider (string)
baseUrl (string)
caPem (string)
clientPem (string)

Use local Ollama models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.Ollama
      modelName: llama3
      endpoint: http://localhost:11434
      thinkingEnabled: true
      returnThinking: true
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
endpoint (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.Ollama, io.kestra.plugin.langchain4j.provider.Ollama
baseUrl (string)
caPem (string)
clientPem (string)

Use OpenAI models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ secret('OPENAI_API_KEY') }}"
      modelName: gpt-5-mini
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.OpenAI, io.kestra.plugin.langchain4j.provider.OpenAI
baseUrl (string)
  Default: https://api.openai.com/v1
caPem (string)
clientPem (string)

Use OpenRouter models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.OpenRouter
      apiKey: "{{ secret('OPENROUTER_API_KEY') }}"
      baseUrl: https://openrouter.ai/api/v1
      modelName: x-ai/grok-beta
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.OpenRouter, io.kestra.plugin.langchain4j.provider.OpenRouter
baseUrl (string)
caPem (string)
clientPem (string)

Use IBM watsonx.ai models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.WatsonxAI
      apiKey: "{{ secret('WATSONX_API_KEY') }}"
      projectId: "{{ secret('WATSONX_PROJECT_ID') }}"
      modelName: ibm/granite-3-3-8b-instruct
      baseUrl: "https://api.eu-de.dataplatform.cloud.ibm.com/wx"
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
projectId (Required, string)
type (Required, object)
baseUrl (string)
caPem (string)
clientPem (string)

Use Cloudflare Workers AI models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.WorkersAI
      accountId: "{{ secret('WORKERS_AI_ACCOUNT_ID') }}"
      apiKey: "{{ secret('WORKERS_AI_API_KEY') }}"
      modelName: "@cf/meta/llama-2-7b-chat-fp16" # quoted: YAML reserves a leading '@'
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
accountId (Required, string)
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
  Possible Values: io.kestra.plugin.ai.provider.WorkersAI, io.kestra.plugin.langchain4j.provider.WorkersAI
baseUrl (string)
caPem (string)
clientPem (string)

Use ZhiPu AI models
Example
```yaml
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.ZhiPuAI
      apiKey: "{{ secret('ZHIPU_API_KEY') }}"
      modelName: glm-4.5-flash
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{ inputs.prompt }}"
```
apiKey (Required, string)
modelName (Required, string)
type (Required, object)
baseUrl (string)
  Default: https://open.bigmodel.cn/
caPem (string)
clientPem (string)
maxRetries (integer or string)
maxToken (integer or string)
stops (array of string)

schemaName (Required, string)

configuration (non-dynamic)
  Default: {}

Definitions
io.kestra.plugin.ai.domain.ChatConfiguration

logRequests (boolean or string)
logResponses (boolean or string)
maxToken (integer or string)
promptCaching (boolean or string)
responseFormat (io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat)
  jsonSchema (object)
  jsonSchemaDescription (string)
  strictJson (boolean or string)
    Default: false
  type (string)
    Default: TEXT
    Possible Values: TEXT, JSON
returnThinking (boolean or string)
seed (integer or string)
temperature (number or string)
thinkingBudgetTokens (integer or string)
thinkingEnabled (boolean or string)
topK (integer or string)
topP (number or string)
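The configuration block can tune model behavior on this task. A minimal sketch using only properties documented above (the values shown are illustrative, not recommendations):

```yaml
id: json_structured_extraction_tuned
namespace: company.ai

tasks:
  - id: extract_person
    type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
    schemaName: Person
    jsonFields:
      - name
      - city
    prompt: "Hi! I'm John Smith from Paris."
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ secret('OPENAI_API_KEY') }}"
      modelName: gpt-5-mini
    configuration:
      temperature: 0.2  # lower temperature for more deterministic extraction
      seed: 42          # fixed seed for reproducibility, where the provider supports it
      logRequests: true # log raw requests for debugging
```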
contentBlocks (array)

Definitions

io.kestra.plugin.ai.domain.ChatMessage-ContentBlock

text (string)
type (string)
  Possible Values: TEXT, IMAGE, PDF
uri (string)
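contentBlocks let the task consume non-text inputs alongside a textual instruction. A sketch extracting fields from a remote PDF; the URL is illustrative, and the exact interplay between prompt and contentBlocks may vary by provider:

```yaml
id: json_extraction_from_pdf
namespace: company.ai

tasks:
  - id: extract_invoice
    type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
    schemaName: Invoice
    jsonFields:
      - invoice_number
      - total_amount
    contentBlocks:
      - type: TEXT
        text: Extract the invoice number and the total amount from the attached document.
      - type: PDF
        uri: https://example.com/invoice.pdf # hypothetical document URL
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash
```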
guardrails (non-dynamic)

Definitions

io.kestra.plugin.ai.domain.Guardrails

input (array of io.kestra.plugin.ai.domain.GuardrailRule)
  expression (Required, string, min length 1)
  message (Required, string, min length 1)
output (array of io.kestra.plugin.ai.domain.GuardrailRule)
  expression (Required, string, min length 1)
  message (Required, string, min length 1)

prompt (string)
systemMessage (string)
  Default: You are a structured JSON extraction assistant. Always respond with valid JSON.

Outputs

extractedJson (string)
finishReason (string)
  Possible Values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER
guardrailViolated (boolean)
  Default: false
guardrailViolationMessage (string)
schemaName (string)
tokenUsage (io.kestra.plugin.ai.domain.TokenUsage)
  inputTokenCount (integer)
  outputTokenCount (integer)
  totalTokenCount (integer)
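Downstream tasks can read these outputs through expressions. A sketch logging the extracted JSON and token usage after an extraction task (the task ids are illustrative):

```yaml
id: extraction_with_logging
namespace: company.ai

tasks:
  - id: extract_person
    type: io.kestra.plugin.ai.completion.JSONStructuredExtraction
    schemaName: Person
    jsonFields:
      - name
      - city
    prompt: "Hi! I'm John Smith from Paris."
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ secret('OPENAI_API_KEY') }}"
      modelName: gpt-5-mini

  - id: log_result
    type: io.kestra.plugin.core.log.Log
    message: |
      Extracted: {{ outputs.extract_person.extractedJson }}
      Finish reason: {{ outputs.extract_person.finishReason }}
      Total tokens: {{ outputs.extract_person.tokenUsage.totalTokenCount }}
```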
Metrics

input.token.count (counter, unit: token)
output.token.count (counter, unit: token)
total.token.count (counter, unit: token)