ChatCompletion
Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.
For more information, refer to the Chat Completions API docs.
type: "io.kestra.plugin.openai.ChatCompletion"
Examples
Based on a prompt input, generate a completion response and pass it to a downstream task.
id: openai_chat
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: log_output
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.completion.choices[0].message.content }}"
Send a prompt to OpenAI's ChatCompletion API.
id: openai_chat
namespace: company.team

tasks:
  - id: prompt
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    prompt: Explain in one sentence why data engineers build data pipelines

  - id: use_output
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.prompt.choices | jq('.[].message.content') | first }}"
Based on a prompt input, ask OpenAI to call a function that determines whether you need to respond to a customer's review immediately or wait until later, and then suggests a response.
id: openai_chat
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: How urgently this customer review needs a reply. Bad reviews must be addressed immediately before anyone sees them. Good reviews can wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"
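In the example above, the model decides on its own whether to call the function. To force a call to a specific function, the task can set the functionCall property to that function's name. A sketch (functionCall defaults to auto):

```yaml
- id: prioritize_response
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  functionCall: respond_to_review   # force a call to this function instead of 'auto'
  messages:
    - role: user
      content: "{{ inputs.prompt }}"
  functions:
    - name: respond_to_review
      description: Determines how urgently a reply is required and suggests response text.
      parameters:
        - name: response_urgency
          type: string
          description: How urgently this customer review needs a reply.
          required: true
          enumValues:
            - reply_immediately
            - reply_later
        - name: response_text
          type: string
          description: The text to post online in response to this review.
          required: true
```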
Properties
apiKey (string, required)
The OpenAI API key.

model (string, required)
ID of the model to use, e.g. 'gpt-4'. See the OpenAI models documentation page for more details.
clientTimeout (integer, non-dynamic)
Default: 10
The maximum number of seconds to wait for a response.

frequencyPenalty (number or string)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Defaults to 0.

functionCall (string)
Default: auto
The name of the function OpenAI should generate a call for. Enter a specific function name, or 'auto' to let the model decide.

logitBias (object)
Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
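As an illustration only, logitBias maps token IDs to a bias value between -100 and 100; the numeric key below is a placeholder, since real token IDs depend on the model's tokenizer:

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  prompt: "{{ inputs.prompt }}"
  logitBias:
    "50256": -100   # placeholder token ID; -100 effectively bans this token
```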
maxTokens (integer or string)
The maximum number of tokens to generate in the chat completion. No limits are set by default.

messages (array)
The list of chat messages (role, content, and optional name) making up the conversation so far. Required if the prompt property is not set.

n (integer or string)
Default: 1
How many chat completion choices to generate for each input message.

presencePenalty (number or string)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Defaults to 0.

prompt (string)
The prompt(s) to generate completions for. By default, this prompt is sent with the user role. If not provided, make sure to set the messages property.
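When a single prompt is not enough, for example to pin a system instruction or carry multi-turn context, set messages instead of prompt. A minimal sketch (the system role follows the standard Chat Completions convention):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  messages:
    - role: system
      content: You are a concise assistant.
    - role: user
      content: "{{ inputs.prompt }}"
```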
stop (array)
Up to 4 sequences where the API will stop generating further tokens. Defaults to null.

temperature (number or string)
Default: 1.0
What sampling temperature to use, between 0 and 2.

topP (number or string)
Default: 1.0
An alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass.

user (string)
A unique identifier representing your end user.
Outputs

choices (array of ChatCompletion-Choice)
The list of generated completion choices, as referenced in the examples above (e.g. choices[0].message.content).

id (string)
Unique ID assigned to this chat completion.

model (string)
The GPT model used.

object (string)
The type of object returned; should be "chat.completion".

usage (CompletionUsage)
The API usage for this request.
Definitions
com.openai.models.completions.CompletionUsage
com.openai.core.JsonField
com.openai.models.chat.completions.ChatCompletion-Choice
com.openai.core.JsonField
io.kestra.plugin.openai.ChatCompletion-ChatMessage
content (string)
name (string)
role (string)
io.kestra.plugin.openai.ChatCompletion-PluginChatFunctionParameter

description (string, required)
A description of the function parameter. Provide as many details as possible to ensure the model returns an accurate parameter.

name (string, required)
The name of the function parameter.

type (string, required)
The OpenAPI data type of the parameter. Valid types are string, number, integer, boolean, array, object.

enumValues (array)
A list of values that the model must choose from when setting this parameter. Optional, but useful for classification problems.

required (boolean or string)
Whether the model is required to provide this parameter. Defaults to false.