ChatCompletion
Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.
For more information, refer to the Chat Completions API docs.
type: "io.kestra.plugin.openai.ChatCompletion"
Based on a prompt input, generate a completion response and pass it to a downstream task.

```yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: response
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.completion.choices[0].message.content }}"
```
Based on a prompt input, ask OpenAI to call a function that determines whether you need to respond to a customer's review immediately or wait until later, and then comes up with a suggested response.

```yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: >-
          Given the customer product review provided as input, determines how
          urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: >-
              How urgently this customer review needs a reply. Bad reviews must
              be addressed immediately before anyone sees them. Good reviews can
              wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"
```
The OpenAI API key.
ID of the model to use, e.g. 'gpt-4'.
See OpenAI's models documentation page for more details.
The maximum number of seconds to wait for a response.
The name of the function OpenAI should generate a call for.
Enter a specific function name, or 'auto' to let the model decide. The default is 'auto'.
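For instance, to force the model to call a specific declared function rather than letting it decide, the task could be configured as follows. This is a hedged sketch: it assumes the property is named `functionCall` as described above, and the `respond_to_review` function is assumed to be declared under `functions` as in the second example (its full definition is omitted here for brevity).

```yaml
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    # Force a call to this declared function instead of the 'auto' default.
    functionCall: respond_to_review
    functions:
      - name: respond_to_review
        # ...function definition as in the example above...
```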
Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
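As an illustrative sketch, assuming the property accepts a map of token IDs to bias values between -100 and 100, mirroring the `logit_bias` parameter of the underlying API (the token ID shown is arbitrary):

```yaml
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"
    logitBias:
      # Strongly discourage this token ID from appearing in the completion.
      "50256": -100
```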
The prompt(s) to generate completions for. By default, this prompt is sent with the user role.
If not provided, make sure to set the messages property.
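For example, the messages property can carry a system message alongside the user message to steer the model's behavior. A minimal sketch, assuming the plugin passes roles through to the API as in the function-calling example above (the system prompt text is illustrative):

```yaml
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      # System message steers the model; user message carries the prompt.
      - role: system
        content: You are a concise assistant. Answer in one sentence.
      - role: user
        content: "{{ inputs.prompt }}"
```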
Up to 4 sequences where the API will stop generating further tokens. Defaults to null.
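A hedged sketch of stop sequences, assuming the property takes a list of strings as in the underlying API's `stop` parameter (the sequences shown are illustrative):

```yaml
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"
    stop:
      # Generation halts as soon as either sequence would be produced.
      - "\n\n"
      - "END"
```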
A unique identifier representing your end-user.
A description of the function parameter.
Provide as many details as possible to ensure the model returns an accurate parameter.
The name of the function parameter.
A list of values that the model must choose from when setting this parameter.
Optional, but useful for classification problems.