ChatCompletion

```yaml
type: "io.kestra.plugin.openai.ChatCompletion"
```

Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.

Examples

Based on a prompt input, generate a completion response and pass it to a downstream task.

```yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: response
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.completion.choices[0].message.content }}"
```

Based on a prompt input, ask OpenAI to call a function that determines whether a customer's review needs an immediate reply or can wait until later, and that suggests a response.

```yaml
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: >-
              How urgently this customer review needs a reply. Bad reviews
              must be addressed immediately before anyone sees them. Good
              reviews can wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"
```

Properties

apiKey

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The OpenAI API key.

model

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

ID of the model to use, e.g. 'gpt-4'.

clientTimeout

  • Type: integer
  • Dynamic:
  • Required:
  • Default: 10

The maximum number of seconds to wait for a response.

frequencyPenalty

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim. Defaults to null.

functionCall

  • Type: string
  • Dynamic: ✔️
  • Required:

The name of the function OpenAI should generate a call for.

Enter a specific function name, or 'auto' to let the model decide. The default is auto.
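
For example, to force the model to call the `respond_to_review` function from the example above rather than letting it decide (a minimal sketch; the prompt value and trimmed function definition are illustrative):

```yaml
- id: prioritize_response
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-4o
  prompt: "I love your product and would purchase it again!"
  functionCall: respond_to_review  # force this function instead of 'auto'
  functions:
    - name: respond_to_review
      description: Suggests a reply to a customer review.
      parameters:
        - name: response_text
          type: string
          description: The text to post online in response to this review.
          required: true
```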

functions

  • Type: array
  • SubType: PluginChatFunction
  • Dynamic: ✔️
  • Required:

The function call(s) the API can use when generating completions.

logitBias

  • Type: object
  • SubType: integer
  • Dynamic: ✔️
  • Required:

Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
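
Keys are token IDs (not words), and values typically range from -100 to 100; strongly negative values effectively ban a token. A minimal sketch (the token ID below is purely illustrative):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-4o
  prompt: "{{ inputs.prompt }}"
  logitBias:
    "1234": -100  # illustrative token ID; -100 effectively bans it
```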

maxTokens

  • Type:
    • integer
    • string
  • Dynamic: ✔️
  • Required:

The maximum number of tokens to generate in the chat completion.

messages

  • Type: array
  • SubType: ChatMessage
  • Dynamic: ✔️
  • Required:

A list of messages comprising the conversation so far.

Required if prompt is not set.
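
For instance, a system instruction plus a user turn can be passed as a message list instead of a single prompt (a sketch using the same task type as the examples above):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-4o
  messages:
    - role: system
      content: You are a concise technical assistant.
    - role: user
      content: "{{ inputs.prompt }}"
```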

n

  • Type:
    • integer
    • string
  • Dynamic: ✔️
  • Required:

How many chat completion choices to generate for each input message. Defaults to 1.

presencePenalty

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics. Defaults to null.

prompt

  • Type: string
  • Dynamic: ✔️
  • Required:

The prompt to generate completions for. By default, this prompt is sent with the user role.

If not provided, make sure to set the messages property.

stop

  • Type: array
  • SubType: string
  • Dynamic: ✔️
  • Required:

Up to 4 sequences where the API will stop generating further tokens. Defaults to null.
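
For example, to cut the completion off before a fourth list item is generated (a sketch; the prompt and stop sequence are illustrative):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-4o
  prompt: "List three fruits, numbered 1. to 3."
  stop:
    - "4."
```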

temperature

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

What sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. Defaults to 1.

topP

  • Type:
    • number
    • string
  • Dynamic: ✔️
  • Required:

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top topP probability mass. Defaults to 1.

user

  • Type: string
  • Dynamic: ✔️
  • Required:

A unique identifier representing your end-user.

Outputs

choices

  • Type: array
  • SubType: ChatCompletionChoice

A list of generated chat completion choices.

id

  • Type: string
  • Required:

The unique identifier of the chat completion.

model

  • Type: string
  • Required:

The model used to generate the completion.

object

  • Type: string
  • Required:

The object type, e.g. 'chat.completion'.

usage

The token usage for this request (prompt, completion, and total tokens).
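
Like other outputs, token counts can be referenced by downstream tasks, e.g. (a sketch assuming an upstream task with id `completion`):

```yaml
- id: tokens_used
  type: io.kestra.plugin.core.debug.Return
  format: "{{ outputs.completion.usage.total_tokens }}"
```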

Definitions

com.theokanning.openai.completion.chat.ChatFunctionCall

  • arguments
  • name
    • Type: string
    • Dynamic:
    • Required:

io.kestra.plugin.openai.ChatCompletion-PluginChatFunctionParameter

  • description
    • Type: string
    • Dynamic: ✔️
    • Required: ✔️
  • name
    • Type: string
    • Dynamic: ✔️
    • Required: ✔️
  • enumValues
    • Type: array
    • SubType: string
    • Dynamic: ✔️
    • Required:
  • required
    • Type:
      • boolean
      • string
    • Dynamic: ✔️
    • Required:

com.fasterxml.jackson.databind.JsonNode

io.kestra.plugin.openai.ChatCompletion-PluginChatFunction

com.theokanning.openai.completion.chat.ChatCompletionChoice

  • finish_reason
    • Type: string
    • Dynamic:
    • Required:
  • index
    • Type: integer
    • Dynamic:
    • Required:
  • message

com.theokanning.openai.Usage

  • completion_tokens
    • Type: integer
    • Dynamic:
    • Required:
  • prompt_tokens
    • Type: integer
    • Dynamic:
    • Required:
  • total_tokens
    • Type: integer
    • Dynamic:
    • Required:

com.theokanning.openai.completion.chat.ChatMessage

  • content
    • Type: string
    • Dynamic:
    • Required:
  • function_call
  • name
    • Type: string
    • Dynamic:
    • Required:
  • role
    • Type: string
    • Dynamic:
    • Required: